Functional Verification
with SystemVerilog
and OVM
Step-by-Step
Functional Verification
with SystemVerilog
and OVM
by
Sasan Iman
SiMantis Inc.
Santa Clara, CA
Spring 2008
Sasan Iman
SiMantis, Inc.
900 Lafayette St. Suite 707
Santa Clara, CA 95050
iman@simantis.com
ISBN-10: 0-9816562-1-8
ISBN-13: 978-0-9816562-1-2
987654321
By now, the metaphor of "the perfect storm" is in danger of becoming a cliche to describe the
forces causing rapid evolution in some aspect of the electronics industry. Nevertheless, the
term is entirely applicable to the current evolution, arguably even a revolution, in func-
tional verification for chip designs. Three converging forces are at work today: complexity,
language, and methodology.
The challenges posed in the verification of today's large, complex chips are well known.
Far too many chips do not ship on first silicon due to functional bugs that should have been
caught before tapeout. Hand-written simulation tests are being almost entirely replaced by
constrained-random verification environments using functional coverage metrics to deter-
mine when to tape out. Specification of assertions, constraints, and coverage points has
become an essential part of the development process.
The SystemVerilog language has been a major driver in the adoption of these advanced
verification techniques. SystemVerilog provides constructs for assertions, constraints, and
coverage along with powerful object-oriented capabilities that foster reusable testbenches
and verification components. The broad vendor support and wide industry adoption of
SystemVerilog have directly led to mainstream use of constrained-random, coverage-driven
verification environments.
However, a language alone cannot guarantee successful verification. SystemVerilog is a
huge language with many ways to accomplish similar tasks, and it doesn't directly address
such essential areas as verification planning, common testbench building blocks, and com-
munication between verification components. Such topics require a comprehensive verifica-
tion methodology to tie together the advanced techniques and the features of the language in
a systematic approach.
Fortunately, the Open Verification Methodology (OVM) recently arrived to address this
critical need. Developed by Cadence Design Systems and Mentor Graphics, the OVM is
completely open (freely downloadable from ovmworld.org) and guaranteed to run on the
simulation products from both companies. The OVM leverages many years of verification
experience from many of the world's experts. It was greeted with enormous enthusiasm by
the industry and is used today on countless chip projects.
Thus, the timing of this book could not be better. It provides thorough coverage of all
three forces at work. The complexity challenge is addressed by timely advice on verification
planning and coherent descriptions of advanced verification techniques. Many aspects of the
SystemVerilog language, including its assertion and testbench constructs, are covered in
detail. Finally, this book embraces the OVM as the guide for verification success, providing
a real-world example deploying this methodology.
Functional verification has never been easy, but it has become an overwhelming prob-
lem for many chip development teams. This book should be a great comfort for both design
and verification engineers. Perhaps, like The Hitchhiker's Guide to the Galaxy, it should
have "DON'T PANIC!" on its cover. So grab a beverage of your choice and curl up in a com-
fortable chair to learn how to get started on your toughest verification problems.
Michael McNamara
Past Chairman of the Verilog Standards Committee.
Former VP of Engineering of Chronologic Simulation (creator of VCS).
Currently Vice President and General Manager, Cadence Design Systems.
Spring 2008
Table of Contents
----------
Part 4: Randomization Engine and Data Modeling ............ 235
Chapter 10: Constrained Random Generation ...................... 237
10.1 Random Generators and Constraint Solvers ....................... 237
10.1.1 Constrained Randomization and Variable Ordering Effects ...... 238
10.1.2 Random Generation Engine .............................. 241
10.2 Randomization in SystemVerilog ................................ 243
10.2.1 Random Variables ..................................... 244
10.2.2 Random Dynamic Arrays ................................ 246
10.2.3 Constraint Blocks ...................................... 247
10.3 Constraint-Specific Operators .................................. 249
10.3.1 Set Membership Constraints ............................. 249
10.3.2 Distribution Constraints .................................. 250
10.3.3 Implication Constraints .................................. 250
10.3.4 If-Else Constraints ..................................... 251
10.3.5 Iterative Constraints .................................... 252
10.3.6 Global Constraints ..................................... 253
10.3.7 Variable Ordering Constraints ............................. 254
10.3.8 Function-Call Constraints ................................ 255
10.4 Constraint Guards ........................................... 255
10.5 Controlling Constrained Randomization .......................... 257
10.5.1 Controlling Constraints .................................. 257
10.5.2 Disabling Random Variables .............................. 258
10.5.3 Randomization Flow Methods ............................ 259
Functional verification has been a major focus of product development for more than a
decade now. This period has witnessed the introduction of new tools, methodologies, lan-
guages, planning approaches, and management philosophies, all sharply focused on address-
ing this very visible, and increasingly difficult, aspect of product development. Significant
progress has been made during this period, culminating, in recent years, in the emergence
and maturity of best-in-class tools and practices. These maturing technologies not only allow
the functional verification challenge to be addressed today, but also provide a foundation on
which much-needed future innovations will be based. This means that having a deep under-
standing of, and hands-on skills in applying, these maturing technologies is mandatory for all
engineers and technologists whose task is to address the current and future functional verifi-
cation challenges.
A hallmark of maturing technologies is the emergence of multi-vendor supported and
standardized verification languages and libraries. The SystemVerilog hardware design and
verification language (IEEE standard 1800), and the SystemVerilog-based Open Verification
Methodology (OVM) provide a powerful solution for addressing the functional verification
challenge. SystemVerilog is an extension of Verilog-2005 (IEEE Standard 1364-2005), and
enhances features of Verilog by introducing new data types, constrained randomization,
object-oriented programming, assertion constructs, and coverage constructs. OVM, in turn,
provides the methodology and the class library that enable the implementation of a verifica-
tion environment according to best-in-class verification practices.
This book is intended for a wide range of readers. It can be used to learn functional ver-
ification methodology, the SystemVerilog language, and the OVM class library and its meth-
odology. This book can also be used as a step-by-step guide for implementing a verification
environment. In addition, the source code for the full implementation of the XBar verifica-
tion environment can be used as a template for starting a new project. As such, this book can
be used by engineers starting to learn the SystemVerilog language concepts and syntax, as
well as advanced readers looking to achieve better verification quality in their next verifica-
tion project. This book can also be used as a reference for the SystemVerilog language and
the OVM class library. All examples provided in this book are fully compliant with System-
Verilog IEEE 1800 standard and should compile and run on any IEEE 1800-compliant simu-
lator.
Acknowledgements
The creation of this book would not have been possible without the generous support of
many individuals. I am especially grateful to David Tokic and Luis Morales for helping turn
this book from a nascent idea into a viable target, to Susan Peterson for getting this project
off the ground and for her infectious positive energy, and to Tom Anderson for his continued
technical and logistical guidance and support throughout the life of this effort. Special thanks
also go to Sarah Cooper Lundell and Adam Sherer for valuable planning and technical dis-
cussions, and to Ben Kauffman, the technical editor.
The examples included in this book were developed and verified using the Incisive
Functional Verification Platform® developed by Cadence Design Systems, and obtained
through Cadence's Verification Alliance program. I would like to thank Cadence Design
Systems and the Verification Alliance program for their generous support of this effort.
The technical content of this book has benefited greatly from feedback by great engi-
neers and technologists. Special thanks go to David Pena and Zeev Kirshenbaum for
in-depth discussions on many parts of this book. In addition, technical feedback and discus-
sions by individuals from a diverse set of companies have contributed significantly to
improving the technical content of this book. I am especially grateful to these individuals
whose names and affiliations are listed below.
Sasan Iman
Santa Clara, CA
Spring 2008
SystemVerilog
To learn more about the SystemVerilog language, and to keep up with the latest updates to
the SystemVerilog LRM, visit:
http://www.SystemVerilog.org/
http://www.EDA-stds.org/sv/
OVM
This book is based on OVM release 1.0.1. To download the latest version of the OVM class
library, to participate in OVM user forums, to learn about the latest related news and semi-
nars, and to contribute to the OVM community, visit:
http://www.OVMWorld.org/
Feedback
We welcome your feedback on this book. Please email your feedback to:
fvsvovm@simantis.com
Book Structure
This book provides a complete guide and reference for functional verification methodology,
learning the SystemVerilog language, and for building a verification environment using Sys-
temVerilog and the OVM class library. Given the range of material covered in this book, it is
expected that the focus of any one reader may be on one specific topic, or that any one reader
may prefer to bypass familiar content. To better support the range of readers who can benefit
from this book, its content is grouped into parts. These parts are ordered so that the prerequi-
site knowledge required for any topic is covered in the parts appearing before that topic. This
means that this book can be studied in a linear fashion, but the clear breakdown of topics into
these parts facilitates selective focus on any one topic. This book consists of the following
parts:
• Part 1: Verification Methodologies, Planning, and Architecture
This part focuses on the functional verification problem, its relation to the product
development flow, challenges that it raises, available tools, and metrics that are
used for evaluating the effectiveness of a functional verification approach (chapter
1). This part also provides a detailed description of verification planning for a cov-
erage-driven verification flow (chapter 2). The architectural view of a verification
environment is also described in this part of the book (chapter 3).
The discussion in this part of the book is implementation independent, and pro-
vides the background that is necessary before a verification solution can be imple-
mented.
• Part 2: All about SystemVerilog
This part provides a detailed introduction to the SystemVerilog language by
describing its programming features (chapter 4) and verification-related features (chapter
5). This part can be used to learn the syntax and semantics of the SystemVerilog
language, and also to learn its verification-related features. In addition to chapter 4,
randomization features are described in chapter 10, assertion features are described
in chapter 15, and coverage features are described in chapter 17. The content in
this part is organized through tables and examples so it can also be used as a desk
reference for the SystemVerilog language.
PART 1
Verification Methodologies,
Planning, and Architecture
CHAPTER 1 Verification Tools and
Methodologies
A functional verification project plays against the backdrop of a product development flow.
The following subsections describe the product design flow, the meaning of functional verifi-
cation and challenges that must be met to successfully and efficiently execute a verification
project.
[Figure 1.1: Overview of a product design flow, from design intent (descriptive text) through
a functional specification produced by product and system engineers, to a design implementation
(HDL: Verilog, VHDL), and on to synthesis and back-end tools. Feedback paths mark cases where
the product cannot be implemented or implementation targets cannot be met.]
Product development flows are as varied as the products they produce. Most flows,
however, can be described in terms of abstract phases corresponding to product idea devel-
opment and design and implementation stages. Figure 1.1 shows an overview of one such
design flow, from intent to final product. A product is usually scoped by a marketing team as
an opportunity to satisfy a demand in the marketplace. The initial description of a product
must be described with careful consideration of the overall abilities and limitations of the
underlying technologies. This consideration is essential for delivering the product with good
confidence, on target, and with the desired functionality. The initial product intent is turned
into a functional description through discussions between the marketing team and product
and system engineers. These discussions are geared towards solidifying the general features
of the product (e.g., user level features available to consumer, the number and types of
required interfaces) and identifying the architecture that is best suited for delivering the
desired functionality. An important part of this architectural exploration stage is to perform
analysis sufficient for confirming that the required features are practical and can be sup-
ported by the suggested architecture (e.g., that the proposed bandwidth of the internal bus of
a multi-interface device can support the expected traffic between all interfaces).
Transaction level models (TLM) are used at the architectural level of abstraction to
model the blocks identified in the early stages of architectural exploration and analysis. In
general, transaction level models allow designers to specify block behaviors at a high level
of abstraction where the focus is on the system level behavior of blocks and interaction
between blocks, and not on low-level implementation and cycle accurate behaviors.
Once the architectural model of a design is finalized, design engineers take this func-
tional specification and create an implementation in a target language (e.g., Verilog, System-
Verilog, VHDL). This functional specification is then translated into a final implementation
through a series of steps in which appropriate tools and techniques are used to create the
design implementation. Once the design implementation is produced, the final product is cre-
ated by using the synthesis and back-end tools to build the chips and boards that will com-
prise the final product.
Transaction level models play an important role in the design and verification process in
that they provide an early model of the design that can be used for architectural evaluation
and whose verification must be considered in the verification flow. An overview of transac-
tion level models and the design flow is described in the following subsections.
OSCI [1] has standardized an API that describes predefined transactions that can be used
for TLM based interactions. These standard transaction types can be modeled in SystemVer-
ilog and used for inter-module communication. The SystemVerilog implementation of this
API is described in chapter 9.
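As a brief preview of the style of communication involved (the full API is the subject of chapter 9), the following sketch shows two processes exchanging a transaction through a TLM FIFO. The packet class and all names in it are illustrative assumptions, not taken from the book's examples:

```systemverilog
// Sketch: transaction level communication through a TLM FIFO.
// The packet class and its fields are hypothetical.
import ovm_pkg::*;

class packet;
  int unsigned dest_addr;
  byte         data[];
endclass

module tlm_sketch;
  // tlm_fifo is provided by the OVM TLM library (see chapter 9)
  tlm_fifo #(packet) fifo = new("fifo", null);

  initial begin : producer
    packet p = new();
    p.dest_addr = 3;
    fifo.put(p);      // blocking put of a whole transaction
  end

  initial begin : consumer
    packet p;
    fifo.get(p);      // blocking get; no pin-level detail involved
  end
endmodule
```

Note that both sides deal only in transactions; the FIFO hides all timing and handshaking detail between the two processes.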
Figure 1.2 shows details of a typical flow used by designers to turn a functional specifi-
cation into a design implementation. This flow consists of the following steps:
• Architectural design
• Block design
1. The ideas and general guidelines of transaction level modeling have been standardized by the Open
SystemC Initiative (OSCI) (http://www.systemc.org), which, at the time of this writing, provides TLM 2.0
as a standard API for developing transaction level models using SystemC.
• Module design
• Chip/System design
The very first step is to create the design architecture. This step identifies the individual
modules and blocks in the system and describes the functionality for each block. Only after
this architecture is decided can the design of these individual blocks get started. The next
step is to design the individual blocks identified during the architectural design stage. Blocks
are implemented as RTL descriptions, and then grouped together to form design modules.
Modules may optionally contain blocks described at the transaction level that model
non-digital parts of the system (e.g., analog parts, RF blocks, etc.). Modules are combined to
form the full-chip design, which is then combined to form the implementation description of
the entire system.
[Figure: block design produces RTL descriptions, which are grouped into module designs and
then combined into the chip/system design.]
Figure 1.2 Design Implementation Flow
This breakdown of the design implementation flow hints at the way verification activity
dovetails with design activity. The interaction between design and verification flows is
described in more detail in the next section.
a netlist of gates. This step is performed by using synthesis and physical design tools which
are less prone to errors because of their high degree of automation and tool maturity.
The main source of functional errors in a design can be attributed to the following:
• Ambiguities in product intent
• Ambiguities in the functional specification
• Misunderstandings by designers even when the specification is clear
• Implementation errors by designers even when the understanding is correct
The primary goal of functional verification is to verify that the initial design implemen-
tation is functionally equivalent to product intent, or, alternatively, to prove the convergence
of product intent, functional specification, and design implementation.
Figure 1.3 shows a pictorial view of this concept. The process of functional verification
facilitates the convergence of product intent, functional specification, and design implemen-
tation, by identifying any differences between the three and giving the system and design
engineers the opportunity to eliminate this difference by appropriately modifying one, two,
or all three descriptions so the difference is eliminated. Verification closure is gained when
this convergence is proved with a high degree of confidence.
[Figure 1.3: Functional verification drives the convergence of product intent, functional
specification, and design implementation.]
model are then checked to be equivalent within the abstraction level (e.g., transaction accu-
rate, instruction accurate, cycle accurate, etc.).
Black-Box Verification
Gray-Box Verification
White-Box Verification
Even when an error is detected on the output, it is usually very difficult to trace the problem
to its root cause.
An added difficulty in using black-box verification is that it requires a reference model
implemented with enough accuracy to detect any and all internal bugs. Given that a model
with such strict accuracy requirements (e.g., cycle accurate) may not be available or perhaps
as difficult to build as the design itself, it is not always possible to rely completely on this
verification approach.
White-box and gray-box verification provide alternative approaches for addressing the
limitations of black-box verification. In white-box verification, no reference model is
needed, because correct design operation is verified by placing monitors and assertions on
internal and output signals of the DUV. Gray-box verification is a combination of white-box
and black-box verification approaches, where monitors and assertions on internal design sig-
nals are used along with a reference model. The use of monitors and assertions reduces the
accuracy requirements of the reference model and also reduces debugging effort when bugs
are found. The architectures used for white-box and gray-box verification approaches are
also shown in figure 1.4.
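As an illustration of such monitors and assertions, the following is a minimal SystemVerilog sketch of a white-box check attached to internal DUV signals. The module, signal, and property names are all hypothetical, not from the book's examples:

```systemverilog
// Hypothetical white-box check: an assertion on internal DUV signals.
// All names (fifo_checks, wr_en, full) are illustrative.
module fifo_checks(input logic clk, input logic wr_en, input logic full);
  // A write must never be accepted while the FIFO is full.
  property no_write_when_full;
    @(posedge clk) wr_en |-> !full;
  endproperty

  assert property (no_write_when_full)
    else $error("write accepted while FIFO is full");
endmodule

// The checker can be attached to an internal scope of the design without
// modifying the design source, using a bind directive, e.g.:
// bind fifo_core fifo_checks chk_i (.clk(clk), .wr_en(wr_en), .full(full));
```

Because the check fires directly at the internal signal where the bug occurs, failures are localized immediately, which is exactly how white-box and gray-box approaches reduce the debugging effort described above.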
Table 1.1 lists challenges inherent in completing a verification project, and also lists the
degree of effort required to address these challenges when each of the verification
approaches discussed in this section are used. As shown, gray-box verification provides the
best balance of effort required to address these different challenges. As such, gray-box verifi-
cation is the approach intuitively chosen most often by verification engineers to carry out
their tasks. It should be emphasized that there are many shades of gray in gray-box verifica-
tion, where these shades refer to the balance of effort dedicated to reference model develop-
ment and monitor/assertion development. The exact shade of gray-box verification will
ultimately depend on the specific requirements of a verification project and the verification
engineer's experience from previous projects.
Table 1.1: Verification Approaches and Effort Needed to Create Verification Implementation
[Table 1.1 compares the black-box, gray-box, and white-box verification approaches against
the verification challenges discussed in this section; the table body is not legible in this
copy.]
previous projects. An effective solution to meeting this increased demand for achieving veri-
fication closure must address the following verification challenges:
• Completeness
• Reusability
• Efficiency
• Productivity
• Code performance
The challenge in verification completeness is to maximize the part of design behavior
that is verified. The major challenge in improving verification completeness is in capturing
all of the scenarios that must be verified. This, however, is a manual, error-prone, and omis-
sion-prone process. Significant improvements in this area have been made by moving to cov-
erage-driven verification methodologies. Coverage-driven verification approaches require a
quantitative measure of completeness whose calculation requires strict planning, tracking,
and organization of the verification plan. This strict requirement on verification plans natu-
rally leads to exposing the relevant scenarios that may be missing. Fine-tuned verification
planning and management methods have been developed to help with the planning and track-
ing of verification plans.
The challenge in verification reusability is to increase portions of the verification envi-
ronment infrastructure that can be reused in the next generations of the same project or in a
completely different project sharing features similar to those in the current proj-
ect. A high degree of reuse can be achieved for standardized interfaces or functions. Blocks
can be reused in any project that makes use of the same standardized interface. Beyond stan-
dardized interfaces, identifying common functionality in the verification environment plan-
ning stage can lead to further reuse of verification infrastructure.
The challenge in verification efficiency is to minimize the amount of manual effort
required for completing a verification project. Clearly, manual efforts are error-prone, omis-
sion-prone, and time consuming. In contrast, automated systems can complete a significant
amount of work in a short time. Automated systems must, however, be built manually. As
such, improvements in efficiency must be made through careful analysis of the trade-off
between the extra effort required for building an automated system and the gains it affords.
Coverage-driven pseudo-random verification methodology is an example of a methodology
where making the effort to build an automated system for stimulus generation and automated
checking leads to significant improvements in verification efficiency, and hence productivity.
An important consideration in deciding the feasibility of building an automated system is
that such automation requires a consistent infrastructure on which it can be developed, and
also imposes a use model on how engineers interact with it. As such, deployment of an auto-
mated system requires consistency both in infrastructure and engineering approach, both of
which take time and targeted effort to achieve.
The challenge in verification productivity is to maximize work produced manually by
verification engineers in a given amount of time (e.g., number of failed scenarios debugged,
verification environment blocks implemented, etc.). Achieving higher productivity has
become a major challenge in functional verification. Significant improvements in the design
flow have afforded design engineers with much higher productivity. Improvements in verifi-
cation productivity have, however, lagged those on the design side, making functional verifi-
cation the bottleneck in completing a design. Effective functional verification requires that
this productivity gap between design and verification be closed. Productivity gains in verifi-
cation can be obtained by moving to higher levels of abstraction and leveraging reuse con-
cepts.
The challenge in verification code performance is to maximize the efficiency of verifi-
cation programs. This consideration is in contrast with verification productivity, which deals
with how efficiently verification engineers build the verification environment and verify the
verification plan. The time spent on a verification project is usually dominated by the manual
work performed by verification engineers. As such, verification performance has usually
been a secondary consideration in designing and building verification environments. An
important area in which verification performance becomes a primary consideration is in run-
ning regression suites where the turnaround times are dominated by how efficiently verifica-
tion programs operate. Expert knowledge of tools and languages used for implementing the
environment is a mandatory requirement for improving verification performance.
The methodologies and topics discussed in this book aim to address these verification
challenges through the following topics:
• Best-in-class verification methodologies
• Improved verification planning processes
• Best use of SystemVerilog features and constructs
• Reuse considerations
1.2.1 Granularity
Verification granularity is a qualitative measure of the amount of detail that must be speci-
fied in describing and implementing a verification scenario. Granularity directly affects veri-
fication productivity by considering the effort required by verification engineers to deal with
verification objects.
Verification granularity is different from design granularity. Design granularity refers to
the abstraction level used at different stages of the design flow. For example, a transaction
level model is described in terms of transactions and, therefore, verification scenarios
described for such a model are also described at the transaction level. Verification granular-
ity refers to using a higher level granularity than design granularity in describing verification
scenarios. For example, for a USB port design described at the register transfer level (i.e., in
terms of registers, combinational operators, and wires and buses), a verification scenario can
be described in terms of sending a bulk transfer instead of specifying the individual signals
that must be assigned in doing such a transfer. Describing scenarios at a higher level of
abstraction than the design abstraction requires an automated method of translating a higher
level statement (e.g., sending a bulk transfer) into signal activity at the USB port. In a verifi-
cation environment, a driver (section 3.2.2) is used to facilitate this type of translation.
Verification granularity can be increased by:
• Describing scenarios at a higher level of abstraction
• Implementing scenarios using higher level language constructs
By describing scenarios at a higher level of abstraction, a verification engineer is able to
focus on verification scenarios and not on low-level details of how that scenario is carried
out.
A verification engineer can produce and debug only a limited amount of verification
code in a day. Allowing verification engineers to implement a feature using higher level lan-
guage constructs leads directly to higher productivity in building the verification environ-
ment. SystemVerilog provides a number of language constructs (e.g., randomization,
coverage collection, property specification) aimed directly at allowing verification engineers
to describe higher level intent with fewer lines of code.
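As a sketch of this idea, a USB bulk transfer scenario of the kind mentioned above might be captured at the transaction level in a few lines of SystemVerilog. The class, field names, and constraint values here are illustrative assumptions, not the book's XBar example:

```systemverilog
// Sketch: stimulus described at transaction granularity rather than
// signal granularity. All names and constraint values are hypothetical.
class usb_bulk_transfer;
  rand bit [6:0] addr;       // device address
  rand bit [3:0] endp;       // endpoint number
  rand byte      payload[];  // data payload

  // Higher-level intent in a few lines: only legal transfers are generated.
  constraint legal_c {
    payload.size() inside {[1:512]};
    addr != 0;               // address 0 is reserved for enumeration
  }
endclass

// A scenario is now "send a bulk transfer", not individual signal toggles:
//   usb_bulk_transfer tr = new();
//   void'(tr.randomize());
//   driver.send(tr);  // the driver translates the transaction to pin activity
```

The driver mentioned in the usage comment plays exactly the translation role described in section 3.2.2: it turns the high-level statement into signal activity at the USB port.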
good strategy to follow. Note that such a trade-off may not exist for a very small verification
project where the manual effort for building an automated system is greater than the effort
required for completing the verification using directed-tests. In general, a trade-off analysis
should be done to decide how much and what parts of the verification environment should be
automated. Given the complexity of today's projects, more automation generally leads to
less overall manual effort and, therefore, more productivity.
1.2.3 Effectiveness
Executing a verification plan usually consists of multiple simulation runs, each consisting of
different verification scenarios. In general, not every simulation cycle contributes to verify-
ing a new scenario and not all simulation runs execute unique scenarios. Verification effec-
tiveness is a measure of how much of the simulation time contributes directly to covering
more scenarios. All simulation runs that only verify previously verified scenarios should be
removed from the simulation regression.
1.2.4 Completeness
Verification completeness is a measure of how much of the relevant design functionality is
verified. Verification plan completeness refers to how much of the relevant functionality of
the design is included in the verification plan. Ideally, a verification plan should be complete
in that it should include all relevant scenarios. Verification plans are, however, rarely com-
plete, since it is not possible to enumerate all corner cases of a complex design. Verification
plans may also be incomplete because of poor verification management processes or lack of
time.
checked using a formal verification tool. These blocks are then combined to form design
modules. At this point, the module testbench developed by the verification engineers is used
by the design engineers to verify design modules. Modules are then combined by design
engineers to create the complete system. At this time, the system testbench created by the
verification engineers is used to verify the system level design. The system is modeled by the
system engineers at the architectural levels and using transaction level models. Architecture
verification continues by system engineers as work progresses on module and system test-
bench development where these testbenches are used by system engineers to further verify
architectural assumptions and performance.
[Figure: a verification flow diagram showing verification planning and an interface
verification component (SystemVerilog).]
Technologies available for performing functional verification fall into three categories:
• Formal verification
• Simulation-based verification
• Acceleration/Emulation-based verification
The following sections provide an overview of these approaches.
2. A canonical model of a Boolean function is a representation of that function that is unique under a
given ordering of its variables. A Binary Decision Diagram is one such canonical representation for Bool-
ean functions and is used extensively in tools for formal verification of digital systems.
3. A temporal logic represents a system of rules and symbols for representing and reasoning about relationships between logical variables in terms of time. For example, a temporal logic expression can state the expected relationship between two variables A and B in different clock cycles.
• Properties indirectly define coverage collection metrics that will be needed to check
verification progress.
Multiple languages have been developed to facilitate effective and powerful property
descriptions. Property Specification Language (PSL) is one such standard language. System-
Verilog also provides its own syntax for defining properties in the form of assertions.
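For illustration, a property such as "every request is granted within one to three clock cycles" might be written in SystemVerilog assertion syntax as follows; the signals clk, req, and gnt and the timing window are invented, not taken from any design discussed here:

```systemverilog
// Hypothetical handshake property: req must be followed by gnt
// within 1 to 3 clock cycles.
module bus_props(input logic clk, req, gnt);
  property p_req_gnt;
    @(posedge clk) req |-> ##[1:3] gnt;
  endproperty

  // The same property serves both as a runtime check and, through the
  // cover directive, as a coverage metric for verification progress.
  a_req_gnt: assert property (p_req_gnt)
    else $error("req not granted within 3 cycles");
  c_req_gnt: cover property (p_req_gnt);
endmodule
```

The cover directive is one way the indirect coverage metrics mentioned above are realized: it counts how often the property was actually exercised, not just whether it ever failed.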
Figure 1.7 shows a representation of this process. The total state space of the design is
shown in the figure. A simulation run starts at an initial state where the next state is identified
by the current state and input values to the design. A simulation trace corresponds to the set
of design values observed as a given path in the state space is traversed. A traversed path in
the state space is identified by the verification scenario carried out by the simulator.
SystemVerilog provides a comprehensive set of constructs for modeling both the design
and the verification environment. As such, both the design and the verification environment
can be implemented completely using SystemVerilog. It is, however, important to note that
in general, the distinction between a testbench and a design is a logical distinction, and the
simulator is oblivious to the boundary between the code implementing the design and the
code implementing the testbench. The equal treatment of design and testbench programs by
the simulator can lead to race conditions between the design and the testbench. As such, Sys-
temVerilog provides explicit means of separating the testbench from the design by introduc-
ing a program block (section 5.3).
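As a minimal sketch of this separation (the counter design and the tb program are hypothetical), testbench code placed in a program block is scheduled in the Reactive region, after design code in modules has settled in the current time step, which avoids the read/write races described above:

```systemverilog
// A hypothetical design block.
module counter(input logic clk, rst, output logic [3:0] count);
  always_ff @(posedge clk)
    if (rst) count <= '0;
    else     count <= count + 4'd1;
endmodule

// Testbench code in a program block executes in the Reactive region,
// after module (design) code has settled, so the read of count below
// cannot race with the design's nonblocking write.
program automatic tb(input logic clk, rst, input logic [3:0] count);
  initial begin
    repeat (3) @(posedge clk);
    $display("count sampled without a race: %0d", count);
  end
endprogram
```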
are ideal for early stages of the design where small blocks are being created and no verification environment is yet available. Assertions defined at this stage are usually related to design decisions.
Figure 1.8 shows stages of the design process and the tools that are used at each stage to
verify assertions. During block level design, assertions are specified by designers to identify
properties that must be maintained because of design implementation decisions and assump-
tions made about the boundary conditions of blocks. At this stage of design, there is usually
very little stimulus generation capability available, and blocks are small. As such, this stage
is ideally suited for using formal verification methods. In the next phase, blocks are grouped
to create modules. Assertions defined by block designers are carried along with the block to
this stage and aid in making sure that expected boundary conditions for each block are satis-
fied when blocks are connected, and that properties related to design decisions are main-
tained. Any failure of block level assertions aids in debugging by quickly pointing to the
source of the unsatisfied property. Additional assertions are added to the design when mod-
ules are created. These assertions relate to the end-to-end properties of the module. Modules,
along with their assertions, are then combined to form a system. At this level, new
end-to-end assertions are added and assertions inherited from lower design levels (i.e., block
and module) are leveraged to identify and quickly locate the source of any problems.
Figure 1.11 Directed-Test-Based Verification vs. Random Generation
An added advantage of a randomized environment is that scenarios that were not speci-
fied in the verification plan may also be generated because of the combination of random
activity across the verification environment. As such, a random generation environment not
only improves verification productivity, but also helps with verification completeness, since
given extra time or computing cycles, further running of a randomized environment leads to
more unspecified scenarios being generated.
A randomized environment is not completely random, since generated data and parame-
ters must remain within the legal set of values. Also, random scenario generation requires
that each simulation run eventually be guided towards scenarios that are not yet generated.
As such, constraint definition is an important part of random generation utilities.
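A sketch of such constraints follows; the bus_cmd class, its fields, and the ranges are invented for illustration:

```systemverilog
// Keep random generation legal, and bias it toward interesting cases.
class bus_cmd;
  rand bit [31:0] addr;
  rand bit [3:0]  burst_len;
  rand bit        is_write;

  // Generated values must remain within the legal set:
  constraint c_legal {
    addr inside {[32'h0000_0000 : 32'h0000_FFFF]};  // mapped region only
    burst_len inside {1, 2, 4, 8};                  // legal burst sizes
  }
  // Distribution weights steer generation without forbidding anything:
  constraint c_bias { is_write dist {1 := 7, 0 := 3}; }
endclass
```

In-line constraints (randomize() with) can then further narrow a particular run toward scenarios that have not yet been generated.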
Building a randomly generated environment is not a trivial task and requires significant
changes over traditional directed-test verification techniques. Random generation of verification
scenarios requires each randomly generated scenario to be automatically verified. As
such, automatic checking must be an integral part of any randomly generated environment.
This means that a randomly generated environment requires a reference model so that the
result of each randomly generated scenario can be predicted at simulation runtime. In addi-
tion, a randomized verification environment should be able to handle all types of behaviors
that may be generated by the random generation of scenarios. These behaviors include spe-
cial handling of error conditions in addition to normal operation of the design. Random gen-
eration of scenarios also means that verification progress must be measured automatically
through coverage collection mechanisms. This is because the exact scenario being generated
is not known before a simulation run is started. And different simulation runs of the same test
may create different scenarios and data combinations. In summary, a verification environ-
ment that contains random stimulus generation must also be a self checking environment and
must also include coverage collection.
Random stimulus generation is ideally suited to verifying a finite state machine. As an
example, random generation, automatic checking, coverage collection, and coverage analy-
sis for an FSM can be summarized as follows:
• Random generation:
• Initially, put the state machine in a random legal state.
• Generate random inputs.
• Automatic checking
• Check that the next state produced by simulation is the same as that specified.
• Check that the output produced by simulation is the same as that specified.
• Coverage collection
• Collect all states reached.
• Collect state transitions.
• Collect inputs applied in each state.
• Analysis
• Check that all valid states have been reached.
• Check that all state transitions have been made.
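The flow above can be sketched as follows; the three-state machine, its next-state function, and all names are hypothetical stand-ins for a real design and its specification:

```systemverilog
// Randomized FSM verification sketch: random legal state and input,
// a reference next-state function, and coverage of states/transitions.
class fsm_checker;
  typedef enum {IDLE, RUN, DONE} state_e;
  rand state_e state;   // random legal starting state
  rand bit     in;      // random input

  // Reference next-state function, written from the specification:
  function state_e next_state(state_e s, bit i);
    case (s)
      IDLE:    return i ? RUN  : IDLE;
      RUN:     return i ? DONE : RUN;
      default: return IDLE;            // DONE always returns to IDLE
    endcase
  endfunction

  covergroup cg;
    coverpoint state;                  // all states reached
    coverpoint in;                     // inputs generated
    cross state, in;                   // inputs applied in each state
  endgroup

  function new(); cg = new(); endfunction

  // After forcing the DUV into `state` and driving `in` for one cycle,
  // the testbench compares the DUV's observed next state and output
  // against next_state(state, in), then calls cg.sample().
endclass
```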
It is tempting to try to build a single verification environment that randomly generates
all possible verification scenarios. For a number of reasons, however, this is not a practical
goal for real designs. These reasons include:
• A design usually has multiple modes and use models that require different verifica-
tion environment topology and configuration.
• Some corner cases have an extremely low probability of occurring under normal
design operations, so a special verification environment is required for generating
such scenarios.
• Running a single environment and a single test for a long time leads to many dupli-
cated scenarios.
Even the best designed verification environment architecture and generator is limited in
the types of scenarios that it can generate because the effort required for adding the missing
scenarios to the same environment is prohibitively expensive. Additionally, given the num-
ber of parameters and states in a large design, it is practically impossible to reach all corner
cases using a single randomized test and a single environment. In such cases, it is best to cre-
ate a specially constrained version of the environment and/or the randomized testcase
focused on the missing scenarios so that these scenarios occur with high likelihood.
Because of these limitations in covering all scenarios using a single test, a randomized
verification environment will usually have a number of randomized testcases that cover a
large part of the verification plan. The next set of randomized testcases will cover a smaller
set of scenarios, and ultimately, some testcases will be direct testcases that focus on only one
or two specific scenarios.
Coverage results for each simulation run can also be used to rank testcases for the pur-
pose of regression suite creation or ordering. Testcases with high contribution to overall
progress take priority in the regression environment.
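This ranking can be modeled as a greedy selection: repeatedly pick the run that covers the most not-yet-covered scenarios. A small self-contained sketch, in which the run names and coverage sets are invented:

```systemverilog
// Greedy regression ranking: order runs by incremental coverage gain.
module rank_demo;
  typedef bit set_t[int];             // associative array used as a set

  initial begin
    set_t  cov[string];               // scenario IDs hit by each run
    set_t  done;                      // IDs covered by runs picked so far
    string picks[$];                  // ranked order

    cov["t1"][1] = 1; cov["t1"][2] = 1; cov["t1"][3] = 1;
    cov["t2"][3] = 1; cov["t2"][4] = 1;
    cov["t3"][2] = 1;

    repeat (cov.num()) begin
      string best;
      int    best_gain = 0;
      set_t  s;
      foreach (cov[t]) begin          // find the run with the most new IDs
        int gain = 0;
        s = cov[t];
        foreach (s[id]) if (!done.exists(id)) gain++;
        if (gain > best_gain) begin
          best      = t;
          best_gain = gain;
        end
      end
      if (best_gain == 0) break;      // remaining runs add nothing new
      picks.push_back(best);
      s = cov[best];
      foreach (s[id]) done[id] = 1;   // mark its scenarios as covered
      cov.delete(best);
    end
    $display("ranked order: %p", picks);  // t1 first (3 new), then t2
  end
endmodule
```

Runs that add no new coverage, like t3 here, drop out of the ranked suite entirely.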
stage is completed when the remaining scenarios must be considered individually and
through highly customized environments or testcases.
In stage five, the remaining verification scenarios are generated through directed
testcases. As mentioned previously, the goal should be to minimize the number of scenarios
requiring a customized test.
(Plot: coverage percentage over time, rising through the stages: bringup, building the random environment, running random testcases, and directed tests for corner cases.)
Figure 1.12 Coverage-Driven Verification Methodology Stages
The terms testplan and verification plan have historically been used interchangeably to refer
to a list of scenarios that must be verified in order to complete a verification project. But
given the increasing complexity of designs, and the increasing number of abstractions,
actors, and technologies involved, the term verification plan is increasingly used to describe
the planning that goes into completing a verification project.
This chapter describes the motivation and the need for creating holistic verification
plans that act as the centerpiece and driving point of a verification project. This chapter also
describes the flow for creating such a verification plan. This flow takes into consideration
factors such as who must be involved in the creation of this plan, what content should be
included, what format should be used, how this content is used to guide the design of the ver-
ification and coverage plan, and finally, how it should be used to iteratively improve the ver-
ification plan and arrive at verification closure.
The verification planning methodology described in this chapter promotes a more elab-
orate up-front planning process that requires involvement by all decision makers in the
design and verification flow, and one whose result is intended to drive and manage verifica-
tion activity across the whole project life cycle. In this context, verification project manage-
ment refers to guiding a verification project through all of its distinct phases, including
planning, implementation of verification infrastructure, measuring verification progress in a
quantitative manner, and directions for how to react to these measured metrics. Additionally,
verification management includes guidance on resource allocation, anticipating project slips,
and reporting progress in reference to given milestones.
In addition to acting as a project management baseline, the comprehensive nature of this
upfront planning process leads to further benefits in improving other aspects of product
development flow. These benefits include:
• Alignment of product functionality interpretations by the various actors
• Development of a common description of the desired functionality
• Capturing and/or definition of project milestones
• Definition of the engineering view of the plan, assigning resources to features
• Common understanding of project tasks, goals, and complexity by all actors
• Refinement of existing specifications (concept and design) due to identification of
holes, ambiguities, or misinterpretations
• Definition of verification approaches to tackle all, or a subset, of product features
Effective verification planning requires careful consideration of all factors that impact
this planning process or are affected by it. These factors include:
• Work of multiple specialists has to be tracked and merged
• Thousands of tests have to be managed and processed
• Multiple verification technologies have to be managed in the flow
• Planning changes, being costly in both time and resources, have to be minimized
• Multiple teams at multiple sites have to be coordinated
• Status must be collected and reported from across the design flow
• The inherent inefficiencies and unpredictability in project resources (tools and engi-
neers) must be managed
A verification plan is the natural place to centralize verification related procedures and
management related activities. In this sense, the verification plan should be developed with
the end in mind and as a document that is able to guide the verification project throughout its
lifetime. Verification planning challenges, goals, content, and life cycle are discussed in the
following subsections.
Use-cases and system level scenarios refer to how software programmers and system
users view the design. Under ideal conditions where 100% of device features are verified,
this type of use-case analysis and verification is not necessary, but knowing that 100% of
device features cannot even be enumerated, let alone verified, it becomes necessary to verify
these larger scale use-cases.
Milestones should also be included in the verification plan so that a timeline is given for
verifying different features. This description of milestones will be needed for identifying
project slippages and deciding how resources can be shifted to remedy the situation.
(Figure: the verification plan life cycle: Build Plan, Execute, Measure, React, Refine Plan.)
updates, and schedules, obtaining this strong footing is not possible without the creation of a
solid verification plan.
Figure 2.2 shows the flow for creating a verification plan and the coverage plan derived
from that verification plan. The following steps must be completed when building a verification
plan:
• Identify all actors concerned with project execution
• Prepare for the planning sessions and planning document
• Brainstorm the product functionality
• Structure the verification plan, keeping in mind the necessary views and reuse
• Capture features and attributes
• Formulate the verification environments and coverage implementations
These steps are described in the following subsections.
• Verification engineers
• Design engineers
Project managers will ultimately use the verification plan for tracking and resource allo-
cation. As such, their involvement in specifying project deadlines and milestones is needed.
Architects must confirm that product intent is indeed realized in the functional specification
and design implementation (e.g., latency, bandwidth targets are met, FIFO size, arbitration
algorithms implemented as specified). Software and firmware engineers must comment on
their view of the hardware and the support they need to verify the software even before the
final hardware is ready. Verification engineers are ultimately responsible for the quality of
sign-off and as such, are owners of the verification plan. Design engineers must comment on
implementation specific verification requirements. Table 2.1 shows a summary of verifica-
tion planning participants, input each is expected to provide, and benefits each will gain.
Even though the completed verification plan will require feedback from all project
actors, initial meetings should be limited to those who can define the structure of the verifi-
cation plan and top level features that must be verified. This initial limit is necessary, since
reaching consensus becomes more difficult as the number of participants grows. As such, it
is better to create the first version of the verification plan with only the most relevant stake-
holders present, and then follow up in other meetings where everyone can comment and
extend the content of the verification plan. One example would be to first make a verification
plan that contains only the product intent, and then complement this plan by bringing in
hardware and software engineers, and project managers.
Verification plan format, structure, and naming conventions will ultimately be decided by the actual contents of the verification plan. It is, however, useful for the verification plan owner to settle on a general format and prepare templates that can be used as the starting point.
from the functional specification of the design. Design requirements include features that
relate to how the design is implemented.
1. Introduction
2. Functional Requirements
2.1 Functional Interfaces
2.2 Core Features
3. Design Requirements
3.1 Design Interfaces
3.2 Design Cores
4. Verification Views
5. Verification Environment Design
Verification environment design relates to what stimulus and checking is required for
exercising and verifying the features listed in this verification plan.
(Figure 2.4: a verification plan view organized by features f1 to f4 and their sub-features, correlated with the sub-blocks of modules A and B.)
As such, it is important to decide up front what views of the verification plan will be
needed by different actors. This information will be useful in that it highlights any missing
information that should be included in the verification plan.
Figure 2.4 shows a verification plan view that correlates each feature with modules in
the design. The organization in this figure shows the verification plan organized by features,
where each feature contains sub-features. It is possible to organize the verification plan so
that features are listed by the module that implements their functionality. A feature-based
view is recommended since it allows features to be verified without the main focus being on
what module contains the logic for these features.
The running of the simulation is usually automated by using scripts that perform all of
the necessary tasks required for starting the simulation. Given the complexity of verification
environments consisting of different tools and languages, there are usually a number of dif-
ferent such scripts, each with different configuration switches. A verification plan should
clearly describe methods available for building and running the verification environment.
Consequently, each scenario should clearly indicate the build and run mechanism used for
that scenario.
The inclusion of build and run procedures in the verification plan provides two benefits.
First, it helps guide new engineers joining the project on how to get started with the verifica-
tion activity. And second, it helps as a central location for documenting the build and run
infrastructure for the verification environment.
Attributes defined by design pin values are those controlled directly by the values of
design pins. Examples include the command value for a bus interaction, the reset pin
value, and interrupt pins being active or inactive. Attributes that are defined by data struc-
tures passing through design pins include design attributes that are extracted from such data
structures. For example, the destination address of an Ethernet packet arriving on a serial pin
of an Ethernet module and the type of memory access requested in a PCI-express transfer.
Drivers used for verification serve two purposes: first, they must support interactions
defined by the given interface, and second, they must be able to inject errors that the design
is expected to detect or recover from. As such, the scenarios described in the verification
plan should clearly indicate what error conditions should be injected at the design interface;
the driver must then support injecting these conditions. Additionally, not all
modes of a design interface may be needed in a project. As such, the implementation of a
driver can be simplified by excluding such unused modes of operation.
Scenario generators are developed to directly support the generation of scenarios listed
in the verification plan. As such, the examination of the verification plan should immediately
lead to the types of sequences that must be generated in the verification environment.
Progress of a verification project is defined as the portion of features that are already covered
compared to features that must be covered for the current target milestone. Metrics to collect
for this purpose are:
• Code coverage
• Assertion coverage (for formal tools)
• Assertion coverage (for simulation/acceleration/emulation tools)
• Functional coverage
Code coverage is collected by the simulation tool. Assertion coverage is collected from
the results of formal verification used most often by designers during module and block
design. Assertion coverage is also collected during simulation and acceleration/emula-
tion-based verification, and is automatically collected once the assertions are built into the
environment. Functional coverage is collected during simulation-based verification and the
necessary infrastructure must be developed in the verification environment to collect this
information.
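As a sketch of such infrastructure, a covergroup embedded in the environment and sampled by a monitor might look like the following; the fields and bins are invented for an Ethernet-style example:

```systemverilog
// Functional coverage infrastructure sketch: a monitor calls sample()
// for every observed packet, and the covergroup records the bins hit.
class eth_rx_cov;
  bit [47:0] dest_addr;
  int        payload_len;

  covergroup cg;
    cp_len : coverpoint payload_len {
      bins small  = {[46:63]};
      bins normal = {[64:1499]};
      bins jumbo  = {[1500:9000]};
    }
    cp_bcast : coverpoint (dest_addr == '1);  // broadcast vs. unicast
  endgroup

  function new(); cg = new(); endfunction

  // Called by the monitor for every packet observed at the DUV port:
  function void sample(bit [47:0] da, int len);
    dest_addr   = da;
    payload_len = len;
    cg.sample();
  endfunction
endclass
```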
Milestone charting is used to measure milestone definitions (features that must be com-
pleted to reach specific milestones) against actual timelines. This analysis is used to adjust
resources and anticipate slippages, based on the amount of progress being made.
Simulation failure analysis is used to debug, identify, and fix problems that have led to a
failed scenario. The failure may be in either the verification environment or the DUV. As
such, this analysis leads both to the maturity of the verification environment and to design
stability and correctness.
Coverage hole analysis provides information about what scenarios have not yet been
exercised. This information is used in milestone charting to understand how the verification
environment must be extended in order to fill the coverage holes associated with the
given milestones.
Regression ranking is used to identify verification runs in the regression suite that con-
tribute the most to the coverage metrics. This ranking allows a few regression runs to be
selected that lead to most confidence about the design after any modifications. It also allows
designers and verification engineers to choose simulation runs that best target features about
which they are concerned in their current work.
CHAPTER 3 Verification Environment Architecture
The architecture of a verification environment should allow all scenarios in the verification
plan to be verified according to the guidelines of the target verification methodology. In gen-
eral, this target can be achieved through different verification environment architectures.
However, experience shows that as these seemingly different architectures mature into their
final form, common structures and features start to take shape. Advance knowledge of these
best-in-class structures and features is essential for building a verification environment that
can handle today's complex designs.
This chapter describes an overview of a verification environment architecture that facil-
itates the application of coverage-driven and assertion-based verification methodologies. The
discussion in this chapter is focused on the architectural blocks of the verification environ-
ment, and the general use model and features that should be supported by each block. This
discussion is independent of any implementation tool or language and describes each block
at an abstract level.
A verification environment connects to a DUV through that DUV's boundary signals. These
boundary signals can be grouped into interfaces, with each port representing interrelated sig-
nals that collectively describe an interface protocol supported by the DUV. This conceptual
grouping of signals into interfaces can be applied to any size design, from a simple block to
complex systems, with ports representing complex protocols such as a parallel bus (e.g.,
PCI) or a serial interface (e.g., PCI-Express), or even trivial behaviors such as a single wire
used for resetting the device. The discussion in this chapter views a DUV as a block with a
number of abstract interfaces.
This interface-based view of a DUV suggests a layered architecture for its verification
environment. In this architecture, shown in figure 3.1, the lowest layer components interact
directly with DUV interfaces, while each higher layer component deals with increasingly
higher levels of verification abstraction. These higher levels of abstraction correspond to
more complex verification scenarios composed of more complex atomic objects. This lay-
ered architecture leads to an intuitive path for extending a verification environment, as the
design flow moves from block to system level. In this flow, additional verification compo-
nent layers are added to the verification environment as design integration moves to the next
step.
(Figure 3.1: the layered verification environment and DUV, spanning hardware interface layers up to the software stack.)
• Passive mode
An active verification component generates traffic for lower layer verification compo-
nents, or at DUV ports in the case of interface verification components. A passive verifica-
tion component monitors only verification environment traffic and as such, does not contain
any stimulus generation capability. Reuse of a verification component when moving to the
next design integration step depends on correct implementation of these modes. As such,
careful attention should be paid to features that should be supported in each mode. These fea-
tures are discussed in the following sections.
Interface, software, and system verification components are described in the following
sections.
the creation, connection to DUV port, and management of one or more agents. Each agent
contains the following blocks:
• Driver
• Sequencer
• Agent and bus monitors
The driver block interacts with DUV ports to create the abstraction provided by the
verification component. This block also provides additional verification related features which
are usually not defined as part of the protocol supported by the DUV port (e.g., injecting
errors). The agent monitor block provides the information that is required by protocol
checker blocks to verify that DUV port properties are satisfied during simulation runtime.
The information detected by a monitor block is also used by sequencer blocks when generat-
ing reactive sequences. The properties checked by this block are protocol related properties
that do not require a global view and can be checked by observing only local port signals. The
coverage collector records information about the types of activities that are observed at DUV
ports. The bus monitor tracks activities that are common across all agents placed inside an
interface verification component. As an example, in a bus type connection where all agents
connect to the same DUV bus, most of the monitoring is done in the bus monitor, while in a
point-to-point type connection where each agent is connected to a separate DUV interface,
most of the monitoring is done in the agent monitor. The sequencer block is used to generate
multi-transaction behaviors at the DUV port where each transaction is defined at the level of
abstraction provided by the driver.
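In OVM, this structure maps naturally onto an ovm_agent containing the driver, sequencer, and monitor. The sketch below is a minimal skeleton; all my_* class names are placeholders, and a real agent would add TLM analysis ports, configuration, and a bus monitor at the enclosing component level:

```systemverilog
`include "ovm_macros.svh"
import ovm_pkg::*;

// Placeholder transaction and component classes:
class my_item extends ovm_sequence_item;
  `ovm_object_utils(my_item)
  function new(string name = "my_item"); super.new(name); endfunction
endclass

class my_driver extends ovm_driver #(my_item);
  `ovm_component_utils(my_driver)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction
endclass

class my_sequencer extends ovm_sequencer #(my_item);
  `ovm_component_utils(my_sequencer)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction
endclass

class my_monitor extends ovm_monitor;
  `ovm_component_utils(my_monitor)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction
endclass

// The agent: monitor always present; driver/sequencer only when active.
class my_agent extends ovm_agent;
  ovm_active_passive_enum is_active = OVM_ACTIVE;  // configuration knob
  my_driver    driver;
  my_sequencer sequencer;
  my_monitor   monitor;

  `ovm_component_utils(my_agent)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction

  function void build();
    super.build();
    monitor = my_monitor::type_id::create("monitor", this);
    if (is_active == OVM_ACTIVE) begin   // a passive agent omits these
      driver    = my_driver::type_id::create("driver", this);
      sequencer = my_sequencer::type_id::create("sequencer", this);
    end
  endfunction

  function void connect();
    if (is_active == OVM_ACTIVE)
      driver.seq_item_port.connect(sequencer.seq_item_export);
  endfunction
endclass
```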
As an example, consider the interface verification component for an Ethernet port. The
driver block would provide atomic operations for transmitting an Ethernet packet to a given
destination address and receiving packets. The monitor block checks that each packet
received at DUV ports is a valid Ethernet packet. For scoreboarding purposes, the monitor
also collects packets received and transmitted at this port. The coverage collector records
information on payload sizes, packet counts, and the source and destination addresses of
packets sent and received. The sequencer block is used to send a group of Ethernet
packets that collectively correspond to a TCP/IP data transfer.
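A hedged sketch of this Ethernet example in OVM follows; the field names, sizes, and the fixed destination address are illustrative only:

```systemverilog
`include "ovm_macros.svh"
import ovm_pkg::*;

// Transaction item: one Ethernet packet with randomized fields.
class eth_packet extends ovm_sequence_item;
  rand bit [47:0] dest_addr;
  rand bit [47:0] src_addr;
  rand byte       payload[];
  constraint c_len { payload.size() inside {[46:1500]}; }
  `ovm_object_utils(eth_packet)
  function new(string name = "eth_packet"); super.new(name); endfunction
endclass

// Sequence: a group of packets forming one logical data transfer.
class eth_burst_seq extends ovm_sequence #(eth_packet);
  rand int unsigned num_pkts;
  constraint c_n { num_pkts inside {[2:10]}; }
  `ovm_object_utils(eth_burst_seq)
  function new(string name = "eth_burst_seq"); super.new(name); endfunction

  task body();
    // All packets go to one destination, modeling a fragmented
    // higher-layer payload sent as a burst:
    repeat (num_pkts)
      `ovm_do_with(req, { dest_addr == 48'hAA_BB_CC_DD_EE_FF; })
  endtask
endclass
```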
The logical view of a DUV port, as presented by an interface verification component to
higher layer verification components, consists of a set of software-oriented constructs. These
constructs allow higher layer verification components to access status collected by the moni-
tor, and to interact with the driver and sequencer in the interface verification component.
This logical view is formed by combining the views provided by the monitor, driver, and
sequencer blocks.
Details of the logical view and blocks in an interface verification component are dis-
cussed in the following subsections.
The sequencer in an interface verification component provides a set of default and com-
monly used sequences. These sequences are used when the interface verification component
is used to generate random traffic with only a local view of the DUV port it is attached to. An
interface verification component should provide a view of its predefined sequences, how
these sequences can be customized, and how new sequences can be added to the sequencer.
During module and system level verification, the sequencer in the interface verification com-
ponent interacts with the higher layer verification components to drive the intended traffic
into the DUV port. This interaction may be driven from higher layers or initiated by the local
sequencer. The logical view of a sequencer should provide a view of how this interaction can
take place and be customized for any specific application.
The driver in an interface verification environment is used mostly by the sequencer in
that interface verification component. In some cases, however, it may be necessary to drive
or configure this driver from outside the verification component. Additionally, when creating
new sequences for the sequencer in the interface verification component, detailed knowledge
of driver control and the configuration mechanism is required. An interface verification com-
ponent should, therefore, provide a detailed view of the mechanisms used for interacting
with its driver.
Monitors in higher layer verification components rely on information provided by the
monitors in lower layer verification components. As such, developing monitors in the higher
layer verification components requires detailed knowledge of the status information made
available by lower layer monitors. An interface verification component should, therefore,
provide a detailed view of conditions detected by its monitor during the simulation process
and how this information can be accessed. This view is also necessary when adding
sequences to the sequencer in an interface verification component.
3.2.2 Driver
A design communicates with the outside environment through its ports. The exact form of
this communication is decided by the protocol implemented by that port. A protocol usually
provides a set of high level atomic operations (e.g., send a packet, send a datagram) that in
reality are carried out through many cycles of complex handshaking procedures across multi-
ple design pins. An important observation is that the detailed steps of how each protocol
operation is carried out is not really relevant in any part of the verification environment
except that port. As such, it is possible to localize all low-level protocol related activities to
the block that interacts with that port. A driver implements the functionality that hides this
detail and provides an abstract view of the protocol at the logical port of the interface verifi-
cation component.
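As a sketch of this localization (the simple_bus_if interface and its handshake are invented), a driver can expose a single task that hides the cycle-level protocol:

```systemverilog
// Hypothetical physical port signals.
interface simple_bus_if(input logic clk);
  logic        req, ack;
  logic [31:0] addr, data;
endinterface

// The driver localizes all low-level protocol activity: one task call
// performs the multi-cycle request/acknowledge handshake.
class simple_bus_driver;
  virtual simple_bus_if vif;

  // Logical-port operation: "write one word" as an atomic call.
  task write_word(bit [31:0] addr, bit [31:0] data);
    @(posedge vif.clk);
    vif.req  <= 1'b1;            // cycle 1: drive request and payload
    vif.addr <= addr;
    vif.data <= data;
    do @(posedge vif.clk);       // cycles 2..n: wait for acknowledge
    while (!vif.ack);
    vif.req  <= 1'b0;            // release the bus
  endtask
endclass
```

Higher layers call write_word() and never touch req or ack directly.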
A driver used for verification purposes is more complex than a driver that just supports
the operations defined in a protocol. The reason is that a verification driver should not only
be able to produce and understand correct protocol behavior, but it should also be able to pro-
duce invalid traffic that the design is expected to detect, and also identify invalid behavior
that the design might produce at its port. The exact form and extent of invalid conditions that
should be detected and generated by a verification driver is defined by the verification plan.
Such extra requirements, however, can usually be derived by analyzing what protocol errors
can occur (e.g., CRC errors).
useful to control or know about (e.g., idle cycles before an Ethernet packet).
• Define driver operations for injecting errors and driver status fields for recording
observed error conditions.
• Choose a parameterized operation over multiple operations that essentially create the
defined by the contents of the packet that must be sent or the contents of the packet that was
received. A useful addition to this interface is to also specify the number of cycles to wait
before the packet is sent. In the receive direction, it is useful to include in the received packet
the number of idle cycles that were detected before the preamble of the received packet was
detected.
The logical port of a driver should provide the ability to inject errors at the physical port
and provide information about erroneous conditions caused by the DUV and detected by the
driver. As an example, the Ethernet protocol requires that packet preamble to be at least 56
bits of alternating zeros and ones. In defining the operation to send a packet, it is useful to
define the length of the preamble as a parameter to the logical port so that this size can be
randomized to values smaller, the same, or larger than 56 bits. In the receive direction, a col-
lected Ethernet packet should also include a field that stores the number of preamble bits that
were detected before the collected packet was started. The inclusion of such features depends
on a careful analysis of what can go wrong at the physical port that is not considered in the
logical description of the protocol.
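As an illustration only, a driver following this advice could expose the preamble length as a parameter of its send operation. The class, task, and helper names below are assumptions, not taken from the book's environment:

```systemverilog
// Sketch of an Ethernet driver whose send operation parameterizes the
// preamble length so tests can randomize it below, at, or above 56 bits.
// drive_bit/drive_sfd/drive_byte stand in for assumed low-level helpers.
class eth_driver;
  virtual task drive_bit(bit b);        endtask
  virtual task drive_sfd();             endtask
  virtual task drive_byte(bit [7:0] b); endtask

  task send_packet(bit [7:0] payload[], int unsigned preamble_bits = 56);
    // Alternating ones and zeros for the requested number of preamble bits.
    for (int i = 0; i < preamble_bits; i++)
      drive_bit(bit'(i % 2));
    drive_sfd();                        // start-of-frame delimiter
    foreach (payload[i])
      drive_byte(payload[i]);
  endtask
endclass
```

A test can then call `send_packet(data, 40)` or `send_packet(data, 64)` to exercise short and long preambles without touching the physical port directly.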
The features provided by the driver should be rich and comprehensive so that direct
interaction with DUV physical port is not required. This set of features should, however, be
well organized and grouped. As such, it is a good practice to define parameters at the logical
port in cases where groupings of different features as a parameter makes sense.
detected independently by the monitor block, which is always present in passive or active
modes. An example of a user interface is shown in figure 3.3.
[Figure 3.3: Logical view of a driver. User interface: write_blk_imm(addr, data, cycles_wait_before_send), write_blk_posted(addr, data, cycles_wait_before_send), read_block(addr). Configuration interface: set_burst_words(size), get_burst_words().]
3.2.3 Sequencer
A sequencer is an intelligent block that generates random transaction sequences. Such
sequences should be created dynamically during simulation runtime and according to ran-
domizable parameters so that scenarios can be generated randomly during simulation run-
time. The detailed architecture of the sequencer and its use in generating scenarios is
described in detail in chapter 14.
The sequencer in an interface verification component generates a sequence of transac-
tions that collectively form a scenario described for the protocol supported by the design
port. The important observation about this sequencer is that, conceptually, it is not aware of
conditions at other DUV ports. Given that a meaningful scenario is usually defined in terms
of traffic through multiple DUV ports, this means that the types of scenarios generated by the
sequencer inside an interface verification block are limited in extent to scenarios that require
only local visibility. The task of creating scenarios that require interaction with multiple
DUV ports is assigned to module/system verification components that interact with multiple
interface verification components. The types of traffic generated by the sequencer in an inter-
face verification block are, therefore, limited to the following:
• Atomic operations defined at the abstraction level of the DUV port protocol
• A composite operation at the DUV port
• Responses to requests received from the design that do not require global view
Activity at a design port is usually abstracted into atomic operations that reflect the log-
ical view of the protocol implemented by that interface. For example, for a memory inter-
face, atomic operations include memory read/write operations and at an Ethernet port,
atomic operations consist of Ethernet packet receive and transmit operations. Composite
operations at a port refer to a sequence of atomic operations that collectively form an atomic
operation at a higher level of abstraction. For example, individual memory write operations
can be combined to write an arbitrary size block of data to memory, or multiple Ethernet
packets can be sent in order to transfer a large block of data. The composite operations gen-
erated by the sequencer in an interface verification component usually correspond to how
that interface is used at the system level and, therefore, the composite operations correspond
to atomic operations at the system level.
[Figure: Module-level verification environment: Driver1 and Driver2 drive the DUV ports, Monitor1 and Monitor2 observe them, and both monitors feed a shared Scoreboard.]
Syntax checks verify that all signals at DUV port have valid values. Syntax checks are
easy to implement and define since they do not depend on timing information (e.g., burst size
should always be a valid burst size value). Identifying and implementing timing checks is
more involved, since such properties are defined across multiple signals and multiple time
cycles.
Similar to the sequencer, the monitor in an interface verification component tracks only
activities that can be extracted from the DUV port it monitors. As such, the monitor in an
interface verification component tracks protocol compliance at the port and collects atomic
operations at the abstraction level of the design port (e.g., memory write transaction for a
memory interface).
The coverage collector in an interface verification component mirrors closely the condi-
tions that the monitor tracks and the atomic transactions that the monitor extracts from the
DUV port. In general, coverage should be collected on any check that the monitor makes,
since any check implemented in the monitor must have been called for by the verification
plan. In addition, the coverage collector should track the types of
transactions collected by the monitor from the DUV port and all attributes of such transac-
tions. For example, for an Ethernet interface, the coverage collector should track Ethernet
packets collected from the interface, and for each packet it should track the source address,
destination address, packet type, data size, and whether the packet was corrupt or not.
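To make this concrete, a coverage collector for such an Ethernet monitor might be sketched as below. The class and field names are assumptions, and the address coverpoints are binned coarsely, since exhaustive bins over 48-bit addresses are impractical:

```systemverilog
class eth_packet;
  bit [47:0]   src_addr;
  bit [47:0]   dst_addr;
  bit [15:0]   pkt_type;
  int unsigned data_size;
  bit          is_corrupt;
endclass

class eth_coverage;
  eth_packet pkt;

  covergroup eth_pkt_cg;
    // Address spaces are split into a small number of range bins.
    cp_src  : coverpoint pkt.src_addr { bins ranges[8] = {[0:48'hFFFF_FFFF_FFFF]}; }
    cp_dst  : coverpoint pkt.dst_addr { bins ranges[8] = {[0:48'hFFFF_FFFF_FFFF]}; }
    cp_type : coverpoint pkt.pkt_type;
    cp_size : coverpoint pkt.data_size {
      bins small  = {[46:255]};
      bins medium = {[256:1023]};
      bins large  = {[1024:1518]};
    }
    cp_corrupt : coverpoint pkt.is_corrupt;
  endgroup

  function new();
    eth_pkt_cg = new();
  endfunction

  // Called by the monitor for every packet collected at the DUV port.
  function void sample_packet(eth_packet p);
    pkt = p;
    eth_pkt_cg.sample();
  endfunction
endclass
```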
Coverage should be collected for traffic initiated by both the DUV and the interface ver-
ification component. Traffic initiated by the DUV is used for identifying what design traffic
was checked by the monitor, and traffic initiated by the interface verification component is
used to identify what scenarios or sub-scenarios were generated at that interface.
Software verification components are necessary when verifying system level designs that
include the software stack. Figure 3.5 shows an example of such a system. In this view, a
software verification component provides an abstraction of the software interface to the
verification environment so that the module/system verification components can treat the
software port in the same manner as that of a physical port.
3.4 Module/System Verification Components
Module and system verification components have a
similar architecture in that they interact with verification components at higher and lower
layers to create the desired scenarios and track DUV behavior. These components differ only
in that they operate on different size DUVs. In this section, the term "system verification
component" is used generically to refer to all such verification components.
The focus of a system verification component is generally on end-to-end behavior of the
DUV and not on individual blocks composing that DUV. This approach is based on the
implicit assumption that in a layered verification environment, smaller blocks have already
been verified to some degree and the focus of verification is on the current DUV (e.g., when
verifying a module, blocks are assumed to be verified). This assumption holds throughout
the design life cycle, as modules are created by combining blocks, and later when systems
are created by combining modules. This approach allows verification environment complex-
ity to be managed more efficiently where each new layer can deal with only the new features
added in the latest design integration step. In spite of this end-to-end focus of a system verifi-
cation component, this approach also provides further verification of features in smaller
blocks, since system-wide scenarios will ultimately result in activation of individual blocks
and modules.
[Figure: An interface verification component attached to each port of the DUV.]
A system verification component consists of the following blocks:
• Sequencer
• Verification environment (VE) monitor and coverage collector
• DUV monitor and coverage collector
The VE monitor interacts with monitors in lower layer verification components (e.g.,
system monitors track monitors in interface and module verification components) to provide
information about the current state of DUV as observed by lower layer verification compo-
nents. The DUV monitor tracks DUV internal signals since these signals cannot be tracked
through monitors attached to DUV ports. The combination of VE monitor and DUV monitor
allows a gray-box verification approach (see section 1.1.2.1) to be leveraged in this architec-
ture. Information provided by these monitors is further used by the sequencer to create
end-to-end scenarios.
These blocks are discussed in the following subsections.
3.4.1 Sequencer
Generally, the sequencer in a system verification component is responsible for the following
operations:
• Initializing DUV and verification environment
• Configuring DUV and verification environment
• DUV end-to-end scenario generation
These activities are discussed in the following subsections.
3.4.1.1 Initialization
Initialization refers to setting DUV and verification environment conditions that must be set
before simulation is started. Initialization settings include the following:
• DUV pins that must be set before a design can be activated
• DUV register settings that cannot be changed during device operation
• Memory pre-loading
• Verification component settings that must be made before simulation is started
The motivation for placing initialization steps in the sequencer is that such initialization
can be changed as part of scenarios and even randomized if necessary. If design and environ-
ment initialization is not done in the sequencer, then it becomes a part of the environment.
This means that running a scenario requiring a different set of initialization settings will
require not only a change in the scenario but the environment as well.
As mentioned previously, a sequencer generates scenarios through dynamic generation
of sequences during simulation runtime. This means that a scenario may be created only after
design initialization is completed at time zero and, therefore, not all initialization actions can
be performed within the sequencer. The goal, however, should be to do as much of the
initialization in the sequencer as possible, to avoid having to change the environment for
scenarios requiring different initialization settings.
3.4.1.2 Configuration
Verification environment configuration refers to settings in DUV and verification environ-
ment that can be changed during simulation runtime or as specified by design specification.
As such, resetting a design can be considered a configuration setting, since a device can be
reset at any time during the simulation runtime. In fact, resetting a design in mid-operation is
a required scenario that must be included in any verification plan.
The important consideration in building the configuration infrastructure is that often,
the state of the verification environment at the time of such reconfiguration becomes invalid
since normal device operation is interrupted. For example, upon reset of a design, all score-
board contents, monitor observed condition flags, etc. will have to be initialized to match the
design state right after reset.
Configuration may be needed as part of verification scenario execution (e.g., resetting
the device, or changing device register settings), or used to simulate multiple scenarios
requiring different configurations during the same simulation run. In either case, special care
should be taken to update the state of the environment if a configuration step affects the state
of the DUV in ways that are not automatically detected by the environment.
3.4.2 VE Monitor
The VE monitor for a system verification component serves these purposes:
• Providing a pointer to lower layer monitors (e.g., interface monitors)
• Verifying DUV end-to-end behavior
• Scoreboarding
Blocks in a system verification component must have access to events, status, and
objects collected by monitors in all lower layer verification components. This information is
necessary for generating scenarios that depend on careful timing of activities generated and
tracked across the verification environment. As such, the VE monitor in a system verification
component should act as a central place for providing a link to monitors in lower layer verifi-
cation components.
End-to-end behaviors add semantics beyond those defined at DUV ports. These seman-
tics relate to behavior defined for the current level of design integration. Such behaviors can-
not be checked at the interface level or at levels of abstraction below the one for the current
DUV. As such, the VE monitor for a system verification component should include checks for
all such new end-to-end behaviors.
The VE monitor in a system verification component is also the ideal place for score-
boarding, since this monitor has access to all lower layer verification components where data
objects are collected from the environment. Including the scoreboard in the monitor also
allows for easy reuse of the scoreboard when the DUV verified at this stage is integrated into
a larger system. More details on scoreboarding are provided in section 3.4.4.
3.4.4 Scoreboarding
Scoreboarding is used to check for the following potential problems as data objects are pro-
duced, consumed, and moved across a design:
• Data values being different than expected
• A packet being received when one is not expected
• A packet not received when one is expected
The monitors in interface verification components verify that all signal timings at DUV
ports match the specification. As such, data checking does not need to consider signal timing
and is done at the transaction level where all signal and transfer timing information has
already been removed from the collected data object or transaction. For example, when
scoreboarding an Ethernet packet injected and expected to be received at a switch port, the
exact signal timings of the packet traveling through design interfaces is assumed to be cor-
rect and only packet content as abstracted by the Ethernet format is scoreboarded.
Scoreboarding requires a reference model so that the expected outcome of an activity
can be predicted. The general architecture of scoreboarding is shown in figure 3.7. In this
view, the monitor on the input side collects a data object moving into the DUV. It then com-
putes the expected output of the DUV based on that input and inserts the expected result in
the scoreboard. The output monitor then checks every data object collected at the output
against those already in the scoreboard.
[Figure 3.7: Scoreboarding architecture: data objects going into the DUV are used to predict expected outputs, which are stored in the scoreboard; data objects coming out of the DUV are checked against the stored expectations.]
A scoreboard is essentially composed of a list of expected data objects where every data
object collected at the DUV output is compared against the next expected data object. The
matching of an actual object against an expected data object depends on a number of design
dependent factors. A generic scoreboard implementation should, however, provide the fol-
lowing functionality:
• Support ordered checks where order of expected and actual data objects should be the
same
• Support unordered checks where the collected data object may arrive out of order
• Allow for initial mismatches to be ignored since many designs may take some cycles
to synchronize
• Perform end-of-simulation checks to make sure the scoreboard is empty and all
expected data objects have been matched against an actual collected object
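A generic scoreboard offering these four capabilities might be sketched as follows. This is an illustration of the listed requirements, not the book's implementation:

```systemverilog
// Generic scoreboard sketch: ordered/unordered checking, tolerance for
// initial mismatches, and an end-of-simulation emptiness check.
class scoreboard #(type T = int);
  T   expected[$];          // queue of expected data objects
  bit ordered = 1;          // ordered vs. unordered checking
  int ignore_initial = 0;   // number of initial mismatches to tolerate

  // Called by the input-side monitor with the predicted output object.
  function void add_expected(T obj);
    expected.push_back(obj);
  endfunction

  // Called by the output-side monitor with each actual collected object.
  function void check_actual(T obj);
    if (ordered) begin
      if (expected.size() > 0 && expected[0] == obj)
        void'(expected.pop_front());
      else if (ignore_initial > 0)
        ignore_initial--;               // tolerate startup mismatch
      else
        $error("scoreboard: mismatch or unexpected object");
    end else begin
      int idx[$] = expected.find_first_index(e) with (e == obj);
      if (idx.size() > 0) expected.delete(idx[0]);
      else $error("scoreboard: object not found among expected");
    end
  endfunction

  // End-of-simulation check: every expected object must have matched.
  function void check_empty();
    if (expected.size() != 0)
      $error("scoreboard: %0d expected objects never matched",
             expected.size());
  endfunction
endclass
```

For class-type data objects, the `==` comparison above compares handles; a real environment would substitute a field-by-field compare function.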
PART 2
All about SystemVerilog
CHAPTER 4 SystemVerilog as a Programming Language
SystemVerilog features and constructs have been defined because of the specific require-
ments of the latest design and verification methodologies. As such, learning SystemVerilog
syntax and programming without intimate knowledge of the methodology behind its incep-
tion would be as unrewarding as flying a supersonic jet with the piloting skills of a glider
pilot. Familiarity with the verification methodology and architecture described in part one of
this book is a first step in understanding the motivation behind the introduction of new
SystemVerilog constructs. With this methodology in mind, features of SystemVerilog discussed
henceforth will present themselves not as mundane details of a programming language, but
as facilitating constructs for the implementation of a modern verification environment.
This chapter discusses SystemVerilog syntax, flow, and structure. In doing so, this chap-
ter provides an overview on how to write simple programs in SystemVerilog. Chapter 5
describes SystemVerilog features that are specially introduced or used to better facilitate
functional verification.
A block's internal state is represented by data values that the block holds; its functional-
ity is represented by threads of execution that manipulate these internal data values; its
dependence on other blocks is represented by its interconnection information; and its order
of execution with respect to other blocks is represented by its synchronization information.
Hardware description languages are essentially created to implement this generalized
view of a digital system. For example, in Verilog, threads of execution include always and
initial blocks, data interconnection (i.e., dependence) between processes are either explicitly
defined (i.e., using sensitivity lists) or implicitly derived by deciding what variables from
outside a process are used in its description, and synchronization information is either
derived implicitly (by propagating value changes as events) or explicitly by using wait state-
ments based on events or absolute time values.
[Figure: A generalized model of a digital system: two threads, each a program acting on its own data (Data1, Data2), linked through an interconnection.]
The power of a modeling language to represent complex systems using this generalized
model and the productivity it affords is directly proportional to the following elements:
• Ability to represent complex and user defined data types
• Ability to represent interconnections at different levels of abstraction
• Expressiveness of the procedural language used to model the program executed in
each thread and the interaction between threads
• Ease of building a hierarchical model
Even though Verilog is a good language for modeling a digital system consisting of
wires, registers, and modules that are connected through ports, it quickly runs out of steam
when more abstract models must be built. Specifically, Verilog lacks the ability to specify
abstract data types, interconnection between modules represented as anything but bits and
bytes, and higher level software programming techniques that facilitate better modeling of
each thread of execution and the synchronization between concurrent threads.
Structure of a SystemVerilog Program
Even though SystemVerilog adds many new features to Verilog, the structure of a System-
Verilog program remains essentially the same as that of a Verilog program. A SystemVerilog
program hierarchy consists of four fundamental entities:
• Module blocks
• Program blocks
• Procedural blocks (i.e., processes)
• Data objects
Module Blocks are used to model structural hierarchy of the design. In module-based
verification environments, module blocks are also used to model the hierarchy of the verifi-
cation environment (section 12.1). Module blocks can contain module blocks, procedural
blocks, and data objects. Program blocks are used to define a clear boundary between the
verification environment and the DUV (section 5.3). Program blocks can contain only type
and variable declarations and one or more initial blocks. Procedural blocks can contain pro-
cedural blocks and data objects. Data objects can contain only data objects.¹
The following program shows examples of these relationships:
1. Classes are special types of objects that combine related data values and procedural blocks. Classes will
be discussed in section 4.7.
The new user defined type twoval_t (line 1) is an example of a data object containing (or
being composed of) other data objects. Module mytop includes data objects (e.g., tv of type
twoval_t), modules (e.g., leaf1, an instance of module leaf), and procedural code in the form
of an initial block. Figure 4.2 shows a pictorial view of block relationships in this program
sample.
[Figure 4.2: Block relationships in the example program.]
• Literals
• Parameters
• Constants
• Variables
• Nets
• Attributes
Literals refer to explicit representation of a value in program code such as integer literal
1 or string literal "Arbiter". Parameters are values that can be individual1y assigned for differ-
ent instances of a module and as such, allow for customization of different instances when
building the module hierarchy. Parameters are taken into consideration during program elab-
oration (creation of the module hierarchy) before simulation starts, and as such cannot be
changed during simulation runtime. Constants are represented by named objects whose val-
ues don't change throughout the simulation runtime. Variables are named data objects whose
values can change during simulation runtime. Nets are similar to variables, but more closely
model a physical wire, since the new value of a net data object is decided by resolving the
newly assigned value with the previous value of the net (this is in contrast with variables
where last assignment always prevails). Attributes are used to attach properties to objects
(i.e., modules, instances, wires, etc.) for later retrieval.
A data object can be fully described by the following properties:
• Lifetime: When it is created and when it is destroyed
• Update mechanism: How its value is changed
• Type: The type of data it holds
These aspects are discussed in the following subsections.
2. A scope defines an enclosing context within which a data object is visible and can be used in expressions.
program, module, function, or task as either automatic or static by appending the appropriate
keyword to that procedural block (see section 4.4).
Classes and object-oriented programming is discussed in section 4.7. Other topics are
discussed in the following subsections.
Real constants:
• Decimal form: 14.55, 0.45
• Scientific form: 1.4e13, 0.55e-4 (exponent must be an integer)

String constants (quoted):
• "this is a text"
• "\n" (a newline character)
• "\t" (a tab character)
• "\\" (a \ character)
• "\"" (a quote character)
• "\ddd" (a character given by octal digits, each d < 8)

Table 4.1: SystemVerilog Constant Types and Syntax
Table 4.2 shows a summary of built-in data types available in SystemVerilog. The fol-
lowing observations apply to data types shown in this table:
• Types reg and logic are exactly the same. Type logic is introduced in SystemVerilog
in order to better reflect the usage of this data type.
[Table 4.2: Built-in data types, listing for each type its form, values, default signedness, packed size, and whether it is new in SystemVerilog.]
• Types bit, logic, and reg can have packed sizes of any dimension (see section
4.3.3.3.1).
• Types byte, shortint, int, longint, and integer are signed by default but types bit,
logic, and reg are unsigned by default. To change from the default, keyword signed
or unsigned must be used when declaring variables of this type.
• Some built-in data types can be described by using other data types. For example,
byte is the same as "bit signed [7:0]" and integer is the same as "logic signed [31:0]".
The following program shows example declarations for these data types.
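The referenced listing is not reproduced in this text; a minimal sketch of such declarations might read:

```systemverilog
// Sketch of built-in type declarations and their default properties.
module type_decls;
  byte           b;    // 8-bit, two-state, signed by default
  byte unsigned  ub;   // signed default overridden with keyword unsigned
  shortint       s;    // 16-bit, two-state, signed
  int            i;    // 32-bit, two-state, signed
  longint        l;    // 64-bit, two-state, signed
  integer        j;    // 32-bit, four-state, signed (logic signed [31:0])
  bit   [15:0]   bw;   // two-state, unsigned by default, packed size 16
  logic [7:0]    lg;   // four-state, unsigned by default
  reg   [7:0]    rg;   // exactly the same type as logic [7:0]
endmodule
```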
the preceding named constant, with 0 assumed if it is the first named constant in the
list.
• Values x and z can be assigned to named constants if data type is four-valued (e.g.,
integer). In this case, the value for the named constant immediately following this
named constant must be explicitly specified.
SystemVerilog provides methods for iterating over the named constants of an enumer-
ated type. These methods are:
function enum first();
function enum last();
function enum next(int unsigned N = 1);
function enum prev(int unsigned N = 1);
function int num();
function string name();
Function first() returns the first named constant in the enumerated type declaration,
last() returns the last named constant, next() and prev() return the next and previous
named constants respectively, num() returns the number of named constants, and name()
returns a string containing the name of a named constant.
An example of using enumerated type and its functions is shown below.
As shown in the output, the constant value assigned to europe is 2, which is the next
value to that assigned to the previous named constant asia. Similarly, the constant value
assigned to america is 9.
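The example listing itself does not appear above. A sketch consistent with the described output (asia explicitly set to 1 so that europe becomes 2, and a constant set to 8 so that america becomes 9; the explicit values and constant names beyond those mentioned in the text are assumptions) is:

```systemverilog
module enum_example;
  // asia = 1 makes europe take the next value (2); africa = 8 makes
  // america take the next value (9).
  typedef enum {asia = 1, europe, africa = 8, america} continent_e;
  continent_e c;

  initial begin
    c = c.first();                         // first named constant: asia
    $display("%s = %0d", c.name(), c);     // asia = 1
    c = c.next();
    $display("%s = %0d", c.name(), c);     // europe = 2
    c = c.last();
    $display("%s = %0d", c.name(), c);     // america = 9
    $display("num = %0d", c.num());        // num = 4
  end
endmodule
```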
4.3.3.3 Arrays
SystemVerilog provides the following array data types:
• Static arrays
• Dynamic arrays
• Associative arrays
• Queues
76 SystemVerilog as a Programming Language
Static arrays are multidimensional arrays whose size is explicitly specified when declar-
ing the array. Dynamic arrays have one or more dimensions with one dimension whose size
is undefined during declaration and is decided when creating the array. Associative arrays
allow access to array elements using a key that can have any data type. Queues are used to
model an ordered set of elements. These array types are described in the following subsec-
tions.
Dimensions specified before array-name are packed dimensions, while dimensions
specified after array-name are unpacked dimensions. The following guidelines apply to static
arrays:
• A single packed word is used to store all packed dimensions of an array.
• Array elements are accessed as shown below. As such, packed dimensions change
more frequently than unpacked dimensions.
array-name[URange1]...[URangeM][PRange1]...[PRangeN]
• An array dimension range can be specified as [index1:index2] or as [Number], where
[Number] is equivalent to [0:Number-1].
• If an array is declared as signed, then the memory word containing all of the packed
dimensions is assumed to be a signed value. A slice of a packed dimension is not a
signed value.
• Packed dimensions can only be specified for arrays of type bit, logic, reg, and wire.
Unpacked dimensions can be specified for any type.
Arrays can be accessed using the following mechanisms:
• Reading and writing the whole array
• Reading and writing a slice of array
• Reading and writing a variable slice of array (identified by another variable)
• Reading and writing an array element
• Equality operations on array or slice of array
It is important to note that dimension part-selects (slice spanning multiple elements of a
dimension) can be used only with packed dimensions. Examples of these access types are
shown below:
The above program shows examples of a whole array assignment (line 10), array slice
assignment (lines 13, 16), array variable slice assignment (line 19), and array compare oper-
ations (lines 11, 14, and 17). Dimension order for array element access is shown on lines 3
and 4.
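Since the referenced listing is not reproduced here, the access types can be sketched as follows (a reconstruction; line numbers differ from the original):

```systemverilog
module array_access;
  bit [3:0][7:0] a, b;       // two packed dimensions in one 32-bit word
  bit [7:0]      mem [0:3];  // one unpacked dimension
  int            i = 1;

  initial begin
    a = 32'hDEAD_BEEF;             // whole-array assignment
    b = a;                         // whole-array copy
    if (a == b) $display("equal"); // equality operation on whole array
    a[1:0]    = 16'hBEEF;          // slice of a packed dimension
    a[i +: 2] = 16'h1234;          // variable slice: 2 elements from index i
    mem[2]    = a[3];              // element access: unpacked and packed
  end
endmodule
```

Note that the part-selects above are legal only because they span a packed dimension; a slice across an unpacked dimension would be rejected at compile time.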
SystemVerilog provides the following system defined methods for accessing array ele-
ments: $left(), $right(), $low(), $high(), $increment(), $size(), and $dimensions(). See the
SystemVerilog reference manual for details of these functions.
SystemVerilog provides the following operators and methods for creating and interact-
ing with dynamic arrays:
• Operator new
• Function size()
• Function delete()
Operator new is used to allocate a dynamic array. Accessing a dynamic array before
memory is allocated results in a runtime error. Function size() returns the size of the
unpacked dimension of the array, which is the size specified when the array was allocated.
Function delete() is used to reset the array size to 0.
The rules of access for dynamic arrays (i.e., reading and writing slices, etc.) are the
same as those for static arrays once the size of a dynamic array is known. Since this size is
only decided at program runtime, dynamic array access can lead to a runtime error message
while the same error types are checked at compile time for static arrays.
The following program code shows an example of dynamic arrays:
Operator new is used (lines 5, 7) to allocate dynamic arrays of size 10 and 20 for arr1 and
arr2. Operator new allows an array to be specified for initializing the allocated array. This is
shown on line 7 where elements of arr1 (10 elements) are used to initialize the first 10 ele-
ments of arr2.
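The original listing is not reproduced in this text; a sketch consistent with the description (line numbering differs) is:

```systemverilog
module dyn_array;
  int arr1[];   // dynamic array: unpacked size left undefined
  int arr2[];

  initial begin
    arr1 = new[10];               // allocate 10 elements
    foreach (arr1[i]) arr1[i] = i;
    arr2 = new[20](arr1);         // allocate 20; first 10 copied from arr1
    $display("%0d %0d", arr1.size(), arr2.size());  // 10 20
    arr2.delete();                // size reset to 0
    $display("%0d", arr2.size()); // 0
  end
endmodule
```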
In this syntax, data_type is the type of each array element, which can be any type allowed
for static and dynamic arrays; array-name is the name of the associative array being declared;
and key-type gives the data type of the keys used to access array elements.
Examples of associative array declaration, assignment, and usage is shown in the fol-
lowing program:
First, associative array aa5 is initialized with 5 members (line 6). Function num() is used
to print the number of elements in this array, yielding a value of 5 (line 7). Function delete()
is used to delete the array member stored at key 3 (line 8), after which function num() returns a
value of 4 (line 9), and function call aa5.exists(3) returns a value of 0 (line 10). Function first()
is used to place the first key of aa5 (value 0) into variable aakey (line 11). Function next() is
used iteratively to print consecutive keys of array aa5, which prints the sequence (1, 2, 4)
(line 12). Function last() is used to assign the last key of array aa5 (value 4) to variable
aakey (line 13). Function prev() is then used iteratively to assign the previous key to
variable aakey and print the result, producing the output (2, 1, 0). Function delete() is
used to delete all elements of array aa5 (line 15).
An associative array can only be assigned to another associative array having the same
key and member data types.
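The described sequence of operations can be sketched as follows (the member values are assumptions; the keys 0 through 4 and the printed results come from the description above):

```systemverilog
module assoc_array;
  int aa5 [int];   // associative array keyed by int
  int aakey;

  initial begin
    aa5 = '{0:10, 1:11, 2:12, 3:13, 4:14};          // five members
    $display("%0d", aa5.num());                     // 5
    aa5.delete(3);                                  // remove member at key 3
    $display("%0d %0d", aa5.num(), aa5.exists(3));  // 4 0
    void'(aa5.first(aakey));                        // aakey = 0 (first key)
    while (aa5.next(aakey)) $write("%0d ", aakey);  // 1 2 4
    void'(aa5.last(aakey));                         // aakey = 4 (last key)
    while (aa5.prev(aakey)) $write("%0d ", aakey);  // 2 1 0
    aa5.delete();                                   // remove all members
  end
endmodule
```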
4.3.3.3.4 Queues
A queue is an ordered collection of homogeneous (i.e., all having the same data type) ele-
ments. Queue elements can be accessed in constant time.³ In addition, a queue can be grown
or shrunk on both ends in constant time. The SystemVerilog queue construct is suitable for
implementing FIFO (first-in-first-out) and stack (first-in-last-out) data structures.
Queues are declared using the following syntax:
data_type queue_name[$];
data_type queue_name[$:max_size];
In this syntax, data_type is the type of each queue element which can be any type
allowed for static and dynamic arrays, queue_name is the name of the queue being declared,
and max_size (if provided) gives the maximum number of elements allowed in the queue.
SystemVerilog defines a set of predefined methods for manipulating and accessing queues. The function prototypes for these methods are:
function int size(); // returns the number of elements in the queue
function void insert(int index, queue_type item); // inserts an element at position index
3. In this context, constant time refers to the computational effort required to process an entry. For example, accessing an element in a linked list may in the worst case require that all linked-list elements be traversed. Meanwhile, an array element can be accessed using its index. Therefore, accessing array elements can be done in constant time, but access time for linked-list elements is proportional to the size of the linked list.
Data Types and Objects in SystemVeriiog 81
This example shows the declaration of unbounded queue intq and a bounded queue (lines 2-3). Predefined queue functions are then used to add, remove, and access queue elements (lines 7-19). The contents of the queue after each operation are shown as a comment on each line. Note that the first element shown in the contents of the queue is at index 0. Accessing a queue element at a non-existing index (line 18) returns the default value for the type of the queue element. Using function delete() to delete a member at a non-existing index results in a runtime warning message. Using function insert() to insert a value at a negative index or beyond the last element of the queue also results in a runtime warning message.
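The queue listing itself is missing from this extraction; a minimal sketch of the kind of operations described, using the name intq from the text and hypothetical values, is:

```systemverilog
module q_example;
  int intq [$];      // unbounded queue of int
  int bq   [$:4];    // bounded queue holding at most 5 elements (indexes 0 to 4)

  initial begin
    intq.push_back(1);         // intq: {1}
    intq.push_back(2);         // intq: {1, 2}
    intq.push_front(0);        // intq: {0, 1, 2}
    intq.insert(1, 9);         // intq: {0, 9, 1, 2}
    $display(intq.size());     // 4
    void'(intq.pop_front());   // intq: {9, 1, 2}
    void'(intq.pop_back());    // intq: {9, 1}
    intq.delete(0);            // intq: {1}
    $display(intq[5]);         // non-existing index: returns the default value (0 for int)
  end
endmodule
```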
A packed struct that has any four-state member is stored in a four-state memory word.
Otherwise, the packed struct is stored in a two-state memory word.
Use of packed structs is shown in the following example:
This example shows a packed struct declaration that defines instruction IR. Each opcode is four bits (line 3). The packed struct, therefore, is 16 bits. Members of a packed struct can be accessed either through member names or through their relative position in the packed memory word storing the struct data. As shown, IR.opcode1 is assigned on line 10, and then IR[15:12] is checked for correct assignment on line 11. Variable IR is stored as 2-valued data. If, however, one of the struct members (lines 5-7) were 4-valued, then IR would be stored as 4-valued data.
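The listing described above is not reproduced in this extraction; a sketch consistent with the description (the member names other than opcode1 are assumptions) might be:

```systemverilog
module ps_example;
  // four 4-bit opcode fields packed into a 16-bit word
  struct packed {
    bit [3:0] opcode1;  // bits [15:12]
    bit [3:0] opcode2;  // bits [11:8]
    bit [3:0] opcode3;  // bits [7:4]
    bit [3:0] opcode4;  // bits [3:0]
  } IR;

  initial begin
    IR.opcode1 = 4'b1010;        // access by member name
    if (IR[15:12] == 4'b1010)    // access by position in the packed word
      $display("opcode1 set correctly");
  end
endmodule
```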
4.3.4 Operators
SystemVerilog provides operators from both Verilog and C. SystemVerilog operators follow the semantics of operators in Verilog. SystemVerilog operators are shown in Table 4.3. The following notes apply to operators listed in this table:
Bitwise: AND (I1 & I2), OR (I1 | I2), XOR (I1 ^ I2), XNOR (I1 ~^ I2)
Shift: unsigned left/right (I1 << I2, I1 >> I2), signed left/right (S1 <<< I2, S1 >>> I2)
Relational: greater (E1 > E2), greater or equal (E1 >= E2)
Assignment: = += -= *= /= %= &= |= ^= <<= >>= <<<= >>>=

I: An expression producing an unsigned integer    J: A variable of integral data type
S: An expression producing a signed integer       V: A variable of any type
R: An expression producing a real value           C: An unsigned integer constant
E: One of I, S, R

Table 4.3: SystemVerilog Operators (excerpt)
ments as don't care values. These operators only return a value of 0 or 1.
• Note 5: If the arguments to the bitwise operators are not the same size, then the shorter argument is zero-filled in the most significant bit positions.
• Note 6: Reduction operators return a 0, 1, or x depending on the individual bit values of their argument. Value z is treated the same as x when combining individual bit values.
• Note 7: If the condition expression for a conditional select operator evaluates to an x, then the result is computed by a bitwise combination of both its arguments, with the exception that a value of 0 is returned if any of its arguments has a real data type. The bitwise combination returns an x for any bit position if the values of the arguments in that bit position differ or are z.
Lines 7 and 8 show examples of non-blocking and blocking assignments. These types of assignments have been covered extensively in introductory books on Verilog. SystemVerilog introduces increment and decrement operators, which assign the current value plus one or the current value minus one to a variable, respectively.
Procedural continuous assignments include assign, deassign of variables and force and
release of variable and net data types. The procedural continuous assignment has been
placed on the deprecated list in SystemVerilog and is expected to be removed from the lan-
guage in future releases.
4.4.2.1 Functions
Functions have a list of arguments and return a value. Functions execute in zero simulation
time, which means that time control statements cannot be used inside a function.
Function arguments can be one of:
• input
• output
• inout
• ref
The semantics of each argument type are as follows: For input argument types, the value of the argument is copied from the caller's context upon entering the function. The value of an output argument type is copied upon completion of the function from inside the function to the actual variable specified in the caller's context. The value of an inout argument type is copied twice: once upon entering the function and once upon leaving it. No value is copied when using a ref argument type; both the function and its caller use a reference to the data object passed in the argument. Therefore, any change that a function makes to the value of a ref type argument is immediately visible outside the function. Both inout and ref qualifiers cause changes to the argument value made inside the function to become visible in the caller's context, with the difference that changes made inside a function to a ref type argument are immediately visible in the caller's context, but changes made inside a function to an inout type argument become visible only after the function returns.
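The ref/inout visibility difference described above can be sketched as follows (a minimal illustration with hypothetical function names):

```systemverilog
module arg_example;
  int shared = 0;

  // ref argument: the change is visible in the caller immediately
  function automatic void bump_ref(ref int x);
    x = x + 1;
    $display("inside bump_ref, shared = %0d", shared);   // already updated
  endfunction

  // inout argument: copied in on entry, copied back only on return
  function automatic void bump_inout(inout int x);
    x = x + 1;
    $display("inside bump_inout, shared = %0d", shared); // still the old value
  endfunction

  initial begin
    bump_ref(shared);    // shared becomes 1 during the call
    bump_inout(shared);  // shared becomes 2 only after the call returns
    $display(shared);    // 2
  end
endmodule
```

Note that a function with a ref argument must be declared automatic.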
A function can be declared as automatic or static. All variables local to an automatic function are assumed to be automatic variables unless explicitly changed for each variable in its declaration. All variables inside a static function are assumed to be static variables unless explicitly changed for each variable in its declaration. The values of static variables are maintained across calls to the same function. SystemVerilog allows individual variables declared inside a static function to be marked as automatic. It also allows individual variables declared inside an automatic function to be marked as static variables.
The following program shows an example of a function declaration and its usage.
39 end
40 ,
endmodule
~-------------------
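The original listing survives only as a fragment; a minimal sketch of a function declaration contrasting static and automatic lifetimes (function names are hypothetical) is:

```systemverilog
module fn_example;
  // static function: an explicitly static local retains its value across calls
  function int next_count();
    static int count = 0;
    count++;
    return count;
  endfunction

  // automatic function: locals are re-created on every call
  function automatic int square(input int x);
    int result;   // automatic lifetime
    result = x * x;
    return result;
  endfunction

  initial begin
    $display(next_count());  // 1
    $display(next_count());  // 2
    $display(square(3));     // 9
  end
endmodule
```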
4.4.2.2 Tasks
Tasks and functions are very similar, except in the following ways:
• A task does not return a value
• A task may have time-consuming statements
The following program sample highlights these differences.
In this example, task twoval_increment (lines 2-12) does not have a return value, and assignments performed inside the task have an associated delay value (lines 9, 10). Other than return-value behavior, tasks have the same properties as those defined for functions.
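The task listing is not reproduced in this extraction; a sketch matching the described behavior (the argument names are assumptions) is:

```systemverilog
module task_example;
  int a = 1, b = 2;

  // unlike a function, a task may contain time-consuming statements
  task twoval_increment(inout int x, inout int y);
    #5 x = x + 1;   // delayed assignment: illegal inside a function
    #5 y = y + 1;
  endtask

  initial begin
    twoval_increment(a, b);   // consumes 10 time units
    $display("%0t: a=%0d b=%0d", $time, a, b);
  end
endmodule
```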
Priority and unique forms of if-else statements are closely related to their hardware implementation. The implementation of a priority if-else statement contains special priority-decoding hardware, while the hardware implementation of a unique if-else statement does not require such special hardware, since at most one of the condition predicates is true.
Examples of these forms of if-else statements are shown below:
1 int a, b, c;
2
3 if (a == 1) begin
4 $display("a is one");
5 $display("and b is", b);
6 end
7 else if (a ==
2) $display("a is two");
8 else $display("a is neither 1 nor 2");
9
10 priority if (a ==
1) $display("a is 1");
11 else $display("a is not 1");
12
13 unique if (a ==
1) $display("a is 1");
14 else $display("a is not 1");
1 int a, b, c, d;
2 case (a**2) // can be written as "priority case" or "unique case"
3   c,(b+33): $display("a**2 is the same as either c or b+33");
4   d+5: $display("a**2 is the same as d+5");
5   9: $display("a**2 is 9");
6   default: $display("a is none of the above");
7 endcase
The case expression for the above example is (a**2). The case item expressions are (c,(b+33)), (d+5), and (9). Keyword default is used to specify a statement for conditions where none of the case item expressions match the case expression.
Each case item expression is matched bitwise against the case expression. This means that a case item expression does not match if any of its bit values fails a 4-valued (i.e., 0, 1, x, z) comparison with the corresponding bit in the case expression. SystemVerilog also provides the special case statements casex and casez to relax the matching requirements. In a casez statement, value z is assumed to be a don't care and matches any bit value. In a casex statement, both x and z are assumed to be don't cares and match any bit value.
Note that for each pass through the case statement, the probability for each case item
may be different depending on the current values for the variables used for evaluating case
labels. As such, constants should be used as labels if a static probability distribution is
needed.
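A small sketch of the relaxed matching provided by casez (the signal name and values are hypothetical) is:

```systemverilog
module casez_example;
  logic [3:0] addr = 4'b1010;

  initial begin
    casez (addr)
      4'b1???: $display("high half of the address space"); // ? (z) matches any bit
      4'b01??: $display("second quarter");
      default: $display("lower quarter");
    endcase
  end
endmodule
```

With addr = 4'b1010, the first item matches and "high half of the address space" is printed.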
then this statement has no effect. If the block is a loop body, the disable statement acts the
same as a continue statement.
Examples of loop constructs are shown in the following example:
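The loop listing itself is missing from this extraction; a minimal sketch covering the common SystemVerilog loop constructs (the variable names are hypothetical) is:

```systemverilog
module loop_example;
  int sum;
  int arr [4] = '{1, 2, 3, 4};

  initial begin
    for (int i = 0; i < 4; i++) sum += arr[i];  // for loop: sum becomes 10
    foreach (arr[i]) $display(arr[i]);          // foreach iterates over array indexes
    repeat (2) $display("repeat body");         // fixed repetition count
    while (sum > 0) sum -= 2;                   // while loop: runs until sum is 0
    do sum++; while (sum < 3);                  // do-while: body executes at least once
  end
endmodule
```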
Note that a delay value is included before each display statement so that all threads started by the fork statement become idle until their next execution time, giving other threads the opportunity to execute.
The above example shows a module declaration having ports of type input, output, and
inout (line 3). This module uses a composite data structure tv as an input port (line 3).
• A ref port can only be connected to an equivalent variable data type both inside the
module and when the module is instantiated. A ref port cannot be left open. Access to
a ref port is equivalent to a hierarchical reference.
Connection rules for ports having a net data type are as follows:
• An input port can be driven on the outside by any expression of compatible type, where the expression can be composed of variables or nets. An input port can also be assigned inside the module. If an input port is left unconnected, then its value is set to Z (tristated).
• An output port can be driven from inside the module by multiple continuous assignments. An output port can be connected on the outside to either a variable or a net of compatible type. If the output port is connected on the outside to a variable data type, then no other procedural or continuous assignments are allowed for that variable. If it is connected to a net, then other continuous assignments are allowed for that net, but procedural assignments are not allowed.
• An inout port can only be driven from inside the module by a net data type. An inout
port of an instance can only be connected to an actual net data type.
Figure 4.3 Interface between a Memory and a CPU (modules cpu_core and mem_core connected through signals wen, ren, addr, wdata, rdata, and clk, with a wired-or status net)

Figure 4.4 Interface between a Memory and a CPU (using an interface block)
• Variables
• Tasks and functions
• Processes (always, initial blocks)
• Continuous assignments
A port list can be defined for an interface block. An interface can be instantiated inside
other interfaces (leading to hierarchical interfaces) and other modules. Modules cannot be
instantiated inside interfaces.
Interfaces can contain tasks and functions, thereby allowing an interface to be modeled at an abstract level. This also allows protocol checking routines to be included in the interface description.
Both the declaration and instances of interfaces can be customized for different contexts. Parameters allow different instances of the same interface declaration to be customized when an instance is being created. The declaration of an interface can be customized by declaring only the function or task prototypes (the list of arguments) in an interface and defining the body in other modules. A module header can use a generic interface when the interface contents are not yet defined, allowing module implementations to proceed without prior exact knowledge of the interface contents or the composition of its signals.
The following SystemVerilog program shows the implementation of an interface block in SystemVerilog:
34
35 assign mb.status = 0;
36
37 always @(negedge mb.ren) mb.reply_read(mem[mb.addr], 100);
38 endmodule
39
40 module cpu_core (membus.master mb);
41
42 assign mb.status = 0;
43
44 initial begin
45 logic [7:0] read_data;
46 mb.read_memory (7'b00010000, read_data);
47 $display("Read Result", $time, read_data);
48 end
49 endmodule
50
51 module top;
52 wor status;
53 logic clk = 0;
54 membus mb(clk, status);
55
56 mem_core mem (.mb(mb.slave));
57 cpu_core cpu (.mb(mb.master));
58
59 initial for (int i = 0; i <= 255; i++) #1 clk =!clk;
60 endmodule
This example defines interface block membus which has two ports corresponding to sig-
nals elk and status (line 1). Interface block membus includes a list of data objects that corre-
sponds to signals passing through the interface and connecting different modules (lines 2-7).
This interface also includes procedural descriptions, allowing each module attached to it to
drive the interface signals through procedure calls (lines 9-26). In addition, membus includes
modport declarations specifying the port directions for different types of modules that can be
connected by this interface (lines 28, 29). These modport types are then used when declaring
the port list of modules that will be connected by this interface (lines 32, 40) and also when
connecting instances of modules and interfaces (lines 56, 57). Note that modules connecting
to this interface can use the tasks provided inside the interface to interact with the interface as
shown in this example (lines 37, 46). These modules can also directly drive the signals inside the interface (lines 35, 42). This feature allows the interface to be driven both structurally and procedurally.
4.5.3 Parameters
Often, it is necessary to customize different instances of the same object in a program's
object hierarchy (i.e., memory size inside a memory module). SystemVerilog provides the
following keywords for this purpose:
• parameter
• localparam
• specparam
Parameters are used to perform customizations for the following SystemVerilog blocks and constructs:
• Module
• Interface
• Program
• Class
Parameters have the following properties:
• Each parameter must be assigned a default value when declared.
• Parameters can be defined to have any data type. Parameters defined with no data type default to type logic of arbitrary size.
• Parameters are set during elaboration (creating the instance hierarchy) and remain the same throughout the program execution.
• Data types can also be defined as parameters so that, for example, data objects in different instances of the same module declaration can have different data types.
• A parameter of integer data type can be assigned a value of "$".
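The properties above can be sketched with a parameterized module (the module and parameter names are hypothetical):

```systemverilog
// a memory module customized per instance through parameters
module mem #(parameter int WIDTH = 8,          // default value required
             parameter int DEPTH = 256,
             parameter type word_t = logic [7:0]);  // a data type as a parameter
  word_t storage [DEPTH];
  localparam int ADDR_BITS = $clog2(DEPTH);    // derived locally, not overridable
endmodule

module top;
  mem #(.WIDTH(16), .DEPTH(1024)) big_mem ();  // overrides set at elaboration
  mem                             small_mem (); // uses the defaults
endmodule
```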
The start and end times of static processes are determined at program start time. The creation of dynamic processes and their termination conditions can be decided during program runtime. Static and dynamic processes are described in the following subsections.
tion or modification, thereby providing predictable ways in which the data inside an object can be modified.
• Ability to handle objects of different types (i.e., different data structures) uniformly
(see section 4.7.4 on polymorphism).
• Ability to better deal with complex systems by modeling system components as sep-
arate objects.
SystemVerilog provides the class construct for building objects based on the object-ori-
ented programming paradigm. The main flow in object-oriented programming is as follows:
1. Create a class containing properties (i.e., data objects) and methods (functions and
tasks).
2. Optionally, create an extension of a previously defined class. This step will either
define new properties and methods, or redefine the previous definition of the class
that it extends.
3. Declare pointers that hold the address for instances of classes defined in the previous
step.
4. Create instances whose address are stored in pointers declared in step 3.
Figure 4.5 shows a graphical view of this process. In this example, a base class C0 is first defined. Classes C1, C2, and C3 are defined by extending class C0, and class C4 is defined by extending class C3. Variables i1-i10 are pointers to classes of types C1-C4, where each pointer points at a memory location containing an object instance of that type. Note that a pointer can have the value null.
In this example, class packet is created by declaring a class and its contents (lines 2-9). Class packet is then extended to declare a new class bigger_packet, which includes a new data item (data2) and redefines data item flag and method my_print() (lines 11-18). In this example, packet is the base class, or super-class, and bigger_packet is the subclass, or derived class.
Pointers p1 and p2 to class packet are declared on line 20. Pointers b1, b2, and b3 to class bigger_packet are declared on line 21.
The SystemVerilog new construct is used to create an instance of a class and to assign it to a pointer. A new instance of class packet is assigned to p1 on line 24. Pointer p2 is set to point at the same object that p1 points to on line 26. Note that a new instance is not created; rather, both p1 and p2 point at the same instance.
Line 29 creates a new instance of class bigger_packet and sets b1 to point to that instance. Pointer b3 is set to point at the same instance as the one pointed to by b1. A new instance of class bigger_packet is created on line 33, and the contents of b1 are copied into this new instance. Note that this is a shallow copy, where the class objects pointed to by members of instance b1 are not duplicated; only their pointers are copied.
The memory used for an instance of a class object is freed when no pointers point at that object. This memory reclamation is handled automatically through garbage collection mechanisms built into the program execution engine; as such, memory leaks are not a concern when programming in SystemVerilog. In this example, the object created on line 29 is freed after executing line 36, since both pointers pointing to it have been set to a different value. Similarly, the memory associated with the object created on line 33 is freed once the only pointer pointing to it is set to a different value.
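The packet/bigger_packet listing is not reproduced in this extraction; a minimal sketch consistent with the description (member contents are assumptions) is:

```systemverilog
class packet;
  int data1;
  bit flag;
  function void my_print();
    $display("packet: %0d %0b", data1, flag);
  endfunction
endclass

class bigger_packet extends packet;
  int data2;                  // new property added in the subclass
  function void my_print();   // redefines the base class method
    $display("bigger_packet: %0d %0d", data1, data2);
  endfunction
endclass

module class_example;
  packet p1, p2;
  bigger_packet b1, b2;

  initial begin
    p1 = new;      // create an instance
    p2 = p1;       // p2 points at the same instance as p1
    b1 = new;
    b2 = new b1;   // shallow copy: class handles inside b1 are not duplicated
  end
endmodule
```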
18   #4 p.my_print($time); // prints 5 1 10
19   #8 p.my_print($time); // prints 13 5 10
20 end
21 endmodule
In this example, property data1 of class packet is defined to be static. This means that this property can be accessed even without creating an instance, and through a null pointer (line 16). In addition, method my_print() of class packet is declared to be a static method (first static keyword on line 6). Static methods can access only static properties of a class, as is the case for method my_print() accessing property data1 of class packet. Note that variables inside method my_print() are also declared to have static lifetime by default, as indicated by the second use of keyword static on line 6.
5. A method prototype refers to its return type, name, list of arguments, and each argument type.
These special features for managing the definition of a class hierarchy are described in
the following subsections.
12   endfunction
13 endclass: packet
14
15 packet pp;
16 initial begin
17 pp = new;
18 $display(pp.size(4));
19 end
20 endmodule
In this example, base class base_packet is defined as a virtual class. It includes data member data1 and method size(). Class packet is derived from class base_packet, adds a new data member crc, and completely redefines method size(). A pointer to class packet is declared on line 15 and an instance is created on line 17. An attempt to construct an instance of base_packet (for example, by using base_packet instead of packet on lines 15 and 17) would lead to a compilation error, since an instance of a virtual class cannot be created.
In this example, class base_packet contains a generic declaration of function send(). This function is not marked as virtual and, therefore, the redefinitions of this function in any derived class may have a different function prototype⁶. Function send() is redefined in derived class derived_packet with a different function prototype and as a virtual function. The redefinition of function send() in any class derived either directly or indirectly from class derived_packet must have the same function prototype as the one in class derived_packet. This is shown in the redefinitions of this function in derived classes derived_packet1 and derived_packet2. Using keyword virtual for the function redefinition in class derived_packet2 (line 15) is redundant, since all function redefinitions in all derived classes of derived_packet must use its prototype definition of function send().

6. Method prototype or method signature is given by the method name, return type if any, and argument name(s), direction(s), and type(s).
In this example, function size() is marked as an extern function in class packet (line 5). The body of this function is defined later, outside the class block (lines 8-10). The class scope resolution operator "::" is used to identify the class containing the original method prototype declaration.
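The extern listing is not reproduced in this extraction; a minimal sketch of an extern method with an out-of-block body (member names are assumptions) is:

```systemverilog
module extern_example;
  class packet;
    int payload_bytes;
    extern function int size(int header_bytes);  // prototype only
  endclass

  // body defined outside the class block, located via the "::" scope operator
  function int packet::size(int header_bytes);
    return payload_bytes + header_bytes;
  endfunction

  initial begin
    packet p = new;
    p.payload_bytes = 60;
    $display(p.size(4));  // prints 64
  end
endmodule
```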
In this example, class pkt shows a simple example of using parameters to define a packet whose fields can be customized to be of different type and size. This class defines parameters T and S, each with a default value. Specialization pkt #(bit,12) (line 12) defines a class that contains a bit array of size 12. Specialization pkt #(int), using the default value for parameter S, defines a class that contains an int array of size 10.
Parameterized classes can be extended. This feature is shown in the definition of class double_pkt (lines 8-10). This class defines parameters T1, T2, and S. It uses parameters T1 and S to specialize the base packet it is extending, and it uses parameters T2 and S to define field pkt2, whose type is class pkt specialized with parameters T2 and S.
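A sketch consistent with the description above (reconstructed; the original listing is not reproduced here) is:

```systemverilog
class pkt #(type T = int, int S = 10);
  T field [S];
endclass

// extending a parameterized class, specializing the base with T1 and S
class double_pkt #(type T1 = int, type T2 = int, int S = 10)
      extends pkt #(T1, S);
  pkt #(T2, S) pkt2;   // field of type pkt specialized with T2 and S
endclass

module param_class_example;
  pkt #(bit, 12) p1;   // bit array of size 12
  pkt #(int)     p2;   // int array of default size 10
endmodule
```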
In this program, B_ptr is a two-element unpacked array of pointers to class objects of type B. Because of polymorphism, however, B_ptr can be used to point to class objects of types D1 and D2, since these classes are derived from B (lines 21, 22). When accessing these objects, B_ptr[0] will behave as if it were a pointer to D2, and B_ptr[1] will behave as if it were a pointer to D1. As shown in this example, this type of polymorphism allows an array of pointers to hold objects of diverse types.
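A minimal sketch of this arrangement (the class bodies are assumptions; only the B/D1/D2 names come from the text) is:

```systemverilog
class B;
  virtual function void who(); $display("B"); endfunction
endclass

class D1 extends B;
  virtual function void who(); $display("D1"); endfunction
endclass

class D2 extends B;
  virtual function void who(); $display("D2"); endfunction
endclass

module poly_example;
  B B_ptr [2];   // array of base class handles

  initial begin
    D1 d1 = new;
    D2 d2 = new;
    B_ptr[0] = d2;    // base handle pointing at a D2 object
    B_ptr[1] = d1;    // base handle pointing at a D1 object
    B_ptr[0].who();   // prints "D2" via the virtual method
    B_ptr[1].who();   // prints "D1"
  end
endmodule
```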
18   endfunction
19
20   function byte get_crc();
21     compute_partial_crc();
22     compute_crc();
23     get_crc = crc + get_partial_crc();
24   endfunction
25 endclass: packet
26
27 packet p;
28
29 initial begin
30   p = new;
31   $display(p.get_crc());
32 end
33 endmodule
In this example, property partial_crc is only visible and editable inside the base class base_packet. As such, methods compute_partial_crc() and get_partial_crc() are provided for updating and accessing this data value. Property crc is visible inside the derived class packet but not outside this class. As such, methods compute_crc() and get_crc() are provided in class packet for accessing the combined crc value. Neither partial_crc nor crc can be accessed outside these class definitions.
CHAPTER 5 SystemVerilog as a Verification Language
The scheduling semantics of SystemVerilog are extended in order to allow for scheduling of new language constructs (e.g., property evaluations) while maintaining backward compatibility with Verilog. Cycle-based verification approaches are facilitated through the
introduction of clocking blocks that allow for automation of sampling and driving of design
signals with respect to sampling clocks. Separation between testbench and design is facili-
tated by the introduction of program blocks. Inter-process synchronization is enhanced
through the introduction of mailboxes and semaphores, thereby simplifying the interaction
between independently running processes. SystemVerilog also provides new constructs for
constrained random generation, property and assertion specification, and coverage collec-
tion.
The scheduling semantics of SystemVerilog are fully backward compatible with the scheduling semantics of Verilog. However, these semantics have been extended to provide support for the new constructs in SystemVerilog (e.g., program blocks, sequence evaluation). A good understanding of the scheduling semantics of SystemVerilog facilitates a better understanding of the detailed operation of these new language constructs, and will help in avoiding potential programming pitfalls.
The semantics of SystemVerilog have been defined for event-driven simulation. A pro-
gram consists of threads of execution (e.g., process blocks, concurrent statements, forked
processes) that assign new values to data objects. A thread of execution is first started either
statically at simulation start time or dynamically by using fork-join statements (section 4.6).
A thread of execution is suspended either when its execution reaches the end of a block or by an explicit wait for an event or delay period (e.g., "@rcv_pkt;" waits until event rcv_pkt is triggered; "#3ns;" waits for 3 ns). The evaluation of a sleeping thread of execution is restarted either when an explicit sleep time in its sequential flow has passed or when the conditions for a wait statement are satisfied. A new evaluation thread is started either statically (e.g., initial block), upon changes in any of the signals in a sensitivity list (e.g., always block), or through fork-join statements. In this flow, time is advanced when the restart times of all threads that are scheduled to be restarted are at a time in the future. The next simulation time is, consequently, the next immediate time at which an execution thread restart is scheduled.
Each thread of execution evaluates its expressions by accessing values of data objects
used in expressions and updating data objects that appear on the left-hand side of these
expressions. Data values can be updated either with blocking assignments or non-blocking assignments. A blocking assignment takes effect (i.e., the result of evaluating the expression on the right-hand side of the assignment is moved into the data object on the left-hand side) immediately, as the assignment is carried out in its thread of execution, whereas a non-blocking assignment takes effect only in the NBA scheduling region (figure 5.1). In other words, the result of evaluating the expression on the right-hand side of a non-blocking assignment is not moved into the left-hand-side data object until its thread of execution is suspended. As such, the value of a variable assigned using a blocking assignment is immediately visible within the thread of execution after the assignment is made, but the value of a variable assigned using a non-blocking assignment becomes visible only after its execution thread is suspended.
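The blocking/non-blocking distinction can be sketched as follows (a minimal illustration with hypothetical variables):

```systemverilog
module assign_example;
  int a, c;

  initial begin
    a = 1;        // blocking: takes effect immediately
    c = a + 1;    // sees a == 1, so c == 2

    a <= 5;       // non-blocking: update is scheduled for the NBA region
    $display(a);  // still prints 1; this thread has not yet suspended
    #1;           // suspend the thread; the NBA update is applied
    $display(a);  // prints 5
  end
endmodule
```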
A time-slot at a given simulation time Ts is the abstract unit of time within which all
thread restarts and data object updates for time Ts take place. In SystemVerilog, thread
restarts are given different priorities depending on the language construct that created the
thread (e.g., always block, assertion action block, etc.). In addition, data object updates are
grouped according to their update mechanisms (i.e., blocking vs. non-blocking). The scheduling semantics of SystemVerilog use the concept of scheduling regions to describe the mechanism and ordering of thread restarts and data object updates within a given time-slot. Figure 5.1 shows the SystemVerilog simulation reference model and its predefined scheduling regions.
Figure 5.1 SystemVerilog Simulation Reference Model (scheduling regions within a time-slot; execution flows in from the previous time-slot, iterates through the regions, and proceeds to the next time-slot)
Scheduling regions in this flow consist of preponed, active, inactive, NBA (non-block-
ing assignment), observed, reactive, and postponed. Except for observed and reactive
regions, this flow essentially duplicates the standard simulation reference model in Verilog.
The preponed region is specifically used as a PLI callback control point that allows PLI rou-
tines to access data at the current time-slot before any net or variable values are changed.
these thread evaluations progress beyond the "#0" statement. Once all threads that are
restarted in the active region are suspended, all threads scheduled for restart in the inactive
region are moved to the active region, and iteration repeats by restarting the flow in the
active region.
pass/fail segment of a clocked assertion do not lead to triggering its clocking expression in
the current time-slot.
Interaction with a DUV for verification purposes often follows cycle-based semantics, where DUV signals are sampled and driven based on a clocking event. For verification purposes, it is usually good practice to take advantage of the setup and hold time specifications of a design interface by driving DUV inputs slightly before a sampling event and sampling outputs slightly after a clocking event, in order to bypass any simulation-induced, spurious signal transitions at the exact time of the sampling event. An additional benefit of following this approach is that using the actual setup and hold time requirements of a design in sampling outputs and driving inputs leads to verifying the required setup and hold time behavior of the DUV.
As an example, consider a DUV whose outputs should be read at 2ns before the positive
edge of system clock and inputs should be driven 1ns after the positive edge of the same
clock (figure 5.2).
Figure 5.2 DUV Input/Output Sample and Driving Timing (outputs sampled 2 ns before the positive clock edge; inputs driven 1 ns after it)
The following program segment shows one approach for implementing this behavior:

Program 5.1: Signal sampling and setup/hold delays without a clocking block
1 module top;
2 bit clk = 1;
3 reg [7:0] duv_in, duv_out, duv_io;
4 initial for (int i = 0; i <= 10; i++) #5ns clk = !clk; // Generate clk
5
6 always @(negedge clk) begin
7   #3ns;
8   $display (duv_io, duv_out); // read signal values at posedge - 2ns
9   @(posedge clk);
10  #1ns;
11  duv_in = 10; // write signals at posedge + 1ns
12  duv_io = 20;
13 end
14 endmodule
In this example, an always block is used to wait for 3ns after the negative edge of the clock (line 7) so that design outputs can be read 2ns before the positive edge of the clock, which occurs at time 10ns (line 8). Design input signals are then driven 1ns after the positive edge of the clock (lines 11, 12).
SystemVerilog provides the clocking block construct to facilitate easy and straightforward implementation of this type of interaction with a DUV. A clocking block can only be declared in a module, program block, or interface.
A clocking block is identified by the following aspects:
• Name: Name of the clocking block
• Clocking event: Event used as the reference for setup/hold time calculations
• Unclocked signals: Hierarchical name for environment signals that are to be sampled
or driven by the clocking block
• Clocked signals: A direction and an optional name for each unclocked signal man-
aged by the clocking block
• Default input/output skews: Default delay values, with respect to clocking events,
specifying timing for driving outputs and sampling inputs
• Skew overrides: An optional input and/or output skew override for each clocked sig-
nal
Figure 5.3 shows the relationship between the input skew, the output skew, and the clocking event defined for a clocking block. The event used for sampling the inputs is derived by treating the input skew as a negative offset from the clocking event. The output driving event is derived by waiting for the output skew time after the clocking event.
Figure 5.4 shows a pictorial view of the function of a clocking block. Any DUV or test-
bench signal can be managed by a clocking block both for sampling and driving. A clocking
block implicitly defines an output driving clock and an input sampling clock. The output driving clock
is triggered output-skew-delay time after the clocking event, and the input sampling clock is
triggered input-skew-delay time before the clocking event. A clocking block allows separate
output and input skews to be assigned to any clocked signal. As such, any clocked signal has
its own dedicated and implicit output driving and input sampling clocks.
Figure 5.4 Clocking Block
Any value written to a clocked signal of a clocking block is assigned to its associated
unclocked signal when its output driving clock occurs. This means that multiple writes to a
clocking signal are filtered and only the last assignment to the clocked signal before the
arrival of the output driving clock is driven to its associated unclocked signal. Reading from
a clocked signal of a clocking block supplies the last value of its associated unclocked signal
before its last input sampling event occurred. As such, all transitions in an unclocked signal
between two occurrences of input sampling clocks are filtered out by the clocking block and
only the last value of the unclocked signal before the last input sampling clock is available at
the clocked signal.
The separation between clocked and their associated unclocked signals leads to some
unexpected results. For example, writing to a clocking block inout element and then reading
it back will not return the value that was just written, because the returned value is
the value sampled when the input sampling clock last occurred (figure 5.4).
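This read-back behavior can be sketched with a minimal, self-contained example (signal names and skew values here are illustrative, not from the original text):

```systemverilog
module tb;
  bit clk;
  logic [7:0] duv_io;
  always #5 clk = !clk;

  clocking cb @(posedge clk);
    default input #2ns output #1ns;
    inout cb_io = duv_io;  // inout clocked signal
  endclocking

  initial begin
    @(cb);
    cb.cb_io <= 8'h55;   // scheduled onto duv_io 1ns AFTER this edge
    // This read still returns the value sampled 2ns BEFORE this edge,
    // not the 8'h55 just written:
    $display("read back: %h", cb.cb_io);
  end
endmodule
```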
A clocking block supports input, output, and inout clocked signals. The configuration
shown in figure 5.4 applies to inout elements; input elements exhibit only the input-direction
behavior, and output elements only the output-direction behavior.
Program segment 5.2 below shows an example of a clocking block:
Program 5.2: Sample clocking block implementation
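The listing itself did not reproduce cleanly; a sketch consistent with the line-by-line description that follows (skew values are illustrative assumptions):

```systemverilog
1 clocking cb @(posedge clk);
2   default input #2ns output #1ns;
3   input  cb_out = duv_out;
4   output duv_in;                    // unclocked name omitted: same name assumed
5   inout  cb_io  = top.module.duv_io;
6   input  #1ns cb_out;               // input skew override
7   output #1ns duv_in;               // output skew override
8   input  #1ns output #3ns cb_io;    // both skews overridden
9 endclocking
```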
Line 1 shows the clocking block name and clocking event. Line 2 shows the default
input and output skews. Lines 3-5 show examples of input, output, and inout element spec-
ifications. Each specification consists of the signal direction, clocked signal name (e.g.,
cb_out on line 3), and the unclocked signal (DUV or testbench signal) associated with this
clocked signal (e.g., duv_out on line 3). Note that the unclocked signal can be specified using
a hierarchical name (e.g., top.module.duv_io on line 5). Also, if the unclocked signal name is
omitted, it is assumed to be a signal with the same name as the clocked signal and in the
same scope as the clocking block (e.g., duv_in on line 4). If the input and output skews are
different for an element, these skews can be specified explicitly. An input skew override is
specified for input cb_out on line 6. An output skew override is specified for output duv_in on line 7,
and input and output skew overrides are specified for inout cb_io on line 8.
The following program shows the use of clocking blocks to improve the implementa-
tion of the sampling and driving example shown in program 5.1:
Program 5.3: Signal sampling and setup/hold delays with a clocking block
1 module top;
2 bit clk=1;
3 reg [7:0] duv_in, duv_out, duv_io;
4
5 initial for (int i = 0; i <= 10; i++) #5ns clk = !clk;
6
7 clocking cb @(posedge clk);
8 default input #2ns output #1ns;
9 input cb_out = duv_out;
10 output cb_in = duv_in;
11 inout cb_io = duv_io;
12 input #1ns cb_out;
13 output #1ns cb_in;
14 input #1ns output #3ns cb_io;
15 endclocking
16
17 always @(posedge clk) begin
18 $display (cb.cb_io, cb.cb_out); // read signal values at posedge - 1ns
19 cb.cb_in <= 10; // write signals at posedge + 1ns
20 cb.cb_io <= 20; // applied at posedge + 3ns
21 end
22 endmodule
As shown in this example, the always block on line 17 is sensitive to the positive edge
of the clock clk. Upon entering the always block, the values of cb.cb_io and cb.cb_out are
read. The values read at this time, however, are those sampled 1ns before the positive edge of the
clock. Also, the values written to cb.cb_in and cb.cb_io on lines 19 and 20 will be applied to
duv_in and duv_io at 1ns and 3ns respectively after the positive edge of the clock.
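The five skew-specification examples discussed next can be sketched in one clocking block (signal names here are hypothetical placeholders):

```systemverilog
clocking cb @(posedge clk);
  input #1step        cb_in1 = in1;  // 1: sample in preponed region of time-slot
  input #2ns          cb_in2 = in2;  // 2: absolute input skew
  input negedge       cb_in3 = in3;  // 3: changed clock sense (negedge)
  input negedge #1ns  cb_in4 = in4;  // 4: offset from the changed sense
  input #0            cb_in5 = in5;  // 5: sample in observed region
  output #0           cb_out5 = out5; // 5: drive in NBA region
endclocking
```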
Example 1 shows the use of keyword #1step, which is a special SystemVerilog con-
struct for specifying that the input should be sampled in the postponed region of the previous
time-slot of the clocking block sampling event. This is essentially the same as sampling the
input in the preponed region of the time-slot for the sampling event of the clocking block
(section 5.1). Example 2 shows an absolute value input skew. Example 3 shows that the
sense of the clock can be changed from that of the sampling event for the clocking block. For
example, if the sampling event for the clocking block is (posedge elk), then the input skew
can be set to negative edge. Example 4 shows that the input skew can be set to an absolute
delay offset from a changed sense of the sampling event. Example 5 shows the use of #0 in
specifying the skew. Inputs with explicit #0 skew are sampled in the observed region of the
time-slot for the clocking block sampling event. Outputs with explicit #0 are driven in the
NBA region of the time-slot for the clocking block sampling event. The default input skew is
#1step. The default output skew is #0.
lines migrate directly to designs created in SystemVerilog and as such, dealing with race
conditions is not a main concern when creating designs in SystemVerilog.
For multiple reasons, however, the situation is different for verification engineers:
• Verification engineers follow a more software-oriented programming style where the
notion of combinational versus sequential does not directly apply. This means that
selective use of blocking versus non-blocking assignments is not a natural fit for
avoiding races in testbenches.
• Detailed evaluation order of design signals is not an immediate concern for verifica-
tion engineers. Verification engineers are more focused on verifying correct
cycle-based operation of the design, and as such prefer to avoid having to deal with
race conditions due to scheduling semantics of the simulator.
• The introduction of new verification-related features such as assertions and properties,
whose evaluation semantics are a new addition to SystemVerilog, requires special
consideration for how testbench evaluation is scheduled along with the design.
Consider the following design and its associated testbench:
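A sketch consistent with the description that follows (the original listing did not survive reproduction; line numbers in the text are approximate against this reconstruction):

```systemverilog
module duv(input bit clk, input bit siga, output bit sigb);  // lines 1-5
  always @(posedge clk)
    sigb <= !siga;          // line 3: output is complement of input
endmodule

module test;
  bit clk, siga;
  wire sigb;
  duv d0(clk, siga, sigb);
  initial forever #5 clk = !clk;

  always @(posedge clk)     // line 14
    siga <= sigb;           // line 15: feeds the output back as the next vector
endmodule
```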
The design to be verified is duv (lines 1-5). The testbench is implemented in module
test. The design assigns to sigb the complement of siga on every positive edge of clk. The
intention of this testbench is to take signal sigb, produced at the output of the design, and
assign it back to its input siga as the next test vector. This is accomplished on line 15 by mov-
ing the output value to the input at every positive edge of the clock.
The example, as shown above, has a race condition which leads to incorrect simulation
results. The race condition is due to the non-blocking assignment on line 15 using the previous
value of sigb before it is updated in the current cycle. As such, the value moved to the
input of the design is not the value produced in this cycle but its previous value. It may seem
that by changing the non-blocking assignment on line 15 to a blocking assignment, the prob-
lem would be solved, but this is not guaranteed. This is because the always blocks on lines 3
and 15 may be executed in any order at the positive edge of clk. A better fix would be to
change the sense of clock on line 14 to negedge which would then lead to correct simulation
results.
As shown, the problem with this testbench can be solved after some analysis, but there
are two real issues: first, these types of problems are very difficult to analyze for anything
but the most trivial designs; second, it would be best to avoid getting into these types of
problems in the first place. SystemVerilog introduces the program block so that such prob-
lems can be prevented with ease.
The fundamental goal in introducing the program block is to provide verification engi-
neers with a well defined ordering for the evaluation of design versus testbench. In order to
achieve this goal, the following properties have been defined for a program block:
• A program block can contain only type and variable declarations and one or more ini-
tial blocks.
• A program block cannot contain always blocks, UDPs, modules, interfaces, or other
program blocks.
• Variables inside a program block can only be assigned from inside the same or other
program blocks (e.g., assignment to a program block variable from a module is illegal).
• Variables inside a program block can only be assigned using blocking assignments.
• From inside a program block, variables on the outside that are not in any program
block, can only be assigned using non-blocking assignments.
• Statements from within a program block that are sensitized to design variables (e.g.,
@posedge of deslgn.elk) are scheduled for evaluation in the reactive region.
• Initial blocks inside a program block are scheduled for evaluation in the reactive
region. This is in contrast with initial blocks inside other blocks that are scheduled for
evaluation in the active region.
The benefits gained because of these properties are described in the following bullets.
In this description, design signals refer to any variable declared outside of program blocks.
• Suspended program block processes (e.g., an initial block suspended due to a wait on
delay or event) are scheduled to be restarted in the reactive region. An immediate
consequence of this property is that in any time-slot, the execution of code inside the
program block is started only after all design signal transitions, due to any source
other than the program block (e.g., scheduled from previous time-slots, 0-delay
propagations in the current time-slot, etc.) are finalized.
• All program block variables are assigned using blocking assignments and all design
signals driven from inside the program block are assigned using non-blocking assign-
ments. An immediate consequence of this property is that once suspended processes
inside program blocks are restarted, all signal evaluations inside a program block are
finalized before any of the changes they cause in design signals cause design pro-
cesses to be restarted. The reason is that blocking assignments made inside the
program block take effect in the active region, and non-blocking assignments used for
design signals take effect in the NBA region.
• These interactions produce the following order of evaluation for program blocks:
• After entering the current time-slot, all design signal changes due to sources
other than the program block are finalized.
• Next, program block code is executed using the latest values of design signals in
the current time-slot.
• Next, all changes in program block variables are finalized, where all new values
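The program the next paragraph refers to is not reproduced here; a hedged sketch of the kind of code it describes (module, program, and signal names are hypothetical; line numbers approximate):

```systemverilog
module design;
  bit clk;
  logic [7:0] sig;
  initial forever #5 clk = !clk;
  always @(posedge clk)
    sig <= sig + 1;             // "line 4": design process, NBA takes effect in NBA region
endmodule

program observer;
  logic [7:0] captured;
  initial forever begin
    @(posedge design.clk);      // resumes in the reactive region (after the NBA region)
    captured = design.sig;      // "line 11": guaranteed to see the value assigned above
  end
endprogram
```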
In this program, because of the semantics of the program block described in this section,
the assignment on line 11 is guaranteed to take place after the assignment on line 4 is completed,
even though they are both sensitized to the positive edge of the clock.
Design tasks and functions called from inside a program block are evaluated in the reac-
tive region. This is in contrast to design task and function calls from inside a design, which
are evaluated in the active region. This means that in a given time-slot, the effect of
non-blocking assignments is visible to design tasks and functions called from a program
block but not visible to design tasks and functions called from outside the program block.
Additionally, a design task that suspends one or more times before returning (e.g., due
to a wait for a delay or on an event) and that is called from inside a program block follows the
scheduling semantics of the program block until it is suspended for the first time. However,
from that point on, it follows the scheduling semantics of its containing block. This behavior,
if not used carefully, can lead to unexpected results.
5.4.1 Events
Events are objects that can be triggered, and waited on to be triggered. SystemVerilog
enhances the definition of Verilog events by defining an event to be a pointer to a synchroni-
zation object. As such, multiple event names can point to the same synchronization object,
event pointers can be assigned to one another, and event pointers can be passed to functions
and tasks. SystemVerilog events have the following properties:
• Can be triggered in the active region using the blocking trigger operator "->"
• Can be triggered in the NBA region using the non-blocking trigger operator "->>"
• Can be waited on using the @ operator
• Can provide a persistent state in addition to its event trigger, lasting throughout the
time-slot in which it was triggered, that can be waited on using a wait statement
• Event variables can be assigned to one another
• Event variables can be passed to tasks and functions and returned by functions
• Event variables can be compared using equality and inequality operators
The following program shows an example of using event triggering and waiting to syn-
chronize between two process blocks, and also using event pointers as task arguments.
7 #1000 $finish;
8 end
9
10 always @(e1) begin
11 #10 trigger_event(e2);
12 $display ("e1", $time);
13 end
14
15 always @(e2) begin
16 #10 trigger_event(e1);
17 $display ("e2", $time);
18 end
19 endmodule
This example shows the implementation of task trigger_event(), which triggers the event
object that its argument identifies. Task trigger_event() is then used to trigger different events
(lines 11, 16). Note that in this example, the initial triggering of event e1 (line 6) is delayed by 1
time unit in order to remove the race condition between triggering and checking of event e1.
If on line 6, e1 is triggered at time 0, then this triggering may occur before the always block on
line 10 is activated, in which case the trigger will not be seen by the always block and event
e2 never gets triggered.
The following program shows the use of persistent state of an event to eliminate this
type of race condition.
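The program itself is not reproduced here; a sketch consistent with the description that follows (line numbers approximate, names hypothetical):

```systemverilog
module top;
  event e1, e2;

  initial begin : thread1
    wait (e1.triggered);   // "line 5": persistent state, race-free
    $display($time, " thread1 saw e1");
  end

  initial begin : thread2
    if (e2 == null)
      $display("e2 is null before assignment");
    e2 = e1;               // "line 11": e2 now points to e1's synchronization object
    if (e1 == e2)
      $display("e1 and e2 reference the same object");
    -> e1;                 // "line 15": seen by thread1 regardless of start order
  end
endmodule
```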
On line 5, the first thread waits on the persistent triggered state of event e1. The persistent
triggered state of an event remains active throughout the duration of the time-slot in which it
occurs. As such, this trigger will be visible to any thread waiting to be executed in the same
time-slot (in this case thread 2 at line 15) regardless of which one is started first. Using the @
operator on line 5 may lead to a race condition where, if thread 2 is started first, the trig-
gering of e1 on line 15 will be missed by thread 1. This program also shows examples of how
event pointers can be assigned to one another and compared against each other or the null
value. Note that in this example, the synchronization object pointed to by variable e2 on line
11 is the synchronization object that was initially pointed to by variable e1.
5.4.2 Semaphores
Semaphores are used for the following purposes:
• Mutual exclusion
• Thread rendezvous
SystemVerilog provides a built-in semaphore construct. SystemVerilog provides the
following methods for this construct:
function new(int keyCount = 0);
task put(int keyCount = 1);
task get(int keyCount = 1);
function int try_get(int keyCount = 1);
Function new() is used to allocate a new semaphore. Tasks get() and put() are used to
procure and return keys to the semaphore. Function try_get() is used to check whether or not
the requested number of keys are available in the semaphore.
Use of semaphore in implementing process synchronization is discussed in the follow-
ing subsections.
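Only the tail of the mutual-exclusion example survives below (from its endtask on); based on the surrounding description, its opening lines might read (a sketch, line numbers chosen to match the text's references):

```systemverilog
1 module top;
2 semaphore sem = new(1);   // one key: guards the active section
3
4 // automatic: each simultaneous call gets its own copy of the variables
5 task automatic driver(string name, time dly1, time dly2);
6 #dly1;                    // initial delay before requesting the key
7 sem.get(1);               // acquire the key (suspends if unavailable)
8 $display($time, name, " entering active section");
9 #dly2;                    // time spent in the active section
10 $display($time, name, " leaving active section");
11 sem.put(1);               // return the key
```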
12 endtask
13
14 initial
15 fork
16 driver("AgentA", 10, 30);
17 driver("AgentB", 20, 40);
18 join
19 endmodule
First, note that task driver() on line 5 is specified to have automatic variables by default.
Otherwise, given that the default mode for task variables is static, multiple simultaneous
calls to this task would lead to problems. As shown in the example, the get() method of
semaphore is called before entering the active section and the put() method of the semaphore
is called after leaving the active section. This example produces the following output:
10 AgentA entering active section
40 AgentA leaving active section
40 AgentB entering active section
80 AgentB leaving active section
At time 0, the fork statement on line 15 starts two processes, each calling task driver()
with different delay parameters. Given that process AgentA has a shorter initial delay time of
10 (line 6), it acquires the semaphore first (line 7). Process AgentB finishes its initial delay at
time 20, and attempts to acquire a key from the semaphore, but suspends since the key is
already held by process AgentA, which is still in its active section. At time 40, process AgentA
leaves the active section and returns its key to the semaphore (line 11). At this time, the key
becomes available to process AgentB, which then enters the active section at time 40, leaving
it at time 80, and returning the key to the semaphore.
Note that any number of processes can be started by calling driver() and this implementation
would guarantee mutual exclusion across all processes for the active section in this
task.
The following program shows how a rendezvous between multiple threads is imple-
mented using two semaphores:
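The rendezvous program is not reproduced here; a self-contained sketch consistent with the description and output below (task names and delay values are illustrative assumptions):

```systemverilog
module top;
  semaphore sem1 = new(0);   // "product available" (line 2)
  semaphore sem2 = new(0);   // "product consumed"  (line 3)

  task automatic producer(string name, time dly);
    forever begin
      #dly;                  // time to make a product
      sem1.put(1);           // announce the product
      sem2.get(1);           // rendezvous: wait until it is consumed
      $display($time, name, ": Product consumed, now making a new one");
    end
  endtask

  task automatic consumer(string name, time dly);
    forever begin
      sem1.get(1);           // rendezvous: wait for a product
      $display($time, name, ": Got product, now consuming");
      sem2.put(1);           // acknowledge consumption
      #dly;                  // time to consume
    end
  endtask

  initial begin
    fork                     // "line 28"
      producer("Producer1", 20);
      consumer("Consumer1", 60);
      consumer("Consumer2", 60);
    join_none
    #200 $finish;
  end
endmodule
```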
Note that both semaphores are initialized with zero keys (lines 2, 3). In implementing a
rendezvous, producers issue a put() on sem1 and a get() on sem2 before assuming their prod-
uct is consumed, and consumers issue a get() on sem1 followed by a put() on sem2 before
assuming a product is available. This program produces the following output:
20 Consumer1: Got product, now consuming
20 Producer1: Product consumed, now making a new one
60 Consumer2: Got product, now consuming
60 Producer1: Product consumed, now making a new one
100 Consumer1: Got product, now consuming
100 Producer1: Product consumed, now making a new one
140 Consumer2: Got product, now consuming
140 Producer1: Product consumed, now making a new one
Note that two slow consumers (turnaround time of 80) are started for a relatively faster
producer (turnaround time of 40). Also note that any number of producers and consumers can
be started in the fork statement (line 28) and the rendezvous mechanism would still guaran-
tee that one consumer proceeds for any one produced item.
A simple case for using the above approach is where a thread can continue beyond point
A in its program flow only when another thread has reached point B in its program flow and
vice versa. In this case, the first thread can be assumed to produce when reaching point A,
and the second thread is assumed to consume when reaching point B.
5.4.3 Mailboxes
Mailboxes are used for passing messages between independently running processes. Mail-
boxes provide the following features:
• Ability to pass messages from one process to another
• Behave like a FIFO, where messages placed first are retrieved first
• Can have bounded or unbounded size
• Message sender can suspend if a bounded mailbox is full
• Message receiver can suspend until a message becomes available
SystemVerilog provides a built-in mailbox construct. This construct provides the fol-
lowing methods:
function new(int bound = 0);
function int num();
task put(singular message);
function int try_put(singular message);
task get(ref singular message);
function int try_get(ref singular message);
task peek(ref singular message);
function int try_peek(ref singular message);
Function new() is used to allocate a new mailbox. Function num() returns the number
of messages in the mailbox. Tasks put(), get(), and peek() are used to place a message in a
mailbox, retrieve a message from a mailbox, and retrieve a copy of a message from a mail-
box without removing it. These tasks suspend execution if the mailbox is full (for put()) or if
the mailbox is empty (for get() and peek()). Tasks get() and peek() produce a runtime error
message if the mailbox content message type does not match their argument type. Functions
try_put(), try_get(), and try_peek() return 1 upon success, and 0 when the operation fails. Func-
tions try_get() and try_peek() return -1 when a type mismatch is detected between mailbox
content and the argument passed to these functions.
SystemVerilog mailboxes can be parameterized where the message type of the mailbox
is specified when the mailbox is declared. This allows for message type checking to occur
during compilation and prevents runtime errors because of message type mismatches.
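The parameterized form can be sketched as follows (reusing the envelope class from program 5.11 below):

```systemverilog
class envelope;
  int letter;
endclass

module top;
  // Only envelope objects may pass through this mailbox; a put() of any
  // other type is rejected at compile time rather than at runtime.
  mailbox #(envelope) mbox = new(0);   // bound of 0: unbounded

  initial begin
    envelope e = new();
    mbox.put(e);
  end
endmodule
```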
The following program shows an example of using a mailbox to pass messages between
a producer and a consumer.
Program 5.11: Using a mailbox to pass messages between processes
1 module top;
2 class envelope;
3 int letter;
4 function new(int ltr); letter=ltr; endfunction
5 endclass
6
7 task sender(input string name, input time delay1);
8 envelope envlp;
9 int count;
10 while (1) begin
11 #delay1;
12 envlp = new(count);
13 mbox.put(envlp);
14 $display ($time, name, ": sent msg: ", count);
15 count ++;
16 end
17 endtask
18
19 task receiver (input string name, input time delay1);
20 envelope envlp;
21 while (1) begin
22 #delay1;
23 mbox.get(envlp);
24 $display ($time, name, ": received msg: ", envlp.letter);
25 end
26 endtask
27
28 mailbox mbox;
29
30 initial begin
31 mbox = new(0);
32 fork
33 sender("Producer1", 10);
34 receiver("Consumer1", 35);
35 #50;
36 join_any
37 $display("Letters in mailbox before exit: ", mbox.num());
38 $finish;
39 end
40 endmodule
This example shows mailbox mbox declared (line 28) to transfer messages of type class
envelope (lines 2-5). Tasks sender() and receiver() are defined to place objects of type envelope
inside mbox (line 13) and collect objects of type envelope from mbox (line 23), respectively.
Task sender() operates faster than task receiver() so that objects of type envelope accumulate
inside mbox. Tasks sender() and receiver() are started in independent processes (lines 33, 34)
and terminated at time 50ns. Before ending the simulation, function num() is used to print the
number of objects inside mbox.
5.6 Property Specification and Evaluation
Property specification and evaluation is presented in chapter 15.
PART 3
Open Verification Methodology
CHAPTER 6 OVM Infrastructure
1. OVM is an open source verification methodology and class library developed jointly by Cadence
Design Systems and Mentor Graphics Corporation. Visit http://www.ovmworld.org to download the OVM
package and to join the OVM community of users and contributors.
This chapter provides an overview of the OVM class library and describes
the core utilities provided in the library. The use of the OVM class library for building the
verification environment hierarchy is described in chapter 7. Implementing transaction inter-
faces with the OVM class library is described in chapter 9. Transaction sequence generation
using the OVM class library is described in chapter 8.
2. The presentation of the OVM class library in this book is based on the OVM 1.0 release.
environment blocks. A verification class library should define the classes that provide these
transaction level models.
Complex scenarios require the creation and synchronization of complex sequences of
transactions, advancing in lock-step at multiple interfaces of the DUV. The modeling and
creation of these complex scenarios is not a trivial task and requires its own techniques and
utilities. A verification class library must provide the base classes and the necessary infra-
structure for supporting the creation of such complex scenarios.
In SystemVerilog, simulation time is advanced without any consideration of abstract
phases that may exist in the verification flow. Progression of time in a verification environ-
ment is, however, managed in phases where different sets of activities must take place in
each phase (e.g., initialization phase, running phase, checking phase). A verification class
library should define the infrastructure necessary for managing this high level view of simu-
lation phases and the implicit synchronization requirement it imposes on concurrently run-
ning threads in different blocks of the environment.
Producing informative messages about the state of verification is an essential part of
carrying out verification activities. Messages may be produced in different blocks and at dif-
ferent levels of severity. A verification class library should define the base utilities for speci-
fying such severity levels and enabling or disabling the generation of such messages.
Features of the OVM class library are defined to address these requirements of a verifi-
cation-related class library. These features are summarized in the next section.
Figure 6.1 OVM class library inheritance hierarchy (ovm_object, ovm_sequence_item, ovm_component, ovm_sequence, ovm_req_rsp_sequence)
ability to create a hierarchical environment. In addition, this feature introduces the ability for
each block to have its own thread of execution, and also introduces the concept of phasing by
dividing the life cycle of each thread into multiple phases (e.g., run, extract, report, etc.), and
synchronizing the start of each phase across multiple threads. In contrast with the phasing
where the life of a thread is divided into multiple stages, thread synchronization features
allow for synchronization between any two or more threads (e.g., using a semaphore to guar-
antee mutual exclusion between two or more threads). Verification environment components
provide containers that model typical components in a verification environment (e.g., agents,
monitors, scoreboards, etc.) and provide the control mechanisms for configuring, building,
and controlling the simulation flow.
The OVM verification methodology promotes the use of transaction-based communica-
tion for modeling traffic flowing between components. To support this methodology, the
OVM class library provides transaction objects for modeling traffic between components,
transaction interfaces and transaction channels for modeling connectivity between compo-
nents, and sequence interfaces to facilitate the generation of verification scenarios.
Figures 6.1 and 6.2 show an overview of the class inheritance hierarchy for the OVM
class library. The classes shown in these figures are placed into groups that correspond to
groups defined in the beginning of this section. Appendix A provides a full listing of all class
declarations in the OVM class library, indicating the parent class, parameters for each class,
and whether a class is a virtual class.
The core utilities of the OVM class library are described in this chapter.
This example defines base_pkt, a base packet data structure containing fields name (line
3) and payload (line 4), function new() (line 5), and virtual function print() (line 6). Derived
classes eth_pkt and usb_pkt each provide a different implementation of virtual function print()
(lines 11, 17). Each derived class also provides a different constraint for field payload of base
class base_pkt (lines 10, 16). This example also shows the implementation of simple_factory, a
simple packet factory that provides function create_pkt() (lines 22-27). This function returns
a newly created packet data object whose type and name are provided by arguments
pkt_type_name and pkt_name respectively. Note that this function is implemented to return an
object of type base_pkt but because of polymorphism (section 4.7.4), this function can also
return an object whose class type is derived from base_pkt (i.e., a specialized type). This
means that in this example, function create_pkt() can return objects having types eth_pkt (line
24) and usb_pkt (line 25) since these classes are derived from class base_pkt.
Function create_pkt() (line 22) is defined as a static function, therefore this function can
be called by using the class identifier (i.e., simple_factory) and the class scope resolution
operator "::". This approach allows function create_pkt() to be called without creating an
instance of this object factory.
This factory is used on line 32 to create a packet of type eth_pkt with name packet1.
Randomizing this packet uses the constraint block specified for class eth_pkt (line 10). There-
fore, randomizing this object sets the value of field payload to value 10. The use of this fac-
tory for creating an object of type usb_pkt is shown on line 36.
Note that function print() is defined as a virtual function in the base class base_pkt (line
6). As such, calling this function through a pointer to an object of type base_pkt (lines 34, 38)
has the same effect as calling the actual definition of this function for the true type of the
object being pointed to by the pointer. This means that calling function pkt.print() (lines 34
and 38) results in calling function print() of the derived class that pkt is pointing at.
The OVM library includes a built-in factory that provides the features presented in this
section for both registering classes with the factory and also for defining instance and type
overrides for registered classes. This factory can be used for the creation of objects and hier-
archical components. These features are described in the following sections.
The above program first defines class packet derived from base class ovm_object (line
1). This definition uses macro ovm_object_utils() to register class packet with the OVM fac-
tory (line 3).
The begin/end variation of macro ovm_object_utils() can be used to apply field automa-
tion (section 6.4) to fields in this class. This usage is shown below:
Once a class is registered with the OVM factory, the following predefined methods of
the OVM factory are used for creating objects and specifying object type and instance over-
rides:
Methods of class ovm_factory:
static function ovm_object create_object(
string obj_type, string inst_path="", string inst_name="")
static function void set_inst_override(string inst_id, string from_type, string to_type)
static function void set_type_override(string from_type, string to_type, bit replace=1)
static function void print_all_overrides(bit all_types=0)
11
12 $cast(pkt[1], ovm_factory::create_object("packet", "top", "pp2"));
13 $cast(pkt[2], ovm_factory::create_object("packet", "top", "qq3"));
14 $cast(pkt[3], ovm_factory::create_object("short_packet", "top", "pp4"));
15 $cast(pkt[4], ovm_factory::create_object("long_packet", "top", "pp5"));
16 endtask
17
18 initial begin
19 ovm_factory::set_inst_override("top.pp*", "packet", "short_packet");
20 ovm_factory::set_type_override("packet", "short_packet");
21 ovm_factory::set_type_override("packet", "long_packet");
22 ovm_factory::set_type_override("short_packet", "packet");
23 ovm_factory::set_type_override("short_packet", "long_packet", 0);
24 ovm_factory::set_type_override("long_packet", "short_packet");
25 ovm_factory::print_all_overrides(1);
26 create_packets();
27 end
28 endmodule
Task create_packets() (lines 7-16) shows the use of function create_object() of the OVM fac-
tory for creating new objects. This task creates five new objects. The creation of the first
object shows the details of object creation and pointer casting (lines 8-10). First, a pointer to
an object of type ovm_object is declared (line 8), then a new object is created as having type
packet, residing in block top with name pp1 (line 9), and then cast to pkt[0], which is a pointer
of type packet (line 10). The next four objects are created (lines 12-15) using a shorthand
notation of directly casting the created objects to their corresponding pointers.
• Although the type override specified on line 20 applies to instances "top.pp1", "top.pp2",
and "top.qq3", this type override applies only to "top.qq3", since the first two instances are
a match for the instance override specified on line 19.
• The default behavior can be changed so that a new type override does not take effect
if a type override has been defined previously. By passing a value of 0 as the last
argument of function set_type_override(), the type override on line 23 does not take
effect, since a type override is specified for type short_packet on line 22.
• Type overrides do not chain. This means that specifying a type override from packet1
to packet2 and a type override from packet2 to packet3 does not result in a type over-
ride from packet1 to packet3. In this case, any request for objects of type packet1 pro-
duces an object of type packet2, and any request for an object of type packet2 produces
an object of type packet3. The same rule applies to instance overrides.
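The non-chaining rule can be sketched as follows (a hypothetical fragment, not part of the program above):

```systemverilog
// both overrides are in effect at the same time:
ovm_factory::set_type_override("packet", "short_packet");
ovm_factory::set_type_override("short_packet", "long_packet");

// a request for "packet" produces a short_packet, not a long_packet;
// only a direct request for "short_packet" produces a long_packet
ovm_object obj = ovm_factory::create_object("packet", "top", "p1");
```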
Object types generated by the above example without overrides (excluding lines 19-24)
and with overrides are summarized below:
Name     Instance   No Override     With Overrides
pkt[0]   pp1        packet          short_packet (line 19)
pkt[1]   pp2        packet          short_packet (line 19)
pkt[2]   qq3        packet          long_packet (line 21)
pkt[3]   pp4        short_packet    packet (line 22)
pkt[4]   pp5        long_packet     short_packet (line 24)
These methods call method create_object() of the OVM factory with the appropriate
value for its inst_path argument derived from a component's hierarchical position. Compo-
nents are class objects that represent hierarchical blocks and, as such, include an explicit
pointer to their containing block (i.e., parent component). Function create_component() cre-
ates a new component of type comp_type and name inst_name, assumed to be structurally con-
tained in the component from which this function is called (i.e., while this function calls
function create_component() of ovm_factory, the inst_path of the new component is given as
the full hierarchical path of the calling component). Function create_object() creates a new
object of type obj_type and name inst_name, assumed to be structurally contained in the com-
ponent from which this function is called. Functions set_inst_override() and set_type_override()
behave the same as functions in ovm_factory, except that parameter inst_path passed to
set_inst_override() of ovm_component is given relative to the hierarchical path of the compo-
nent from which this method is called (i.e., parent component).
In OVM, the root of a component hierarchy represents a testcase that contains the veri-
fication environment, as well as any customization of this environment for that specific test.
The top level component representing this test is created by calling global function run_test()
(section 7.4). The name of the class representing the top level component is either passed
directly to this function as an argument, or specified as a plus-argument (section 7.4.1). This
function uses global function create_component() defined by OVM. This function can also be
used directly to create a top level component when no other component yet exists:
Methods for class ovm_factory:
static function ovm_component create_component(
    string comp_type, string inst_path="", string inst_name, ovm_component parent);
The following example shows the use of the OVM factory to register and create verification
environment blocks derived from class ovm_component, and examples of overrides that can
be specified when using this factory. It should be noted that this example is provided to show
the use of OVM factory features; the creation of a verification environment hierarchy
according to OVM guidelines is shown in section 13.6.
The above example defines class child_comp derived from class ovm_component. This
class is registered with the OVM factory by using macro ovm_component_utils() (line 8). The
predefined function create_object() of class ovm_component is used to create three instances of
types short_packet, long_packet, and short_packet, respectively (lines 12-14). Also note that
the argument list for function create_object() of ovm_component is different from the one
defined for the OVM factory (as shown in section 6.3.2) in that it does not require the name
of the containing component. The reason is that function create_object() of ovm_component
automatically uses the name of the calling component (i.e., parent component) as the name
of the component containing the newly created object.
Class root_comp is also derived from ovm_component and contains an instance of class
child_comp. This class is registered with the OVM factory by using macro
ovm_component_utils() (line 20). Functions set_inst_override() and set_type_override() of
ovm_component are called in the constructor for class root_comp (lines 24, 25) before an
instance of class child_comp is created by calling function create_component() of
ovm_component (line 26). Note that the instance name for the instance override (line 24) is given
relative to the calling component.
The top level component is created by calling function create_component() of the OVM
factory (line 32). The name of the top level module is used as the instance path of this new
component (i.e., "top"). Also, this is a top level component so the parent component is set to
null.
It should be noted that the OVM class library provides a specialized mechanism for cre-
ating the component hierarchy corresponding to a verification environment, and the imple-
mentation shown in the above example is provided only to illustrate features of the OVM
factory and the facilities it provides. Section 7.2 describes the steps for creating a component
hierarchy following the OVM guidelines. Section 7.4 describes the mechanism used for
instantiating the verification environment hierarchy which is done as part of running a test.
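As a rough sketch of the pattern described above (class and instance names are hypothetical, and a real environment should follow the guidelines of section 13.6):

```systemverilog
class child_comp extends ovm_component;
  `ovm_component_utils(child_comp)
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
endclass

class root_comp extends ovm_component;
  child_comp child;
  `ovm_component_utils(root_comp)
  function new(string name, ovm_component parent);
    super.new(name, parent);
    // override paths are given relative to this component
    set_inst_override("child0", "packet", "short_packet");
    set_type_override("packet", "long_packet");
    $cast(child, create_component("child_comp", "child0"));
  endfunction
endclass

// top level creation: no parent component exists yet, so parent is null
root_comp root;
initial
  $cast(root, ovm_factory::create_component("root_comp", "top", "root", null));
```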
This example shows the declaration of class simple_packet derived from base class
ovm_sequence_item, containing fields error, size, ptype, addr, and data (lines 5-9). Field auto-
mation macros ovm_field_int(), ovm_field_enum(), and ovm_field_array_int() are used (lines
13-16) to specify that field size should not be printed but fields ptype, addr, and data should
be printed when method print() (predefined for class ovm_object) is called (line 23). No macro
is specified for field error; therefore, this field does not participate in field automation at
all. For all classes derived from ovm_object, field automation macros should be placed inside
predefined macro pair ovm_object_utils_begin() and ovm_object_utils_end (lines 12 and 17).
The OVM class library provides field automation macros for different types of fields.
Table 6.1 shows a summary of field automation macros and the type of fields that are sup-
ported.
Field automation attributes (argument FLAG in table 6.1) are used to include or exclude
fields from the set of predefined operations. These attributes are shown in table 6.2. Multiple
attributes can be specified by combining attributes using the bitwise-OR operator (e.g.,
"OVM_ALL_ON | OVM_DEC"). Alternatively, attributes can be added arithmetically, but adding
the same attribute multiple times will result in unexpected behavior (e.g., "OVM_ALL_ON +
OVM_DEC + OVM_READONLY").
The OVM class library defines composite attributes in order to simplify the setting of
common combinations of field automation attributes. These composite attributes are shown
in table 6.3.
Composite attribute OVM_DEFAULT (like OVM_ALL_ON) enables a field's participation in all
of the predefined operations:
• Copy
• Compare
• Print
• Record
• Pack/Unpack
Object cloning is implemented by creating a new object and using the copy utility to set
the contents of the newly created object to the contents of the source object. Copy, compare,
print, and pack/unpack utilities are described in the following subsections.
6.5.1 Copy
OVM defines the following predefined methods to support the copy operation for classes
derived from ovm_object:
Methods defined for class ovm_object:
function void copy (ovm_object rhs)
virtual function void do_copy (ovm_object rhs)
Function copy() is used to copy all automated fields of rhs that are configured to partici-
pate in the copy operation. The recursion behavior of this function is controlled by the recur-
sion policy for object fields (e.g., OVM_SHALLOW, OVM_REFERENCE). Function do_copy()
provides a callback mechanism for handling fields of rhs that require special handling, or are
not configured by field automation macros to be included in the copy operation.
The following program shows an example of using these predefined functions:
35 pkt2.copy(pkt1);
36 end
37 endmodule
This example shows the implementation of hierarchical object hier_packet having fields
collection_time, addr, and pload. Field collection_time is not automated, but fields addr and
pload are both automated for all operations by using attribute OVM_ALL_ON (lines 26, 27).
Field pload is an instance of class payload having fields capacity and size. Both fields capacity
and size are automated for printing (lines 9, 10), but capacity is excluded from the automated
copy operation (line 9) since it should be copied only if its value is less than 10. Function
do_copy() of class payload is defined to perform this conditional copy operation for field
capacity. Note that function argument rhs should be cast to the derived type payload (line
14) so that access to the fields of the derived type is possible. Function do_copy() is called
before automated copy operations take place; therefore, writes to fields that are automated
for the copy operation (line 16) will not be visible outside this function.
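The conditional-copy pattern described above can be sketched as follows (a hedged reconstruction: only the names payload, capacity, and size come from the description, and the automation flag choices are illustrative):

```systemverilog
class payload extends ovm_object;
  int capacity;
  int size;

  `ovm_object_utils_begin(payload)
    `ovm_field_int(capacity, OVM_PRINT)   // printed, but excluded from automated copy
    `ovm_field_int(size, OVM_PRINT)
  `ovm_object_utils_end

  // called before the automated copy of the remaining fields
  virtual function void do_copy(ovm_object rhs);
    payload rhs_;
    $cast(rhs_, rhs);              // access fields of the derived type
    if (rhs_.capacity < 10)
      capacity = rhs_.capacity;    // copy only when the value is small enough
  endfunction
endclass
```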
6.5.2 Compare
OVM defines the following predefined methods to support the compare operation for classes
derived from ovm_object:
Compare Methods defined for class ovm_object:
function bit compare (ovm_object rhs, ovm_comparer comparer=null);
function bit do_compare (ovm_object rhs, ovm_comparer comparer);
Function compare() is used to compare all automated fields of rhs that are configured to
be included in the compare operation. The recursion behavior of this function is controlled
by the recursion policy for object fields (e.g., OVM_SHALLOW, OVM_REFERENCE). Function
do_compare() provides a callback mechanism for handling fields of rhs that require special
handling or are not configured by field automation macros to be included in the compare
operation.
The following program shows an example of using these functions:
This example shows the implementation of class simple_packet having fields addr and
data. Both fields are automated (lines 6, 7) but field addr is not included in the automated
152 OVM Infrastructure
compare operation (line 6). The implementation of do_compare() for this class compares the
values for addr only if data has a value of 1 (lines 10-14).
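A sketch of such a conditional do_compare() (a hedged reconstruction of the pattern, not the book's exact listing):

```systemverilog
class simple_packet extends ovm_object;
  rand bit [7:0] addr;
  rand bit [7:0] data;

  `ovm_object_utils_begin(simple_packet)
    `ovm_field_int(addr, OVM_ALL_ON|OVM_NOCOMPARE)  // compared conditionally below
    `ovm_field_int(data, OVM_ALL_ON)
  `ovm_object_utils_end

  virtual function bit do_compare(ovm_object rhs, ovm_comparer comparer);
    simple_packet rhs_;
    $cast(rhs_, rhs);
    if (data == 1)
      return (addr == rhs_.addr);  // addr participates only when data is 1
    return 1;
  endfunction
endclass
```

Because the calling object's data field guides the comparison, this implementation is what makes the compare operation asymmetric, as discussed below.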
The use of compare operation is shown in the following program:
Program 6.10: Comparing two objects using the built-in function compare()
1 module top;
2 `include "ovm.svh"
3 `include "program_6.9" // simple_packet class
4
5 simple_packet pkt1 = new;
6 simple_packet pkt2 = new;
7 initial begin
8 `message(OVM_LOW, (pkt2.compare(pkt1)))
9 end
10 endmodule
The compare operation is carried out by calling the compare function of one of the
objects with the other object as its argument (line 8). Generally, the compare operation is
symmetric. This means that function compare() of either of the two objects being compared
can be called with the other object as an argument. In this example, however, the compare
operation is not symmetric because of the special implementation of function do_compare(),
where the value of field data of the calling object is used to guide the compare operation.
The compare operation makes use of compare policy object ovm_comparer. This object
contains settings for guiding the compare operation, as well as predefined methods for com-
paring objects or fields (one method for each type). It is possible to define a new policy
object that redefines the settings of the default comparer and hence achieve a modified com-
pare behavior. The following program shows an example of using a modified ovm_comparer
object:
In this example, class custom_comparer is derived from ovm_comparer. For illustration
purposes, the definition of this derived class lists properties that are available in class
ovm_comparer and their default settings. The settings for these properties can be changed
either during class definition (lines 5-14) or before the policy object is passed to the compare
operation (line 21). In this example, field show_max of the compare policy object is set to
zero so that no compare results are printed, and only the result of calling function compare()
(line 21) is shown.
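A sketch of this customization (assuming, per the description above, that ovm_comparer exposes a show_max property, and that pkt1 and pkt2 are objects of a class with a compare() method):

```systemverilog
class custom_comparer extends ovm_comparer;
  function new();
    show_max = 0;   // suppress printing of individual compare results
  endfunction
endclass

simple_packet pkt1 = new;
simple_packet pkt2 = new;
custom_comparer cmp = new;

initial begin
  // only the return value of compare() reports the result
  bit result = pkt2.compare(pkt1, cmp);
end
```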
6.5.3 Print
OVM defines the following predefined methods to support the print operation for classes
derived from ovm_object:
Print Methods defined for class ovm_object:
function void print (ovm_printer printer=null)
function string sprint (ovm_printer printer=null)
virtual function void do_print (ovm_printer printer)
Function print() is used to print all automated fields of the calling object that are config-
ured to be included in the print operation. Function print() recursively calls the print() function
of all automated fields that are objects derived from base class ovm_object. Function sprint()
is the same as print() except that it returns a string instead of printing the result to the screen.
Function do_print() provides a callback mechanism for handling fields of the calling object
that require special handling or are not configured by field automation macros to be included
in the print operation. The use of the print() and do_print() functions is similar to the use model for
the copy operation.
The print operation makes use of print policy object ovm_printer. This policy object con-
tains settings for controlling the print operation, as well as predefined methods that custom-
ize the way object data is printed. It is possible to define a new print policy object that
redefines the settings of the default printer and hence achieves a modified print behavior. The
following print policy objects are automatically created by loading the OVM class library:
• Global object ovm_default_printer (initialized to ovm_default_table_printer)
• Global object ovm_default_line_printer of type ovm_line_printer
• Global object ovm_default_tree_printer of type ovm_tree_printer
• Global object ovm_default_table_printer of type ovm_table_printer
If no printer policy object is passed to the print functions (i.e., print() and sprint()), then the
global printer ovm_default_printer is used by default. Each printer policy object contains field
knobs that can be used for customizing the printer output. Table 6.4 shows a summary of
knob properties that can be configured.
The implementation of the print utility allows the output of print functions to either fol-
low the global setting of the printer policy object or to be customized depending on the local
requirements. Local customizations are done by instantiating a local copy of a printer policy
object and then modifying its properties before passing it explicitly to the print function. The
printer policy object that is passed to callback function do_print() is the same printer policy
object that is provided as argument to the print functions.
The following program shows examples of changing the default printer object, as well
as creating local printer objects:
Program 6.12: Using printer policy objects with printO method
1 module top;
2 `include "ovm.svh"
3 class simple_packet extends ovm_sequence_item;
4 rand byte addr;
5 rand byte data;
6
7 `ovm_object_utils_begin(simple_packet)
8 `ovm_field_int(addr, OVM_ALL_ON)
9 `ovm_field_int(data, OVM_ALL_ON|OVM_NOPRINT)
10 `ovm_object_utils_end
11 endclass
12
13 ovm_line_printer local_line_printer = new;
14 ovm_tree_printer local_tree_printer = new;
15 ovm_table_printer local_table_printer = new;
16
17 simple_packet pkt = new;
18 initial begin
19 //ovm_default_printer = ovm_default_table_printer; //default, so not needed
20 pkt.print();
21 ovm_default_printer = ovm_default_line_printer;
22 pkt.print();
23 ovm_default_printer = ovm_default_tree_printer;
24 pkt.print();
25 local_table_printer.knobs.type_width = 20;
26 local_table_printer.knobs.value_width = 20;
27 pkt.print(local_table_printer);
28 end
29 endmodule
This example shows the definition of class simple_packet with fields data and addr,
where automation macros are used to include addr in (line 8) and exclude data from (line 9)
the print output. This example shows the creation of local printer objects local_line_printer,
local_tree_printer, and local_table_printer (lines 13-15). These objects can be modified locally
to change the default print behavior. By default, ovm_default_printer is assigned to
ovm_default_table_printer (line 19). Calling print() without any arguments results in using
ovm_default_printer (line 20). The global default printer is assigned to the global default line
printer ovm_default_line_printer (line 21). Therefore, calling function print() on line 22 results
in ovm_default_line_printer being used as the default printer. The customization of printer
knobs (table 6.4) for the local printer object local_table_printer is shown on lines 25 and 26.
When calling print() with this local printer object (line 27), this customized local printer
object is used instead of the global default printer object.
Calling function pack() of a class object appends the packed contents of this object to bit
array bitstream. The contents of the class object are packed according to the settings of, and
by using the helper functions provided in, packer. Function pack() first processes all auto-
mated fields of a class object, followed by executing virtual callback function do_pack() to
perform any special handling required by the model being implemented.
Calling function unpack() of a class object unpacks the contents of bit array bitstream
into the class object. The unpacking is performed according to the settings of, and by using
the helper functions provided in, packer. As with function pack(), function unpack() first pro-
cesses automated fields of the class object, followed by executing virtual callback function
do_unpack() to perform any special handling required by the model being implemented.
The packing and unpacking utility of the OVM class library is implemented using the
helper functions provided by the class ovm_packer. This class provides the following methods
for packing and unpacking data types:
Methods defined for class ovm_packer:
virtual function void pack_field_int (logic [63:0] value, int size);
virtual function void pack_field (bitstream_t value, int size);
virtual function void pack_string (string value);
virtual function void pack_time (time value);
virtual function void pack_real (real value);
virtual function void pack_object (ovm_void value);
virtual function logic [63:0] unpack_field_int (int size);
virtual function bitstream_t unpack_field (int size);
virtual function string unpack_string ();
virtual function time unpack_time ();
virtual function real unpack_real ();
virtual function void unpack_object (ovm_void value);
virtual function int get_packed_size();
virtual function bit is_null ();
In the remainder of this section, an example is shown where these helper functions are
used in the definition of callback functions do_pack() and do_unpack() to pack and unpack a
class object. The example shown in this section does not show any automated fields, but any
field that is automated for packing and unpacking is processed using the same helper func-
tions shown explicitly in this example. Class fields are included in and excluded from pack-
ing and unpacking by using automation attributes OVM_PACK and OVM_NOPACK, respectively
(table 6.2). It should be noted that all automated fields are processed before a callback func-
tion is executed.
The following program shows the definition of two class objects that will be used to
describe the packing and unpacking mechanism of the OVM class library:
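A hedged sketch of the two classes, consistent with the field names used in the discussion below (bits_0008, bits_1024, string_array, one_obj, obj_array); the remaining scalar fields are illustrative:

```systemverilog
class scalars extends ovm_object;
  int          int_val;      // illustrative scalar fields
  string       str_val;
  time         time_val;
  bit [7:0]    bits_0008;    // fits in pack_field_int() (at most 64 bits)
  bit [1023:0] bits_1024;    // needs pack_field() (up to 4096 bits)
  `ovm_object_utils(scalars)

  extern virtual function void do_pack   (ovm_packer packer);
  extern virtual function void do_unpack (ovm_packer packer);
endclass

class composites extends ovm_object;
  string  string_array[];    // composite fields: arrays and objects
  scalars one_obj;
  scalars obj_array[];
  `ovm_object_utils(composites)

  extern virtual function void do_pack   (ovm_packer packer);
  extern virtual function void do_unpack (ovm_packer packer);
endclass
```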
This program shows the implementation of classes scalars and composites, each con-
taining the different types of class fields that may need to be packed or unpacked. Callback
functions do_pack() and do_unpack() are defined as extern functions that must be defined as
part of this implementation (lines 8-9, 17-18).
The implementation of function do_pack() for classes scalars and composites is shown in
the following program:
Class scalars contains only non-composite fields, and the packing of each of its fields
into the packer object packer passed into this function is shown above (lines 2-6). In packing
integral fields, function pack_field() is used to pack an integral field of any size up to 4096 bits,
and function pack_field_int() is used to pack fields of at most 64 bits. In this implementation,
SystemVerilog system function $bits() is used to get the number of bits in variables bits_0008
and bits_1024 (lines 5, 6).
Packing composite fields is shown for class composites (lines 9-23). The convention in
packing an array is to first pack the size of this array as an integer and then pack each mem-
ber of this array. This process is shown for array string_array (lines 12-15). First, the size of
the array is packed using function pack_field_int() (line 13) and then each member of the array is
packed using function pack_string() (line 15). The packed size value will be used during the
unpacking process.
A single object is packed using function pack_object(). Calling function pack_object() on
field one_obj results in calling function do_pack() of that object (lines 1-7). The packing of an
object array (lines 19-22) follows the same flow as that of string_array.
The implementation of function do_unpackO of classes scalars and composites is shown
in the following program:
5 bits_0008 = packer.unpack_field_int($bits(bits_0008));
6 bits_1024 = packer.unpack_field($bits(bits_1024));
7 endfunction
8
9 function void composites::do_unpack (ovm_packer packer);
10 int unsigned arr_size;
11
12 arr_size = packer.unpack_field_int($bits(arr_size));
13 for (int i=0; i<arr_size; i++)
14 string_array[i] = packer.unpack_string();
15
16 if (one_obj != null)
17 packer.unpack_object(one_obj);
18
19 arr_size = packer.unpack_field_int($bits(arr_size));
20 for (int i=0; i<arr_size; i++) begin
21 obj_array[i] = new();
22 packer.unpack_object(obj_array[i]);
23 end
24 endfunction
The implementation of do_unpack() for class scalars shows the unpacking of values from
packer object packer in the same order in which these values were packed (lines 1-7). Also, the
implementation of the unpacking process for composite objects in class composites follows
the same order in which these objects were packed (lines 9-24). Array string_array is unpacked by
first unpacking the size of this array (line 12) and then using a loop to unpack each member
of this array. Calling function unpack_object() on field one_obj results in calling function
do_unpack() of class scalars (lines 1-7). The unpacking of an object array (lines 19-22) fol-
lows the same flow as that of unpacking arrays.
The use of packing and unpacking functions is shown in the following program:
This program shows that object cmpst1 is packed by calling its predefined function
pack() (line 12). The resulting stream is then unpacked into cmpst2 by calling its predefined
function unpack() (line 14). Calling functions pack() and unpack() in turn calls the hook meth-
ods do_pack() and do_unpack(), respectively.
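A sketch of that usage (the bit-stream declaration and the void-cast of the return values are assumptions about the pack()/unpack() signatures):

```systemverilog
module top;
  `include "ovm.svh"

  composites cmpst1 = new;
  composites cmpst2 = new;
  bit bitstream[];                    // packed bit stream

  initial begin
    void'(cmpst1.pack(bitstream));    // automated fields first, then do_pack()
    void'(cmpst2.unpack(bitstream));  // automated fields first, then do_unpack()
  end
endmodule
```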
In most applications, a data object is packed so that the packed bit stream can be applied
to a DUV interface. Similarly, unpacking is needed where a class object must be populated
by the bit stream collected from a DUV interface. In such cases, it is important that the
unpacking flow mirrors the steps taken for collecting data from the environment, and that the
packing process satisfies the format needed for applying the packed stream to the DUV
interface.
Guidelines for creating the environment component hierarchy are in part related to the princi-
ples of object-oriented programming. These guidelines are:
• Self-containment
• Recursive construction
• Configurability
The self-containment guideline refers to the need for a component's behavior or implementa-
tion to be independent of any value not contained within its sub-hierarchy. This guideline is a
necessary requirement for achieving component reuse. The need for port-based connectivity
is a direct result of enforcing the self-containment guideline (see section 9.7 on transaction
interfaces). A clear example of where this guideline must be strictly enforced is in the imple-
mentation of a verification component.
Recursive construction refers to how the hierarchy is created. In this approach, each
parent component builds only its immediate children components, and the children compo-
nents in turn, build their own immediate children components. The recursive nature of this
approach leads to the creation of the full hierarchy where each parent component is created
before its child components are created. During this recursive construction process, the build
method of each component follows these steps:
• Local fields whose values can be decided locally are initialized (e.g., to default values).
• Local hierarchy configuration fields (usually set to default values and initialized by
parent components as needed) are used to decide which children components must be
created.
• Each child component is created and then the child component's build mechanism is
activated to build its sub-hierarchy.
• Local fields that must be decided by the contents of children components and their
sub-hierarchies are initialized.
• Transaction interfaces that must link children components are connected.
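A build() method following these steps might be sketched as below (component and field names are hypothetical):

```systemverilog
class module_comp extends ovm_component;
  bit has_driver = 1;        // hierarchy configuration field (default value)
  ovm_component mon, drv;    // children, typed loosely for this sketch
  `ovm_component_utils(module_comp)

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build();
    super.build();   // applies pending configuration settings, e.g. for has_driver
    // fixed child: always created
    mon = create_component("monitor", "mon");
    // conditional child: created only if the configuration field says so
    if (has_driver)
      drv = create_component("driver", "drv");
    // each child's own build then creates its sub-hierarchy, after which
    // transaction interfaces linking the children can be connected
  endfunction
endclass
```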
Configurability refers to the ability for a component to configure the structure and
behavior of its component sub-hierarchy. The need for this ability is a direct consequence of
enforcing the self-containment guideline, since self-containment implies that a child compo-
nent is not aware of its parent component, and as such, configuration information should be
passed from the higher layers of the hierarchy to the lower layers of the hierarchy. When
using the recursive construction approach, the configurability challenge is that when a com-
ponent is starting to build its children components, the sub-hierarchy rooted at these children
components has not yet been created and, therefore, configuration fields within this sub-hier-
archy do not yet exist. This means that a dedicated mechanism is required for controlling
configuration fields deep within the sub-hierarchy when the sub-hierarchy is recursively con-
structed. Support for this feature is provided in the OVM class library.
Figure 7.1 shows a graphical view of verification environment elements from the per-
spective of hierarchy construction. A hierarchical component may contain any of the follow-
ing elements:
• Fields
• Regular
• Hierarchy configuration
• Behavior configuration
• Children components
• Fixed components
• Conditional components
• Reference components
• Children objects
• Source objects
• Cloned objects
• Reference objects
A regular field is used for the internal operation of a component. A hierarchy configu-
ration field is used to decide the hierarchical structure of a component (e.g., the passive/active
flag in an interface verification component indicating whether or not an agent should contain
a driver and sequencer). A behavior configuration field is used to control the behavior of the
component (e.g., how many packets to generate in a sequencer). Hierarchy and behavior con-
figuration fields are set to default values and then initialized by parent components during
hierarchy construction.
A fixed child component is always present in the component hierarchy (e.g., the monitor
in an interface verification component). A conditional child component is present if indi-
cated by a hierarchy configuration field (e.g., sequencer and driver components are present
only in an active interface verification component). A reference child component is in fact a
pointer to a component that is physically located at a different level of the hierarchy (e.g.,
components at the lower levels of the hierarchy need to access components that sit at higher
layers of the hierarchy).
A source child object is a container of a collection of data items (e.g., configuration set-
tings). A source child object is referred to as "source" since it is neither cloned nor a refer-
ence and hence contains the source of its content. A cloned child object is created by cloning
another object that is physically located at a higher level of the hierarchy (e.g., copying the
global configuration object to the local scope for local use and modification). A reference
child object is in fact a pointer to an object that is physically located at a different level of the
hierarchy (e.g., a reference to the global configuration object may be made available in the
lower layers of the hierarchy).
Figure 7.1 shows examples of these fields, objects, and component types. For example,
TOP_COMP is a fixed component that is physically located in the top level of the hierarchy,
and RefTOP_COMP components in SystemComp and ModuleComp components are both refer-
ence components that point at fixed component TOP_COMP. Also, source object GlobConf is a
configuration object that is physically located in the top level of the hierarchy, and RefGlob-
Conf objects in SystemComp and ModuleComp are both reference objects that point to source
object GlobConf in the top level of the hierarchy. Also, field HierFlag is a hierarchical configu-
ration field in component ModuleComp whose value should come from field GlobFlag in the
top level of the hierarchy.
The arrows that point from each layer of the hierarchy to the lower layers indicate the
required order of construction. For example, at the top level of the hierarchy, object Glob-
Conf, component TOP_COMP, and field GlobFlag control the content and structure of the hier-
archy rooted at component SystemComp, and must therefore be created before component
SystemComp is created. The relationships shown by arrows in figure 7.1 are specified using
the configuration construct of the OVM class library. The next section introduces the con-
structs provided by the OVM class library for creating and configuring a component hierar-
chy.
In the OVM class library, class ovm_component provides the necessary facilities for building
a component hierarchy. Class ovm_threaded_component derived from ovm_component, in turn,
provides the necessary facilities for controlling the runtime behavior of these hierarchical
components (section 7.4). The predefined verification environment components of the OVM
class library (e.g., ovm_monitor, section 7.3) are derived from ovm_threaded_component. This
means that the features of classes ovm_component and ovm_threaded_component are available
to all predefined verification environment components of the OVM class library.
The OVM provides specific guidelines for using its predefined verification environment
components to build a verification environment hierarchy. A complete example showing this
flow is provided in section 13.6. This section, however, uses class ovm_component to illustrate the general steps for building and configuring a component hierarchy. The techniques
described in this section are then used with all classes derived from class ovm_component.
Global Methods:
function void set_config_int(string inst_name, string field_name, ovm_bitstream_t value);
function void set_config_object(string inst_name, string field_name, ovm_object value, bit clone=1);
function void set_config_string(string inst_name, string field_name, string value);
The steps for building the contents of a component are implemented in function build()
of that component. Configuration functions set_config_*() provide a name-based approach for
setting fields within sub-components that are not yet created. An instance name inst_name
and a field name field_name are specified with each configuration function. As the hierarchy
is created, configuration settings are applied if the instance name and a field name in the
newly created component match one of the configuration settings specified before the com-
ponent was created. Configuration setting functions allow fields of integral type, string type,
and ovm_object type to be configured. Function set_config_object() additionally provides argu-
ment clone that indicates whether a field should be set to either a reference or a copy of the
argument specified with argument value of function set_config_object(). Global functions cor-
responding to configuration functions of class ovm_component are provided in order to allow
configurations to be set at the top-most level of hierarchy where no component of type
ovm_component is yet created. The following comments apply to component configuration
usage:
• Configuration functions can manage only automated fields (section 6.4).
• Instance name and field name arguments (i.e., inst_name, field_name) may optionally
include wild-card characters "*" and "?", allowing the same configuration setting to
be applied to different fields within the same component and/or to different compo-
nents across the sub-hierarchy.
• Base class ovm_component is defined in a way that configuration settings for a com-
ponent instance are applied as part of calling function build() of that instance. As such,
using function build() is mandatory for the configuration settings for an
instance to take effect. A consequence of this implementation is that all configuration
settings for a component instance have been applied by the time its function build() is
called, and therefore these settings can be used during the building of the sub-hierar-
chy rooted at that instance.
• The argument inst_name specified with each configuration function is assumed to be
relative to the hierarchical path of the component from which it is called.
• Configuration settings in a higher scope take precedence over the ones in a lower scope
by default. The reason for this ordering is to allow configuration settings at the higher
layers of the hierarchy to override those specified at the lower layers.
• Within the same scope, the last configuration setting matching a field takes prece-
dence over previous settings for the same field.
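As a hedged illustration of the wild-card and precedence rules above (the instance names and values here are hypothetical examples, not taken from figure 7.1):

```systemverilog
// Hypothetical settings; instance names and values are for illustration only.
set_config_int("SystemComp.*", "LocalFlag", 0);           // matches all sub-components
set_config_int("SystemComp.ModuleComp", "LocalFlag", 1);  // same scope, made later
// Within this scope, the second (later) setting wins for ModuleComp.LocalFlag.
// Had the first setting instead been made in a higher scope, it would take
// precedence over both, since higher-scope settings override lower-scope ones.
```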
The use of configuration functions is illustrated by showing the construction of the
component hierarchy shown in figure 7.1. Even though the component hierarchy is built
recursively from top to bottom, the implementation of the program that creates this hierarchy
is started from the lowest level of the hierarchy. These steps are shown in the remainder of
this section.
The first step is to create the object and components at the lowest level of the hierarchy.
The content of components at the lowest level of the hierarchy does not affect the build process
and as such is not shown in this implementation:
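The referenced listing (Program 7.1) did not survive extraction; the line numbers cited in the following paragraph refer to that original listing, not to this sketch. A minimal sketch of such declarations, consistent with the surrounding description, might look like:

```systemverilog
// Sketch only -- not the book's Program 7.1 verbatim.
class ex_config extends ovm_object;
  `ovm_object_utils(ex_config)            // register with the OVM factory
  function new(string name = "ex_config");
    super.new(name);
  endfunction
endclass

class ex_block extends ovm_component;
  `ovm_component_utils(ex_block)
  function new(string name, ovm_component parent);
    super.new(name, parent);              // new() is required for ovm_component subtypes
  endfunction
endclass

class ex_top_comp extends ovm_component;
  `ovm_component_utils(ex_top_comp)
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
endclass
```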
This program shows the implementation of object ex_config, and components ex_block
and ex_top_comp. It is assumed that the contents of ex_top_comp may need to be accessed by
components at lower layers of the hierarchy. In addition, note that as required, these class
declarations include the appropriate automation macros (lines 2, 9, and 16) to allow these
classes to be handled by the OVM factory (section 6.3). Also, function new() must be defined
for all classes derived from ovm_component (lines 6, 13).
The next step is to define the class for module component ModuleComp:
Program 7.2: ex_module class declaration, showing function build()
1  class ex_module extends ovm_component;
2    int ModData, LocalFlag;
3    string HierFlag;
4    ex_block BlockComp, CondBlockComp;
5    ex_config RefGlobConf, ModConf;
6    ex_top_comp RefTOP_COMP;
7
8    function new(string name, ovm_component parent);
9      super.new(name, parent);
10   endfunction
11
12   function void build();
13     super.build();
14     $cast(BlockComp, create_component("ex_block", "BlockComp"));
15     BlockComp.build();
16     if (HierFlag == "must have CondBlockComp") begin
17       $cast(CondBlockComp, create_component("ex_block", "CondBlockComp"));
18       CondBlockComp.build();
19     end
20   endfunction
21
22   `ovm_component_utils_begin(ex_module)
23     `ovm_field_int(ModData, OVM_DEFAULT)
24     `ovm_field_int(LocalFlag, OVM_DEFAULT)
25     `ovm_field_string(HierFlag, OVM_DEFAULT)
26   `ovm_component_utils_end
27 endclass
The implementation of class ex_module is shown in the above program. Note that no
configuration is passed from this module to lower level components (no arrows from this
module to lower level components in figure 7.1). Therefore, the implementation of function
build() for this class requires only that the appropriate components be constructed, and no
configuration settings are needed. It is assumed that any configurations applied to this com-
ponent by higher layer components have already taken effect before entering function build().
The first step in function build() is to call the build() method of the super class (line 13). Com-
ponent BlockComp is then created and built (lines 14, 15). Conditional component CondBlock-
Comp is then created only if hierarchical configuration field HierFlag is appropriately set
(lines 16-19). Note that RefGlobConf, ModConf, and RefTOP_COMP are all set to their appropri-
ate values (set through configuration from higher layer components) by the time function
build() is called. This configuration mechanism is shown in the implementation of ex_system,
shown in the following program:
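The ex_system listing (Program 7.3) is not reproduced here; a hedged sketch consistent with figure 7.1, in which SystemComp contains ModuleComp together with its local and reference configuration fields, might look like:

```systemverilog
// Sketch only -- not the book's Program 7.3 verbatim.
class ex_system extends ovm_component;
  ex_module ModuleComp;
  ex_config SysConf, RefGlobConf;   // SysConf: local copy; RefGlobConf: reference
  ex_top_comp RefTOP_COMP;

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  function void build();
    super.build();                  // applies configuration settings for this instance
    $cast(ModuleComp, create_component("ex_module", "ModuleComp"));
    ModuleComp.build();
  endfunction

  `ovm_component_utils_begin(ex_system)
    `ovm_field_object(SysConf, OVM_DEFAULT)
    `ovm_field_object(RefGlobConf, OVM_DEFAULT)
  `ovm_component_utils_end
endclass
```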
The implementation of the environment top level is shown in the following program:
Program 7.4: Top level program for creating the component hierarchy
1  module top;
2    `include "ovm.svh"
3    `include "program-7.1"
4    `include "program-7.2"
5    `include "program-7.3"
6
7    string GlobFlag;
8    ex_system SystemComp;
9    ex_top_comp TOP_COMP;
10   ex_config GlobConf;
11
12   initial begin
13     GlobFlag = "must have CondBlockComp";
14     $cast(GlobConf, ovm_factory::create_object("ex_config", "", "GlobConf"));
15     $cast(TOP_COMP,
16       ovm_factory::create_component("ex_top_comp", "", "TOP_COMP", null));
17     TOP_COMP.build();
18
19     set_config_string("SystemComp.ModuleComp", "HierFlag", GlobFlag);
20     set_config_object("SystemComp", "SysConf", GlobConf, 1);
21     set_config_object("*", "RefGlobConf", GlobConf, 0);
22     set_config_object("*", "RefTOP_COMP", TOP_COMP, 0);
23
24     $cast(SystemComp,
25       ovm_factory::create_component("ex_system", "", "SystemComp", null));
26     SystemComp.build();
27     ovm_print_topology();
28   end
29 endmodule
Field GlobFlag, component TOP_COMP, and object GlobConf are assigned and created
first (lines 13-17) since they affect the sub-hierarchy rooted at component SystemComp. Note
that this program is in the scope of a module block, and as such, the globally defined
ovm_factory is used to create the appropriate components and objects (lines 14, 16). For the
same reason, global configuration functions are used to specify configuration settings for the
sub-hierarchy rooted at component SystemComp (lines 19-22). Function set_config_string() is
first used to indicate that the value of field HierFlag in instance SystemComp.ModuleComp
should be set to the value of field GlobFlag (line 19). Next, global function set_config_object(),
with clone argument set to 1, is used to indicate that object SysConf of SystemComp
should be set to a cloned copy of object GlobConf (line 20). Global function
set_config_object(), with clone argument set to 0, is then used to indicate that object RefGlob-
Conf and component RefTOP_COMP in any instance (as indicated by "*") should point to
object GlobConf and component TOP_COMP in the top level of hierarchy respectively (lines
21, 22). With these configuration settings in place, instance SystemComp is then created and
built by using the factory and calling function build() (lines 24-26).
module verification components are implemented and used in verifying modules. In the next
step, system verification components are implemented to verify system level behavior.
The implementation view of an interface verification component is shown in figure 7.3.
This component contains the following:
• Driver
• Monitor
• Sequencer
(Figure 7.3, not reproduced here, shows the environment containing an agent with its monitor, sequencer, and driver, a bus monitor, and a layered sequencer and scoreboard/transaction monitor, with the master driver connected to the DUV through a physical interface.)
• ovm_test
• ovm_env
• ovm_agent
• ovm_monitor
• ovm_driver
• ovm_sequencer, ovm_virtual_sequencer
• ovm_scoreboard
tion and for handling simulation phases (section 7.4). Future releases of the OVM class
library may add additional features to each component definition as the need for such fea-
tures is identified.
In the OVM class library, the concept of simulation phases is implemented as part of the
following classes:
• ovm_component
• ovm_threaded_component
• ovm_env
• ovm_test
An ovm_component implements the notion of hierarchy and is used for building a hierar-
chical environment. Additionally, this class implements simulation phases that do not con-
sume simulation time (i.e., occur in zero time). An ovm_threaded_component is derived from
an ovm_component and adds a time consuming phase to the set of predefined phases. An
ovm_env is derived from ovm_threaded_component, and adds special utilities for controlling
the execution of individual simulation phases. All components in a verification environment
should be derived from ovm_env.
An ovm_test is derived from ovm_env, and models the top level component in the hierar-
chy. This top level component contains an instance of the verification environment hierarchy
customized to the requirements of the specific test to be run by this top level test component.
Using this approach, different tests running on the same verification environment hierarchy
are created as follows:
• Class test1 is derived from ovm_test, and contains an instance of the verification envi-
ronment hierarchy (an object of type ovm_env) initialized to the requirements of the
specific test to be carried out by test1.
• Class test2 is derived from ovm_test, and contains an instance of the verification envi-
ronment hierarchy (an object of type ovm_env) initialized to the requirements of the
specific test to be carried out by test2.
• Verification is started by calling global function run_test() with the name of the test
class that should be executed in the current simulation run (e.g., run_test("test1")). A
test name specified using plus-argument OVM_TESTNAME overrides any name passed
to function run_test() (section 7.4.1).
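The steps above can be sketched as follows (my_env here stands in for a user-defined ovm_env subtype and is an assumption, not a name from the text):

```systemverilog
// Hypothetical test-selection sketch; my_env is an assumed ovm_env subtype.
class test1 extends ovm_test;
  my_env env;   // environment instance customized to the requirements of test1
  `ovm_component_utils(test1)
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
endclass

module top;
  // Starts test1 unless +OVM_TESTNAME=<other_test> is given on the command line,
  // which takes precedence over the argument passed here.
  initial run_test("test1");
endmodule
```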
Table 7.1 shows the predefined phases defined by the OVM class library. It is important
to note that functions build() and resolve_all_bindings() are called implicitly as part of execut-
ing this flow.
Table 7.2 lists the predefined callback methods used for customizing the execution of
these phases to the specific requirements of verification environment components. Table 7.3
lists the functions and method calls used for controlling the execution and termination of these
simulation phases.
run_test(). In this case, an instance of this class is created and this instance is used as
the top level component.
• If no argument is passed to function run_test(), and plus-argument OVM_TESTNAME is
not given, then each object of type ovm_test that is instantiated in the SystemVerilog
code is assumed to be a top level component.
• Simulation plus-argument +OVM_TESTNAME=<test_name> can be used to specify the
name of the test that is to be started for a given simulation run. Test name test_name
specifies a class type derived from class ovm_test. This form of specifying a test name
has precedence over a test name that is passed as argument to function run_test(). This
behavior allows tests to be selected without having to recompile the SystemVerilog
code.
All top level components are processed concurrently. Given a top level component,
methods for each phase are executed using a depth-first postorder traversal, where the phase
method of a component is called after the phase method for its child components (e.g., func-
tion pre_run() of a component is called after function pre_run() of its child component). The
exception to this behavior is that for the run phase, task run() of each component is started
using a depth-first preorder traversal where task run() of a component is started before the
task run() of its child components.
phase of a single component, the hierarchy rooted below a component, or both (see
argument options in table 7.3). The default behavior is that the run() tasks of all compo-
nents affected by calling this task are terminated immediately.
• Field enable_stop_interrupt and virtual task stop() of class ovm_component allow the
default stop behavior to be modified. This modification mechanism, described later
in this section using an example, allows any component to raise an objection to, and
therefore prevent, the ending of the run phase for this component, and therefore, the
hierarchy in which it exists.
• Function set_global_stop_timeout() of class ovm_component can be used to specify a
timeout value for the maximum length of time a component may hold an objection to
ending the run phase. The default value for this field is 10,000 time units.
• Task global_stop_request() of class ovm_component can be used to call stop_request()
for all active top level components.
• The global run phase completes when the run phase of all top level components has
completed.
The mechanisms described in this section can be applied to any component type
derived from ovm_component (e.g., ovm_monitor, ovm_env, ovm_test, etc.).
The following program shows an example of redefining simulation phase callback
methods to control the completion of the run phase:
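The listing itself did not survive extraction, and the line numbers cited in the discussion that follows refer to it, not to this sketch. A sketch consistent with that discussion might look like:

```systemverilog
// Sketch only -- not the original listing; the sub-hierarchy is omitted as in the text.
class top_sve extends ovm_env;
  `ovm_component_utils(top_sve)

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  // User-defined helpers that raise and lower predefined field enable_stop_interrupt
  function void raise_objection(); enable_stop_interrupt++; endfunction
  function void lower_objection(); enable_stop_interrupt--; endfunction

  function void pre_run();
    raise_objection();                  // object before the run phase starts
  endfunction

  // stop() completes only once the objection has been lowered
  virtual task stop(string ph_name);
    wait (enable_stop_interrupt == 0);
  endtask

  function void extract();
    // extract starts right after run ends, so this gives the run-phase end time
    $display("run phase completed at time %0t", $time);
  endfunction

  task run();
    #100 lower_objection();             // delay value assumed for illustration
  endtask
endclass
```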
This program shows the implementation of class top_sve, the top level component con-
taining the verification environment hierarchy. The hierarchy rooted at this component does
not relate to the focus of this example and is not shown. The implementation of this class
shows the use of predefined field enable_stop_interrupt of class ovm_component for prevent-
ing the run phase of this class from being stopped by a parent component. This approach is
based on the behavior that if field enable_stop_interrupt is larger than zero, then a call to task
stop() of this class must be completed before the run phase of this class can be terminated. In
this implementation, user defined functions raise_objection() and lower_objection() are defined
to raise and lower the value of field enable_stop_interrupt (lines 8, 9). Before the run phase is
started, function raise_objection() is called in function pre_run() (line 12). This means that as
long as this objection remains raised and the stop timeout value, if provided, has not expired,
then the run phase of this component will not end. Function stop() is also defined so that this
function completes only when the value of field enable_stop_interrupt is set to 0 (line 16).
Function extract() of this class is also defined to print the time at which this phase starts,
thereby giving the simulation time at which the run phase completes (lines 19-22). Finally,
task run() is defined to call function lower_objection() after some simulation time has passed.
The use of this class in a test is shown in the following example:
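The example listing was lost in extraction (the cited line numbers refer to it, not to this sketch); a hedged sketch consistent with the description that follows, using class top_sve from the previous example:

```systemverilog
// Sketch only -- not the original listing.
class my_test extends ovm_test;
  top_sve sve;
  `ovm_component_utils(my_test)

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  function void build();
    super.build();
    $cast(sve, create_component("top_sve", "sve"));
    sve.build();
  endfunction

  task run();
    // Requests termination of the run phase everywhere, subject to objections:
    global_stop_request();
    // In contrast, stop_request() would affect only this component's hierarchy.
  endtask
endclass

module top;
  initial run_test("my_test");
endmodule
```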
The above example shows the implementation of testcase my_test which is derived from
class ovm_test (line 5). This class instantiates class top_sve (line 7) which is configured and
built in function build() (lines 13-17). Task run() of this class is defined to stop all running
components anywhere in the environment by immediately calling global_stop_request() (line
20). The use of this function is in contrast with function stop_request() which only controls
the termination of the run phase for components in the hierarchy of its calling component
(line 22). The use of global_stop_request() is recommended over stop_request() unless there is
a specific requirement for stopping the run phase for only a portion of the verification environ-
ment hierarchy. Global function run_test() is used to run testcase my_test (line 26).
Effective creation of verification scenarios is perhaps the most important part of carrying out
a verification project. After all, everything in a verification environment is put in place to
support the ultimate goal of creating and monitoring all of the scenarios that must be veri-
fied.
In modern verification environments, verification scenarios are modeled as a sequence
of items where both the sequence items and the sequence itself can be randomly or determin-
istically defined and created at simulation runtime. This approach brings the power of ran-
domization to scenario generation, and allows for fine-grain runtime control of how
sequences are built. The ability to define a sequence item at any level of abstraction (i.e.,
from low-level driving of signals to high-level issuing of a command to a verification compo-
nent) allows this approach to be applicable at any level of abstraction. This approach also
facilitates the creation of hierarchies of sequences where complex sequences are created by
combining lower level sequences.
Effective construction of verification scenarios as sequences of items requires a mix of
language constructs and an infrastructure for sequence generation and driving. A sequence
generation infrastructure should provide the ability to define:
• Flat sequences (series of sequence items driven by one driver)
• Hierarchical sequences (sequence of sequences driven by one driver)
• Virtual sequences (sequence of sequences driven by multiple drivers)
• Layered sequences (sequences driving other sequences)
• Reactive sequences (ability for a sequence to react to its environment)
This chapter provides a detailed description of sequence generation capabilities pro-
vided by the OVM class library. This chapter also provides small implementation examples
to better illustrate the concepts discussed in this chapter. A comprehensive and detailed
example of sequence generation is provided in chapter 14.
(Figure 8.1: a sequencer connected to a driver through a sequence item interface.)
A sequencer produces a single stream of sequence items where all of the sequence items
have the same type. The ability of a sequencer to produce only a single stream of items limits
this configuration to single-sided scenarios. As will be shown later in this chapter, a virtual
sequencer is used to remove this limitation and allow multi-sided scenarios to be handled by
the sequence generation facilities of the OVM class library.
A driver is the consumer of sequence items produced by a sequencer. A driver may
drive the sequence items directly into a DUV interface (figure 8.1). A driver may also repre-
sent the sequencer in a lower layer of the verification scenario generation hierarchy. For
example, a TCP/IP layer sequencer, producing items representing TCP/IP layer traffic, may
pass its items to an Ethernet sequence that uses a single TCP/IP layer item to generate a
sequence of Ethernet layer items.
A sequence item interface is used to connect a sequencer to a driver. A sequence item
interface is a transaction interface (chapter 9) that is customized to the requirements of the
sequence generation architecture (i.e., it supports a customized set of methods instead of the
standard transaction interface methods listed in section 9.3).
The interaction between a sequencer and a driver can be one of:
• Push mode
• Pull mode
In push mode, a sequencer drives a produced item into a driver when that item is gener-
ated, and waits until the driver consumes this item. In pull mode, a driver requests the
sequencer to provide it with a sequence item. The pull mode of interaction is superior to push
mode. First, in pull mode, a sequence item is consumed immediately after
it leaves the sequencer. This means that the sequencer can customize the contents of the
sequence item to the timing of sequence item consumption. Second, the single stream of
sequence items leaving a sequencer may represent multiple concurrently running scenarios,
and pull mode allows the sequencer to arbitrate between items generated by these concur-
rently running scenarios based on the item that is best suited for consumption at the time the
driver requests the next item (see is_relevant() in section 8.8.2). In addition, a pull mode
implementation can easily be turned into a push mode implementation by adding an active
transaction channel between the sequencer and the driver, containing a thread of execution
that reads the next available sequence item from the sequencer, passes it to the driver, and
blocks until this sequence item is consumed by the driver. The OVM class library fully sup-
ports the pull mode of operation, with support of push mode planned for future versions.
The configuration in figure 8.1 shows a pull mode interaction between the sequencer
and the driver. In this mode, the driver is the initiator component and the transaction con-
sumer, and the sequencer is the target component and the transaction producer. The sequence
item interface defined in OVM supports a bidirectional transaction transfer across this inter-
face where the driver requests the next sequence item from the sequencer and then returns
the result of executing that item through the interface.
The remainder of this chapter describes the sequence generation utilities of the OVM
class library.
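As a hedged sketch of this pull mode interaction from the driver's side (the interface name seq_item_prod_if and its method names follow early OVM releases and may differ in other versions; the driver class name is hypothetical):

```systemverilog
// Sketch only; interface and method names are assumptions for early OVM releases.
class my_driver extends ovm_driver;
  `ovm_component_utils(my_driver)

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  task run();
    ovm_sequence_item item;
    forever begin
      // Block until the sequencer's arbiter selects the next relevant item.
      seq_item_prod_if.get_next_item(item);
      // ... translate the item into pin-level activity on the DUV interface ...
      seq_item_prod_if.item_done();  // report completion back to the sequencer
    end
  endtask
endclass
```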
8.2 Sequencers
The architecture of a sequencer component is shown in figure 8.2. This architecture is identi-
fied by:
• A default sequence item type
• A library of predefined and user defined sequences
• A set of running sequences
• An arbiter
• A sequence item interface
A sequencer produces a single stream of sequence items whose base type is given by the
sequencer's default sequence item type. A sequencer contains a sequence library which is a
container of predefined and user defined sequences that can either be started as running
sequences each having an independent thread of execution, or used as subsequences in hier-
archical sequences. Each running sequence generates a stream of sequence items, feeding
these items into an arbiter that uses a first-in-first-out (FIFO) policy to select among relevant
items from the incoming streams. The OVM class library provides a mechanism for allowing
each sequence to define whether or not the items it generates are relevant to the current veri-
fication context (see is_relevant() in section 8.8.2). The driver connects with the sequencer
through a sequence item interface that forwards driver requests for a new item to the arbiter.
The implementation of a sequencer using the OVM class library is described through
the following steps:
• Sequence item definition
• Sequencer and the default sequence item declaration
• Defining flat and hierarchical sequences
• Sequence library and the predefined sequences it contains by default
• Sequence activation
• Arbitration mechanisms
• Sequence item interface
These topics are discussed in the following subsections.
The implementation of a sequence item using the OVM class library is shown in the fol-
lowing example (see section 12.2 for XBar design specification):
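The item listing itself was lost in extraction, and the line numbers cited in the guidelines below refer to it, not to this sketch. A sketch consistent with those guidelines, with an assumed payload width, might look like:

```systemverilog
// Sketch only; the field set and payload width are assumptions.
class xbar_xmt_packet extends ovm_sequence_item;
  rand bit [7:0] payload;   // width assumed for illustration

  `ovm_object_utils_begin(xbar_xmt_packet)
    `ovm_field_int(payload, OVM_DEFAULT)   // field automation via begin/end variation
  `ovm_object_utils_end

  function new(string name = "xbar_xmt_packet");
    super.new(name);
  endfunction
endclass
```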
The above example highlights the following general guidelines that must be followed
when implementing these elements of a sequence generation facility:
• Sequence item classes must be derived from class ovm_sequence_item (line 1).
• The declaration of a new sequence item must include the declaration of its construc-
tor (lines 6-9).
• Macro ovm_object_utils() must be used in order to allow a newly defined sequence
item to be managed by the OVM factory. Field automation macros can also be speci-
fied by using the begin/end variation of this macro (lines 3-5).
] 86 OVM Transaction Sequences
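The sequencer listing was likewise lost in extraction (cited line numbers refer to the original); a minimal sketch consistent with the guidelines that follow:

```systemverilog
// Sketch only -- not the book's listing verbatim.
class xbar_xmt_sequencer extends ovm_sequencer;
  `ovm_sequencer_utils(xbar_xmt_sequencer)   // initialize sequencer utilities

  function new(string name, ovm_component parent);
    super.new(name, parent);
    // Declare the default sequence item type handled by this sequencer:
    `ovm_update_sequence_lib_and_item(xbar_xmt_packet)
  endfunction
endclass
```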
The above example highlights the following general guidelines that must be followed
when implementing a sequencer:
• A sequencer class must be derived from ovm_sequencer (line 1).
• Macro ovm_sequencer_utils() or its begin/end variation must be used with a new
sequencer declaration (lines 3-5) to initialize its sequencer utilities. The begin/end
variation of this macro is used when the newly defined class contains fields that must
be automated. The argument to this macro is the name of the newly declared class in
which the macro is used.
• The declaration of a new sequencer must include the declaration of its constructor
(lines 7-10). Additionally, macro ovm_update_sequence_lib_and_item() must be used
in this constructor to define the default sequence item handled by this sequencer (line
9). The argument to this macro is the name of the default sequence item to be handled
by this newly declared sequencer. This default sequence item also gives the sequence
item type generated by sequence ovm_simple_sequence (section 8.5.1).
The above example creates a minimal sequencer that produces a stream of sequence
items of type xbar_xmt_packet, and contains a sequence library with a set of predefined
sequences. The addition of user defined sequences to this sequence library is shown in the
next section. A predefined sequence of this sequencer is started by default, producing a
stream of randomized items of base type xbar_xmt_packet (section 8.5.1). This means that this
sequencer, as implemented, can be used for initial verification steps. Creating more interest-
ing scenarios, however, requires that new sequences be added to the sequence library and
started. This step is described in the next section.
8.3 Sequences
A sequence contains a recipe for generating an ordered list of sequence item instances. A
sequence declaration describes these instructions and may contain randomizable fields and
fields that get customized to the specific conditions when the sequence is created. A
sequence declaration defines a new sequence type. A sequence instance refers to a specific
instance of a sequence type that generates a sequence of item instances during the simulation
runtime. Multiple sequence instances may be created from a single sequence type, each
behaving differently because of their randomizable fields and other fields initialized to the
specific conditions when that sequence instance was created.
A flat sequence is defined only in terms of sequence items. A flat sequence contains the
following types of information:
• Sequence item(s) to generate
• Dynamic constraints¹ that should be applied to each item during generation
• Randomization control (e.g., use of functions constraint_mode() and rand_mode() to
modify sequence item static constraints)
• Flow control information (e.g., timing)
• How environment conditions affect item generation (reactive sequences)
• Relationship between consecutively generated items
A sequence declaration is identified by:
• Sequence items and/or subsequences
• Action block
The sequence action block describes how and in what order items and subsequences are
generated. A graphical view of this process is shown in figure 8.3. In this example, sequence
S1 contains three sequence items A, B, and C. The action block for this sequence indicates
that item A should be generated, followed by item B, followed by item A, and followed by
item C. The generation of sequence S1 produces a sequence of items A, B, A, and C
which are passed to the sequencer in the order of their generation.
The implementation of a flat sequence using the OVM class library is shown in the fol-
lowing example:
1. Static constraints are constraints that are specified as part of an item declaration while dynamic con-
straints are in-line constraints provided when an item instance is being randomized.
(Figure 8.3: sequence S1 generates sequence items A, B, A, and C as directed by its action block; the arbiter receives the items in the order of their generation.)
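The opening lines (1-12) of the following listing did not survive extraction; the reconstruction below is a hedged sketch inferred from the discussion that follows it, with line numbering matching the citations in that discussion and the payload constraint range assumed:

```systemverilog
// Hedged reconstruction; the constraint range on line 4 is an assumption.
 1  class xbar_xmt_seq_flat extends ovm_sequence;
 2    rand bit [7:0] default_payload;
 3    rand int unsigned count;
 4    constraint valid_payload { default_payload inside {[1:255]}; } // range assumed
 5
 6    `ovm_sequence_utils_begin(xbar_xmt_seq_flat, xbar_xmt_sequencer)
 7      `ovm_field_int(default_payload, OVM_DEFAULT)
 8    `ovm_sequence_utils_end
 9
10    function new(string name = "xbar_xmt_seq_flat");
11      super.new(name);
12    endfunction
```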
13
14   virtual task body();
15     xbar_xmt_packet this_seq_item;
16     `ovm_do_with(this_seq_item, {payload == default_payload;})
17     repeat (count)
18       `ovm_do(this_seq_item)
19   endtask
20 endclass : xbar_xmt_seq_flat
Flat sequence xbar_xmt_seq_flat is declared in the above example. This class is derived
from the predefined base class ovm_sequence (line 1). Macro ovm_sequence_utils() is used to
register this class with the factory and to add it to the sequence library for sequencer
xbar_xmt_sequencer implemented in program 8.2 (line 6). This sequence is defined to con-
tain a random field default_payload, and a random field count for controlling the number of
generated items (lines 2, 3). A constraint defining the desired range for this payload is also
included in this description (line 4). As will be shown in program 8.4, fields default_payload
and count can be used to customize this sequence when it is used as a subsequence in a hier-
archical sequence.
The action block for sequence xbar_xmt_seq_flat is specified by defining virtual task
body() and using execution macros ovm_do_with() and ovm_do() to execute sequence item
this_seq_item (lines 14-19). Macro ovm_do_with() is used to execute sequence item
this_seq_item with dynamic constraints, constraining field payload to field default_payload of
the sequence (line 16). Macro ovm_do() is used to execute sequence item this_seq_item with
its static randomization constraints (line 18). Every execution of this sequence will produce
count+1 sequence items. The OVM class library defines a set of execution macros that are
used for executing sequences and sequence items (section 8.6.2).
It should be noted that depending on the execution macro, a new instance of sequence
item this_seq_item may be created and randomized for each execution of this sequence item
(lines 16, 18). Table 8.1 provides a summary of actions performed by each execution macro.
As such, the definition of a sequence should not depend on the content of a sequence item
that is created or randomized by an execution macro. Given that both execution macros
ovm_do() and ovm_do_with() create and randomize the sequence item, the implementation
shown above follows this guideline by defining a new field holding the default payload value
and constraining sequence item this_seq_item when it is passed to an execution macro.
The above example highlights the following general guidelines that must be followed
when implementing a flat sequence:
• A new sequence must be derived from ovm_sequence (line 1).
• The declaration of a new sequence must include the declaration of its constructor
(lines 10-12).
• Macro ovm_sequence_utils(), or its begin/end variation, must be used to register a new
sequence type with the sequencer whose sequence library should contain this newly
declared sequence (lines 6-8). This macro also registers this new sequence type with
the OVM factory so that override operations can be supported. The begin/end varia-
tion of this macro is used when the newly defined class contains fields that must be
automated. The arguments to this macro are the name of the class in which this macro
is used, and the sequencer class whose sequence library should contain this newly
declared sequence.
• A sequence action block is specified by defining the contents of virtual task body()
(lines 14-19).
• In some cases, a new sequence is only defined to be used later as a base class for
other sequences, and hence, should not be placed in the sequence library of its
sequencer. For such sequences, macro ovm_object_utils() or its begin/end variations
can be used instead of macro ovm_sequence_utils(). In this case, the new sequence is
registered with the OVM factory without being placed into a sequence library.
The flat sequence created in the above example is automatically added to the sequence
library for sequencer xbar_xmt_sequencer and can be used in hierarchical sequences belong-
ing to that sequencer, or started as the root sequence of a new running sequence, or set as the
default sequence for this sequencer. The use of this sequence in a hierarchical sequence is
shown in the next section. Starting this sequence as a root sequence is shown in section 8.6.
A hierarchical sequence is defined in terms of both subsequences and sequence items. Hier-
archical sequences allow previously defined sequences to be reused, and also provide a
means of organizing a long sequence into smaller interrelated sequences. Consider a verifica-
tion sequence generating 20-30 SIZED Ethernet packets followed by a PAUSE packet fol-
lowed by 40-50 QTAGGED packets. It may be helpful to model this sequence as a hierarchical
sequence where the generation of SIZED and QTAGGED packets are defined as flat sequences
that are then used in the final hierarchical sequence.
[Figure: timeline of a hierarchical sequence S3 executing subsequences S2 and S1, showing the order in which their sequence items (A, B, D, E, F) enter the sequence item queue over time.]
For example, a sequence S1 can execute a sequence S2 in its execution thread as a root sequence,
thereby denying S2 any privileges granted to S1 (e.g., interaction with a sequencer currently
grabbed by S1), and wait for S2 to complete before continuing. Alternatively, a sequence S1
can execute sequences S2 and S3 as parallel subsequences (i.e., by using a fork statement), so
that these subsequences inherit the privileges currently granted to sequence S1.
Section 8.6 describes how root sequences and subsequences are started. The remainder
of this section shows the implementation of hierarchical sequences using the predefined exe-
cution macros of the OVM class library.
The implementation of a hierarchical sequence using the OVM class library is shown in
the following example:
This example shows how flat sequence xbar_xmt_seq_flat (defined in program 8.3) is
used in the declaration of hierarchical sequence xbar_xmt_seq_hier. This new sequence is
derived from the predefined base class ovm_sequence (line 1), and is added to the sequence
library for sequencer xbar_xmt_sequencer by using macro ovm_sequence_utils() (line 2). The
action block for this sequence is specified by defining virtual task body() and using macros
ovm_do() and ovm_do_with() to start the execution of sequence item this_seq_item (line 12) fol-
lowed by execution of subsequence this_flat_seq once without any constraints (line 13) and
once with constraints applied to randomized fields of sequence this_flat_seq (line 14). The
use of macro ovm_do_with() with a sequence argument (line 14) shows how each execution of
a subsequence in a hierarchical sequence can be customized to its execution context.
The guidelines for creating a hierarchical sequence are similar to those for a flat
sequence, except that a hierarchical sequence contains instances of previously defined
sequences (line 9) and can optionally execute sequence items. A hierarchical sequence may
contain instances of both flat and other hierarchical sequences. It should be noted that all
sequence instances used in a hierarchical sequence must belong to the same sequence library,
and hence the same sequencer.
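A sketch of a hierarchical sequence along the lines of program 8.4 is shown below. The constraint values and the item type name are illustrative assumptions, not the book's exact code.

```systemverilog
// Hedged sketch of a hierarchical sequence similar to program 8.4.
class xbar_xmt_seq_hier extends ovm_sequence;
  `ovm_sequence_utils(xbar_xmt_seq_hier, xbar_xmt_sequencer)

  xbar_xmt_seq_item this_seq_item;   // a sequence item
  xbar_xmt_seq_flat this_flat_seq;   // previously defined flat subsequence

  function new(string name = "xbar_xmt_seq_hier");
    super.new(name);
  endfunction

  virtual task body();
    `ovm_do(this_seq_item)                    // execute a sequence item
    `ovm_do(this_flat_seq)                    // subsequence, static constraints only
    `ovm_do_with(this_flat_seq,               // subsequence customized in place
                 { count == 2;                // illustrative dynamic constraints
                   default_payload == 8'ha5; })
  endtask
endclass
```

The constraints passed to `ovm_do_with() reference the random fields of the subsequence, which is how each execution is customized to its context.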
Array sequences in class ovm_sequencer holds all sequences in the sequence library of
this sequencer. The total number of sequences in the sequence library of a sequencer is given
by the size of this field. Function get_seq_kind() returns the kind for sequence type name
seq_type_name. Function get_sequence() creates a new instance of a sequence of kind
seq_kind. The resulting sequence instance is not randomized. These functions are defined for
both a sequencer and a sequence. Calling these functions from inside a sequence has the
effect of calling the same function in the sequencer that contains that sequence.
The fields and methods described above allow access to any sequence from inside a
sequencer or a sequence. The following sections show how a sequence created through this
interface can be started.
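A minimal sketch of this query interface, as used from inside a sequence, might look as follows; the sequence type name and the randomize() step are illustrative.

```systemverilog
// Hedged sketch: creating a library sequence by its registered type name.
virtual task body();
  int unsigned kind;
  ovm_sequence seq;
  kind = get_seq_kind("xbar_xmt_seq_flat"); // kind index for this type name
  seq  = get_sequence(kind);                // new, un-randomized instance
  void'(seq.randomize());                   // randomize before starting it
endtask
```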
The fields shown above can be changed through field configuration constructs (section
7.2) to customize a sequencer to behave according to the requirements of its environment.
Sequences can be executed either through method calls or through predefined macros.
Sequence items are executed by calling predefined macros. Executing root sequences and
subsequences using methods is described in section 8.6.1. Executing subsequences and
sequence items using the predefined macros of the OVM class library is described in section
8.6.2.
using this task. Other than this special use model, the use of execution macros (section 8.6.2)
is recommended over this task.
The following example program highlights two aspects of sequencer implementation:
Overriding the default behavior of a sequencer, and starting root sequences and subse-
quences. This implementation uses sequence library methods (section 8.5) and sequence start
methods shown above.
The first step in modifying the default behavior of a sequencer is to define a new
sequence that will replace the default sequence started by the sequencer, and to register it
with the sequencer. Sequence xbar_xmt_main is defined in the above example (lines 7-35).
Configuration method set_config_string() is used to change the value of the field
default_sequence in sequencer sqnsr to the name of this newly defined sequence (line 39).
The OVM factory is then used to create an instance of the sequencer (line 40), and the build()
method of this new instance is used to create its content (line 42). Sequencer sqnsr will now
start sequence xbar_xmt_main when it enters the run phase of simulation.
The action block for sequence xbar_xmt_main (lines 14-34) shows the start of two new
root sequences (lines 19-31) and the execution of a subsequence (line 33). An instance of
sequence xbar_xmt_seq_flat is started as a root sequence by first getting the kind index for the
sequence type (line 21), creating a new instance of this kind (line 22), randomizing the con-
tents of this sequence according to the local requirements (line 23), and then starting this
sequence instance by calling function start_sequence() of its parent sequencer accessed
through predefined field p_sequencer (line 26). The alternative approach of using task start()
of the sequence is used to start an instance of sequence xbar_xmt_seq_hier (lines 27-29) with-
out the randomization step, since this sequence does not contain any field requiring random-
ization. Task do_sequence_kind() of class ovm_sequence is then used to start a subsequence
(line 33).
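The flow just described can be sketched as two fragments. Program 8.5 itself is not reproduced here, so the argument lists of start() and do_sequence_kind() are assumptions about the OVM API, and the instance paths are invented.

```systemverilog
// Hedged sketch of the flow described for program 8.5.

// 1) In the environment: override the sequence started at the run phase.
set_config_string("sqnsr", "default_sequence", "xbar_xmt_main");

// 2) Inside xbar_xmt_main::body(): start root sequences and a subsequence.
virtual task body();
  int unsigned kind;
  ovm_sequence seq;
  xbar_xmt_seq_hier hier_seq;

  kind = get_seq_kind("xbar_xmt_seq_flat");  // kind index for the type name
  seq  = get_sequence(kind);                 // new, un-randomized instance
  void'(seq.randomize());                    // randomize to local requirements
  p_sequencer.start_sequence(seq);           // start as a new root sequence

  hier_seq = new("hier_seq");
  hier_seq.start(p_sequencer);               // root sequence via task start()

  do_sequence_kind(kind);                    // execute the same kind as a subsequence
endtask
```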
The use of macros to execute subsequences and sequence items is described in the next
section.
The execution flow of a subsequence (e.g., this_flat_seq on line 13 in program 8.4) con-
sists of the following steps:
• Create and initialize the subsequence
• Execute the pre_do() method of the calling sequence
• Randomize the subsequence
• Call the mid_do() method of the calling sequence
• Call method body() of the subsequence
• Call method post_do() of the calling sequence
Execution steps for a subsequence are similar to those for a sequence item. The main
difference is that executing a subsequence does not involve any synchronization with the
driver. Also, a subsequence has an action block defined by the contents of task body(), which is
called after calling task mid_do().
[Table 8.1: Sequence and Sequence Item Execution Macros — for each predefined macro, the execution steps it performs (create and initialize, pre_do(), randomize, mid_do(), send item / call body(), post_do()).]
Table 8.1 lists the predefined macros provided by the OVM class library for executing
sequence items and subsequences. Each of the macros listed in table 8.1 covers different steps
of execution, therefore allowing these macros to be combined to achieve a number of differ-
ent randomization behaviors. Some possible scenarios include:
• Execute an item with late randomization (section 8.2.1) and with item's default con-
straints:
• Use macro ovm_do() to execute the item
• Execute an item with early randomization (section 8.2.1) and with item's default con-
straints:
• Use macro ovm_create() to create the item. Then explicitly randomize the item.
Then use ovm_send() to send the item. This macro does not do any further ran-
domization and, therefore, the item is sent with the early randomization that is
explicitly performed.
• Execute an item with late randomization and with constraints in addition to the item's
default constraints:
• Use macro ovm_do_with() to execute the item
• Execute an item with late randomization and with modification of its default random-
ization constraints:
• Use ovm_create() to create the item. Explicitly turn off rand variables or con-
straint blocks in the item using randomization methods. Use ovm_rand_send() or
ovm_rand_send_with() to send the item.
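The early-randomization scenario above can be sketched as follows. The item name and constraint are illustrative assumptions.

```systemverilog
// Hedged sketch: create, randomize explicitly, then send without re-randomizing.
virtual task body();
  `ovm_create(this_seq_item)            // create only; no randomization yet
  // early, explicit randomization under locally chosen constraints
  if (!this_seq_item.randomize() with { payload < 8'h80; })
    ovm_report_error("RAND", "item randomization failed");
  `ovm_send(this_seq_item)              // sent as-is; no further randomization
endtask
```

For the last scenario, the explicit randomize() call would instead be replaced by calls such as this_seq_item.payload.rand_mode(0) or constraint_mode(0) before using `ovm_rand_send().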
The OVM class library defines a set of callback methods for a sequence object. These
methods can be used to customize the behavior of sequence generation steps. For example,
method mid_do() can be redefined to overwrite the result of randomization for some fields of a
subsequence before the action block for that subsequence is executed. Table 8.2 summarizes
the list of callback methods for a sequence object.
Table 8.2: Callback methods for a sequence object

  Return Type    Name        Arguments                       Notes
  -------------  ----------  ------------------------------  ------------------------------------------------
  task           pre_body    ()                              Called only for root sequences
  task           body        ()                              Called for both root sequences and subsequences
  task           post_body   ()                              Called only for root sequences
  task           pre_do      (bit is_item)                   Called when executing both items and subsequences
  function void  mid_do      (ovm_sequence_item this_item)   Called when executing both items and subsequences
  function void  post_do     (ovm_sequence_item this_item)   Called when executing both items and subsequences
It is important to note that if a sequence executes multiple sequence items and subse-
quences, then the same hook methods are called when executing any of these items or
sequences (e.g., the same mid_do() method is called for every execution of sequence item
and/or subsequence). As such, use of hook methods in sequences that execute multiple sub-
sequences and/or sequence items requires special facilities for making clear which sequence
item or subsequence is being executed when that hook method is called.
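One such facility is type checking inside the hook itself. The following sketch shows a mid_do() override that uses $cast to determine which object is being executed; the type and field names are illustrative assumptions.

```systemverilog
// Hedged sketch: one mid_do() hook disambiguating items from subsequences.
virtual function void mid_do(ovm_sequence_item this_item);
  xbar_xmt_seq_item item;
  xbar_xmt_seq_flat subseq;
  if ($cast(item, this_item))
    item.payload = 8'h55;     // overwrite a randomized field of the item
  else if ($cast(subseq, this_item))
    subseq.count = 1;         // customize the subsequence before its body() runs
endfunction
```

This works because, in the OVM class hierarchy, sequences are themselves derived from ovm_sequence_item.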
Figure 8.5 shows an example of the order in which these callback methods are called
during the execution of a root sequence, subsequences, and sequence items. Note that the
flow in this figure shows only the order of calling these callback methods and does not show
the full details of the steps that are carried out during the execution of sequence items and
sequences. Method names shown in this example directly correspond to steps shown in table
8.1.
[Figure 8.5: callback ordering for a root sequence rs that executes subsequence ssA, which in turn executes item iB. In start_sequence(): rs.pre_body(); then rs.body(), which executes ssA via rs.pre_do(0), rs.mid_do(ssA), and ssA.body() — which itself executes iB via ssA.pre_do(1), ssA.mid_do(iB), and ssA.post_do(iB) — followed by rs.post_do(ssA); finally rs.post_body().]
Sequence Item Interface
In using a sequence item interface, it is assumed that the driver is an initiator component
that is a transaction consumer, and the sequencer is a target component that is a transaction
producer. The sequence item interface consists of two connector object kinds:
• Imp object kind ovm_seq_item_cons_if
• Port object kind ovm_seq_item_prod_if
Connector object kind ovm_seq_item_cons_if is instantiated inside the producer compo-
nent (i.e., ovm_sequencer) and connector object kind ovm_seq_item_prod_if is instantiated
inside the consumer component (i.e., ovm_driver). Connector object ovm_seq_item_cons_if
provides the implementation for the predefined methods supported by this interface. Once
these connector objects are connected, these supported methods can be called from inside the
driver to exchange sequence items with the sequencer. Table 8.3 lists the methods supported
by a sequence item interface.
[Figure 8.7: Sequence and Sequencer Interaction. The sequencer arbiter loop waits for the driver to request an item and, if no item is available, waits for event new_item_added. The sequence's execute-item flow places the new item in the queue, emits event sqnsr.new_item_added, calls pre_do(), and waits for event do_gen.]

Grabbing refers to the ability of a sequence to grab a sequencer, thereby gaining exclu-
sive access to the arbitration mechanism. A grabbed sequencer selects sequence items from
only the sequence that has currently grabbed the sequencer or any of its subsequences. Since
the sequencer does not select sequence items from any other sequence, all other sequences
appear to block until the sequencer is ungrabbed. The OVM class library provides the fol-
lowing methods for allowing sequences to grab the sequencer:
Methods defined for class ovm_sequencer:
task grab(ovm_sequence seq);
function void ungrab(ovm_sequence seq);
function ovm_sequence current_grabber();
function bit is_grabbed();
Methods defined for class ovm_sequence:
function bit is_blocked();
Task grab() is used to grab a sequencer. If the sequencer is already grabbed, then this
task blocks until the sequence that has already grabbed this sequencer calls function ungrab().
Function current_grabber() returns a pointer to the sequence that is currently grabbing the
sequencer. Function is_grabbed() returns 1 if the sequencer is currently grabbed. A call to
grab() in a subsequence of a sequence that has already grabbed the sequencer does not block.
Examples of grabbing sequences and relevance calculation are given in section 14.10.
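A grabbing sequence might be sketched as follows; the item name and repeat count are illustrative.

```systemverilog
// Hedged sketch: a sequence that holds the sequencer for a burst of items.
virtual task body();
  p_sequencer.grab(this);        // block until exclusive access is granted
  repeat (4) begin
    `ovm_do(this_seq_item)       // only this sequence's items are selected
  end
  p_sequencer.ungrab(this);      // let other sequences resume
endtask
```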
8.9 Virtual Sequencers
The OVM class library provides the virtual sequencer construct for creating multi-sided ver-
ification scenarios. Figure 8.8 shows a graphical view of a virtual sequencer and its interac-
tion with downstream sequencers. A virtual sequencer uses sequence interfaces to interact
with downstream sequencers. A sequence interface allows a virtual sequence to execute sub-
sequences belonging to the sequence library of downstream sequencers. A sequence inter-
face also allows a virtual sequence to interact with (e.g., grab) downstream sequencers.
Virtual sequences can only execute subsequences. As such, a virtual sequencer does not have
a default sequence item type. A subsequence executed by a virtual sequence may belong to
the sequence library of the local virtual sequencer or any downstream sequencer connected
to the local sequencer through a sequence interface.
The implementation of a virtual sequencer is shown in the following program:
Program 8.6: Virtual sequencer implementation
1 class xbar_xmt_virtual_sequencer extends ovm_virtual_sequencer;
2   int unsigned port_num;
3   `ovm_sequencer_utils_begin(xbar_xmt_virtual_sequencer)
4     `ovm_field_int(port_num, OVM_ALL_ON)
5   `ovm_sequencer_utils_end
6
7   function new (string name="", ovm_component parent=null);
8     super.new(name, parent);
9     `ovm_update_sequence_lib
10  endfunction: new
11 endclass: xbar_xmt_virtual_sequencer
[Figure 8.8: a virtual sequencer interacting with downstream sequencers; each downstream sequencer can be either a real or a virtual sequencer.]
The use of virtual sequencers for generating multi-sided scenarios is shown using an
example in section 14.7.
The OVM class library provides a sequence interface construct for connecting a virtual
sequencer with a downstream sequencer. A sequence interface is a specialized interface that
supports methods customized to the requirements of sequence exchange (figure 8.9).
In using a sequence interface, it is assumed that the virtual sequencer is the initiator
component, and the downstream sequencer is a target component. The interface consists of
two connector object kinds:
• Port object kind ovm_seq_cons_if
• Imp object kind ovm_seq_prod_if
[Figure 8.9: Sequence Interface]
Port connector object kind ovm_seq_cons_if is instantiated inside the producer compo-
nent (i.e., ovm_virtual_sequencer) and imp connector object kind ovm_seq_prod_if is instanti-
ated inside the consumer component (i.e., downstream ovm_sequencer or
ovm_virtual_sequencer). Once these connector objects are connected, the supported meth-
ods can be called from inside the virtual sequencer to execute subsequences on the downstream
sequencer. Table 8.4 lists the methods supported by a sequence interface.
The implementation of a virtual sequencer, as provided by the OVM class library, con-
tains an associative array of connector objects of kind ovm_seq_cons_if. The following field
and method are provided for class ovm_virtual_sequencer:
Method and field defined for class ovm_virtual_sequencer:
ovm_seq_cons_if seq_cons_if[string];
virtual function void add_seq_cons_if(string if_name);
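A sketch of how these are used follows. The interface name "xmt_if", the environment-side binding call connect_if(), and the downstream seq_prod_if member are assumptions about the OVM connector API made for illustration.

```systemverilog
// Hedged sketch: registering and binding a named sequence interface.
// Inside the virtual sequencer's constructor:
function new(string name="", ovm_component parent=null);
  super.new(name, parent);
  add_seq_cons_if("xmt_if");   // creates entry seq_cons_if["xmt_if"]
endfunction

// Later, in the environment, after both sequencers are built (assumed API):
// vsqnsr.seq_cons_if["xmt_if"].connect_if(xmt_sqnsr.seq_prod_if);
```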
Execution steps performed by each macro (columns: Create, pre_do(), Randomize, mid_do(), body(), post_do()):

  ovm_do_seq(subseq, seq_cons_if)             Create, pre_do(), Randomize, mid_do(), body(), post_do()
  ovm_do_seq_with(subseq, seq_cons_if)        Create, pre_do(), Randomize, mid_do(), body(), post_do()
  ovm_create_seq(subseq, seq_cons_if)         Create
  ovm_send(subseq)                            pre_do(), mid_do(), body(), post_do()
  ovm_rand_send(subseq)                       pre_do(), Randomize, mid_do(), body(), post_do()
  ovm_rand_send_with(subseq, {constraints})   pre_do(), Randomize, mid_do(), body(), post_do()
Table 8.5: Subsequence Execution Macros for Virtual Sequencers
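A virtual sequence using these macros might be sketched as follows. The subsequence type, the interface name, and the extra constraint argument to `ovm_do_seq_with() are assumptions for illustration.

```systemverilog
// Hedged sketch: a virtual sequence driving a downstream sequencer.
class xbar_virtual_seq extends ovm_sequence;
  `ovm_sequence_utils(xbar_virtual_seq, xbar_xmt_virtual_sequencer)

  xbar_xmt_seq_flat flat_seq;   // subsequence from the downstream library

  function new(string name = "xbar_virtual_seq");
    super.new(name);
  endfunction

  virtual task body();
    // run the subsequence on the sequencer reached through "xmt_if"
    `ovm_do_seq(flat_seq, p_sequencer.seq_cons_if["xmt_if"])
    `ovm_do_seq_with(flat_seq, p_sequencer.seq_cons_if["xmt_if"],
                     { count == 2; })
  endtask
endclass
```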
Full examples of creating, connecting, and driving sequences through virtual sequenc-
ers are given in section 14.7.
CHAPTER 9 OVM Transaction Interfaces
[Figure 9.1: transaction-based communication between a producer and a consumer component; the initiator connector object may reside in either the producer or the consumer.]
target. Transaction control and data flows do not have to be in the same direction. The trans-
action-based communication model shown in figure 9.1 allows either the producer or the
consumer to be the transaction initiator. In this case, if the producer is the initiator, then the
consumer is the target, and the producer puts a transaction into the consumer. If the consumer
is the initiator, then the producer is the target, and the consumer requests that the producer
give it a new transaction.
Concepts of push mode and pull mode (section 8.1) are related to data and control flow
directions. In a push mode interaction, the transaction initiator is also the transaction pro-
ducer. In a pull mode interaction, the transaction initiator is the transaction consumer.
Transaction data flow can be bidirectional. In this type of connection, the transaction
initiator can be both the producer and consumer of transactions. For example, a transaction
initiator may produce a transaction, pass this transaction to the target for consumption, and
then expect the target to reply back with a reply transaction. In a unidirectional transaction
interface, transactions move in only one direction. In a bidirectional transaction interface,
transactions move in both directions over the same transaction interface.
Transaction movement in a class-based environment is achieved through transaction
connector objects and method calls. In this approach, the initiator component contains a con-
nector object that provides it with a set of predefined methods. While developing the initiator
component, these methods are used as needed, with the understanding that the actual imple-
mentation of these methods will be provided by the target component that the initiator will
eventually get connected to. At the same time, a target component contains a connector
object that must provide the implementation of methods that are assumed to be available by
any initiator component connecting to this target component. Transaction connector object
kind refers to the different kinds of connector objects that are used in the initiator and target
components, and for routing transaction interfaces through layers of component hierarchy
(section 9.1). Transaction connector object type is defined by the set of methods that is sup-
ported by a connector object.
Transaction connector objects are linked when the component hierarchy is created.
Once this link is in place, calling a predefined method of the connector object in the initiator
component has the effect of calling the implementation of that method in the target compo-
nent. A major advantage of a communication infrastructure implemented using this approach
is that both the initiator and target components can be implemented independently, and with-
out requiring any special knowledge about the internal implementation of the other compo-
nent.
A common use of a transaction-based connection is where multiple consumer compo-
nents initiate requests to a common producer component asking for a transaction. This means
that transaction interfaces must support connections between multiple transaction consumers
that are also the initiators with a single transaction producer which is also the transaction tar-
get. Another common use of a transaction-based connection is where a transaction producer
broadcasts its next transaction to multiple target components that each receive a copy of the
broadcasted transaction. A classic example of this use model is where multiple verification
components (e.g., scoreboard, coverage collector, etc.) receive packets from a monitor com-
ponent. An analysis interface defines a special type of transaction interface where an initia-
tor component that is the producer of transactions can broadcast its transactions to multiple
subscribing target components. Transaction-based connectivity does not support the case
where a single initiator component that is a transaction consumer is connected to multiple
target components that are transaction producers. The reason is that transaction-based con-
nectivity does not support any arbitration mechanism for determining which producer should
supply the transaction requested by the consumer.
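The broadcast use model described above (covered in detail in section 9.5) might be sketched roughly as follows; the component and transaction type names are invented for illustration.

```systemverilog
// Hedged sketch: a monitor broadcasting packets to analysis subscribers.
class monitor extends ovm_component;
  ovm_analysis_port #(packet) ap;      // broadcast port
  function new(string name, ovm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction
  // on observing a packet:  ap.write(pkt);  // every subscriber gets a copy
endclass

class scoreboard extends ovm_component;
  ovm_analysis_imp #(packet, scoreboard) analysis_export;
  function new(string name, ovm_component parent);
    super.new(name, parent);
    analysis_export = new("analysis_export", this);
  endfunction
  virtual function void write(packet p);
    // check p against expected results
  endfunction
endclass
```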
The OVM class library provides the following transaction interface types:
• Unidirectional transaction interfaces
• Bidirectional transaction interfaces
• Analysis interfaces
• Transaction channels
The implementation of unidirectional and bidirectional interfaces are described in sec-
tions 9.3 and 9.4, respectively. Analysis interfaces provide a specialized transaction-based
mechanism for allowing multiple target components that are transaction consumers to sub-
scribe to a single initiator component that is a transaction producer. Analysis interfaces are
described in section 9.5. Transaction channels provide specialized class objects that contain
the implementation of the predefined methods required by transaction connector objects,
thereby simplifying both the modeling and the implementation of transaction interfaces.
Transaction channels are described in section 9.7.
The OVM class library defines three connector object kinds: imp objects, port objects,
and export objects. Imp objects are used inside target components to provide the implementa-
tion of the predefined methods that are supported by a transaction interface (e.g., i0, i1 in fig-
ure 9.2). Port objects are used inside initiator components to provide access to the predefined
methods of the transaction interface (e.g., p0, p1, p2, p5 in figure 9.2). Port objects are also
used to route transaction interfaces to the higher layers of the hierarchy (e.g., p3, p4 in figure
9.2). Export objects are used to route transaction interfaces to the other layers of the hierar-
chy (e.g., e0 in figure 9.2).
[Figure 9.2 legend: Port Object, Export Object, Imp Object]
The OVM class library defines the following unidirectional connector objects:
ovm_UNIDIR_TYPE_imp #(trans_type, imp_parent_type)
ovm_UNIDIR_TYPE_port #(trans_type)
ovm_UNIDIR_TYPE_export #(trans_type)
In the above description, UNIDIR_TYPE refers to one of the unidirectional interface types
supported by the library (section 9.3). Parameter trans_type identifies the class type of the
transaction moving across the interface, and imp_parent_type identifies the class type of the
target component in which the imp connector object is instantiated. Examples of using these
connector object kinds are given in section 9.3.
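For example, the put interface might be parameterized as sketched below. The transaction type xbar_packet and the component names are invented for illustration.

```systemverilog
// Hedged sketch: parameterizing unidirectional connector kinds.
class consumer_comp extends ovm_component;
  // imp inside the target: transaction type + the class providing method bodies
  ovm_put_imp #(xbar_packet, consumer_comp) put_imp;
  function new(string name, ovm_component parent);
    super.new(name, parent);
    put_imp = new("put_imp", this);
  endfunction
  // the imp's parent must implement the interface methods:
  task put(xbar_packet t);             /* consume t */        endtask
  function bit try_put(xbar_packet t); return 1; endfunction
  function bit can_put();              return 1; endfunction
endclass

class producer_comp extends ovm_component;
  ovm_put_port #(xbar_packet) put_port;   // port inside the initiator
  function new(string name, ovm_component parent);
    super.new(name, parent);
    put_port = new("put_port", this);
  endfunction
endclass
```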
The OVM class library defines the following bidirectional connector objects:
ovm_BIDIR_TYPE_imp #(req_trans_type, rsp_trans_type, parent_class_type)
ovm_BIDIR_TYPE_port #(req_trans_type, rsp_trans_type)
ovm_BIDIR_TYPE_export #(req_trans_type, rsp_trans_type)
In the above description, BIDIR_TYPE refers to one of the bidirectional interface types
supported by the library (section 9.4). Parameter req_trans_type identifies the class type of the
transaction object moving from the initiator component to the target component. Parameter
rsp_trans_type identifies the class type of the transaction object moving from the target com-
ponent to the initiator component. Examples of using these connector object kinds are given in
section 9.4.
Function connect() is used only to specify which connector objects must be connected. It
does not connect these objects. Function resolve_all_bindings() of the top level component in
the hierarchy is used, after all connections between connector objects are specified by using
function connect(), to actually establish the necessary connections. Note that by default, func-
tion resolve_all_bindings() is called as part of the elaboration phase of simulation (table 7.1).
As such, calling this function explicitly is only required for specialized implementations
where the predefined simulation phases of OVM are not being used.
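This two-step binding can be sketched as follows; the component and member names are invented for illustration.

```systemverilog
// Hedged sketch: connect() records the binding; resolve_all_bindings(),
// normally called during elaboration, actually establishes it.
class top_comp extends ovm_component;
  producer_comp prod;
  consumer_comp cons;
  function new(string name, ovm_component parent);
    super.new(name, parent);
    prod = new("prod", this);
    cons = new("cons", this);
    prod.put_port.connect(cons.put_imp);  // specify the port-to-imp binding
  endfunction
endclass

// Only needed when the predefined OVM phases are bypassed:
// top.resolve_all_bindings();
```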
The following program shows an example of creating a unidirectional and a bidirec-
tional transaction interface across two components. Note that in this example, the focus is on
the mechanics of implementing the methods supported by a transaction interface and on
binding the connector objects that form this transaction interface. Details of unidirectional
and bidirectional interfaces are described in sections 9.3 and 9.4. In addition, the use of trans-
action interfaces in building a verification environment according to the guidelines of OVM
is shown in chapter 13.
The inclusion of these imp connector objects in class target requires that the implementation for the
predefined methods of these connector objects also be defined in class target (section 9.3).
The implementation for these predefined methods is given in class target (lines 28-30). The
top level component for this example, top_comp, contains i0 and t0, instances of classes initia-
tor and target respectively (lines 34, 35). These instances are initialized in the constructor for
this class (lines 39, 40), and the port and imp connector objects inside each instance are then
connected using function connect(), and according to the ordering rule required for making
such connections (lines 41, 42). The top level component is then instantiated in the top level
module (line 46), initialized (line 49), and all of its connections finalized by calling its pre-
defined function resolve_all_bindings() (line 50).
different transaction instance. Because of the potential for waiting, only tasks may
call this method.
• try_get(): Returns a new transaction of type TR from the target to the initiator. If a
transaction is immediately available, it is returned in the provided output argument
and the function returns 1. Otherwise, the output argument is not modified and 0 is
returned.
• can_get(): Returns 1 if the target is ready to return a transaction immediately upon calling
get() or try_get(). Otherwise it returns 0.
• peek(): Returns a new transaction from the target without consuming it. If a transaction is
available, then it is written to the provided output argument. If a transaction is not
available, the calling thread is blocked until one becomes available. The returned
transaction is not consumed. A subsequent call to peek() or get() will return the same
transaction. Because of the potential for waiting, only tasks may call this method.
• try_peek(): Returns a new transaction without consuming it. If available, a transaction
is written to the output argument and 1 is returned. A subsequent call to peek() or get()
will return the same transaction. If a transaction is not available, the argument
remains unmodified and 0 is returned.
• can_peek(): Returns 1 if a new transaction is available, 0 otherwise.
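The semantics above can be sketched from the consumer's side as follows; the transaction type and task name are illustrative.

```systemverilog
// Hedged sketch of the get/peek semantics described above.
task drain(ovm_get_peek_port #(xbar_packet) p);
  xbar_packet t;
  if (p.try_get(t))      // non-blocking: returns 1 and fills t only if available
    $display("got a transaction immediately");
  p.peek(t);             // blocks until one is available; does not consume it
  p.get(t);              // consumes; returns the same transaction peek() returned
endtask
```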
The OVM class library defines the following unidirectional interface types (xxx indi-
cates connector object kind, and can be one of "port", "export", or "imp"):
• ovm_blocking_put_xxx
• ovm_nonblocking_put_xxx
• ovm_put_xxx
• ovm_blocking_get_xxx
• ovm_nonblocking_get_xxx
• ovm_get_xxx
• ovm_blocking_peek_xxx
• ovm_nonblocking_peek_xxx
• ovm_peek_xxx
• ovm_blocking_get_peek_xxx
• ovm_nonblocking_get_peek_xxx
• ovm_get_peek_xxx
The different interface connector object kinds (i.e., port, export, imp) are used to make a
physical connection between components, where imp connector objects are instantiated
inside target components and port connector objects are instantiated inside initiator compo-
nents. On the other hand, the different connector object types listed above define the direc-
tion and semantics of data exchange between the initiator and the target components.
Unidirectional interface types differ in the methods they support. Table 9.2 summarizes
the methods supported by each of these unidirectional interface types. For example, connec-
tor object kinds ovm_put_port, ovm_put_imp, and ovm_put_export support only methods put(),
try_put(), and can_put().
The set of methods supported by each interface type also defines the data flow direction
of the interface. For example, peek() is supported by only an imp connector object that is
inside a producer component. Similarly, put() is supported by only an imp connector object
that is inside a consumer component. This means that an ovm_peek_xxx interface is used
when the initiator component is the transaction consumer, while an ovm_put_xxx interface is
used when the initiator component is the transaction producer.
[Table 9.2: methods supported by each unidirectional interface type — put(), try_put(), can_put(), get(), try_get(), can_get(), peek(), try_peek(), can_peek(); blocking types support only the blocking method, nonblocking types only the try/can variants, and the combined types all three.]
Figure 9.3 shows an example of the interaction between a producer and a consumer component. In this example, both the producer and the consumer can be the transaction initiator. If a transaction is initiated by the producer, then it places a transaction into the consumer component through its put_port. If a transaction is initiated by the consumer component, then it can either peek into, or get a transaction from, the producer through its get_peek_port. A level of hierarchy is added to show how connector objects should be used to route transaction interfaces through hierarchy layers. Because of the inclusion of a put_port and a get_peek_port, the implementation of this example highlights the use of all predefined methods listed in table 9.2. The implementation of this example is shown in the remainder of this section.
Top Level Block
The first step in defining this environment is to define the base transaction type. This
definition is shown in the following program:
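The original listing is not reproduced here; a minimal sketch of such a base transaction type (the class name basic_tr and its data field are assumptions for illustration) might look like:

```systemverilog
// Hypothetical base transaction type for the producer/consumer example.
// The name basic_tr and the data field are illustrative assumptions.
class basic_tr extends ovm_transaction;
  rand int unsigned data;  // payload carried from producer to consumer

  function new(string name = "basic_tr");
    super.new(name);
  endfunction
endclass
```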
The implementation of the producer core component highlights the use of connector objects ovm_put_port and ovm_get_peek_imp, and the implementation of methods that must be supported by imp connector object get_peek_imp. This implementation is shown below:
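The listing itself is omitted here; the following sketch outlines its likely shape (the transaction type basic_tr and the helper next_tr() are assumptions for illustration):

```systemverilog
// Sketch of a producer core with a put port and a get/peek imp.
// basic_tr and next_tr() are illustrative assumptions.
class producer_core extends ovm_component;
  ovm_put_port     #(basic_tr)                put_port;
  ovm_get_peek_imp #(basic_tr, producer_core) get_peek_imp;

  function new(string name, ovm_component parent);
    super.new(name, parent);
    put_port     = new("put_port", this);
    get_peek_imp = new("get_peek_imp", this);
  endfunction

  // Methods required by get_peek_imp; called when the consumer initiates.
  task get(output basic_tr tr);  tr = next_tr(); endtask
  task peek(output basic_tr tr); tr = next_tr(); endtask
  function bit try_get(output basic_tr tr);  tr = next_tr(); return 1; endfunction
  function bit try_peek(output basic_tr tr); tr = next_tr(); return 1; endfunction
  function bit can_get();  return 1; endfunction
  function bit can_peek(); return 1; endfunction

  // Assumed helper producing the next transaction.
  function basic_tr next_tr(); next_tr = new(); endfunction
endclass
```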
The implementation of the predefined methods of put_port is not provided in this class; it is assumed that a call to any of the predefined methods of put_port results in calling the implementation of that method in the consumer core component. Conversely, any call to the predefined methods of a port connector object of type ovm_get_peek_port in another component that is connected to get_peek_imp of this class results in the implementation of that method in this class being called.
The implementation of the consumer core component highlights the use of connector objects ovm_put_imp and ovm_get_peek_port, and the implementation of methods that must be supported by imp connector object put_imp. This implementation is shown below:
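The listing itself is omitted here; a sketch of its likely shape (the transaction type basic_tr and the queue-based storage are assumptions for illustration) is:

```systemverilog
// Sketch of a consumer core with a put imp and a get/peek port.
// basic_tr and the queue-based storage are illustrative assumptions.
class consumer_core extends ovm_component;
  ovm_put_imp       #(basic_tr, consumer_core) put_imp;
  ovm_get_peek_port #(basic_tr)                get_peek_port;

  basic_tr q[$];  // received transactions

  function new(string name, ovm_component parent);
    super.new(name, parent);
    put_imp       = new("put_imp", this);
    get_peek_port = new("get_peek_port", this);
  endfunction

  // Methods required by put_imp; called when the producer initiates.
  task put(input basic_tr tr); q.push_back(tr); endtask
  function bit try_put(input basic_tr tr); q.push_back(tr); return 1; endfunction
  function bit can_put(); return 1; endfunction
endclass
```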
The implementation of class producer (lines 1-14) contains object core, which is an instance of producer_core; port connector object put_port, used to connect core.put_port to the higher layer; and export connector object get_peek_export, used to connect the higher layer to core.get_peek_imp. These objects are initialized in the class constructor (lines 8-10) and then appropriate connections are made between connector objects in the local scope and connector objects in object core (lines 11, 12). The implementation of class consumer (lines 16-30) follows the same flow.
The implementation of environment top level, shown below, highlights the step for
resolving all bindings made in the hierarchy rooted at a component, and illustrates the use of
a transaction port by calling its predefined method:
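The listing is omitted here; a sketch of such a top-level environment (the component and member names are assumptions that follow the figure's description) might be:

```systemverilog
// Sketch of the top-level environment connecting producer and consumer.
// Component names and the run() usage are illustrative assumptions.
class example_top extends ovm_component;
  producer prod;
  consumer cons;

  function new(string name, ovm_component parent);
    super.new(name, parent);
    prod = new("prod", this);
    cons = new("cons", this);
    // Route each interface through the hierarchy layers.
    prod.put_port.connect(cons.put_export);
    cons.get_peek_port.connect(prod.get_peek_export);
  endfunction

  task run();
    basic_tr tr = new();
    // Illustrative use of a transaction port's predefined method;
    // the call is forwarded to the consumer core's implementation.
    prod.put_port.put(tr);
  endtask
endclass
```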
Bidirectional interface types have the ability to use a single interface to send one type of
transaction from the transaction initiator to the transaction target, and receive a different kind
of transaction from the transaction target. The types of these transactions are given by the
parameters specified when instantiating bidirectional interface objects (see section 9.1 for the
required parameter list for bidirectional connector objects). Master or slave connector objects
use methods defined for unidirectional interfaces to move transactions between the initiator
and the target on a single interface. Transport connector objects use methods especially intro-
duced for bidirectional transfer of transactions. Table 9.4 summarizes the methods supported
by each of these bidirectional interface types.
A transaction initiator that is a producer contains a master connector object (e.g., ovm_blocking_master_port), uses put() methods to move a transaction of type req to the target component, and uses get() or peek() methods to ask the target component for a transaction of type rsp. In contrast, a transaction initiator that is a consumer contains a slave connector object (e.g., ovm_blocking_slave_port), uses get() or peek() methods to receive a transaction of type req from the target component, and uses put() methods to return to the target component a transaction of type rsp.
A transport connector object uses a single method call to send a transaction of type req and receive a transaction of type rsp. This type of interface can only be used by transaction initiators that are producers. Master interface types allow the sending and receiving of transactions to be performed using separate method calls. Transport interface types require that this exchange be performed using a single method call.
Table 9.4: Methods supported by bidirectional interface types

Interface type                    Supported methods
ovm_blocking_master_xxx           put(), get(), peek()
ovm_nonblocking_master_xxx        try_put(), can_put(), try_get(), can_get(), try_peek(), can_peek()
ovm_master_xxx                    all blocking and nonblocking master methods
ovm_blocking_slave_xxx            put(), get(), peek()
ovm_nonblocking_slave_xxx         try_put(), can_put(), try_get(), can_get(), try_peek(), can_peek()
ovm_slave_xxx                     all blocking and nonblocking slave methods
ovm_blocking_transport_xxx        transport()
ovm_nonblocking_transport_xxx     nb_transport()
ovm_transport_xxx                 transport(), nb_transport()
Figure 9.4 shows an example of interaction between a producer and a consumer component. In this example, the producer is the transaction initiator. The producer component contains two bidirectional ports of types ovm_transport_port and ovm_blocking_master_port. In both these connections, the producer component is the transaction initiator. The difference between the two interfaces is that in the ovm_transport interface, the sending of the request transaction and the receiving of a response transaction occur with one method call, whereas in the ovm_blocking_master interface, this exchange occurs using separate method calls. The implementation of this example is shown in the remainder of this section.
The first step in implementing this environment is to define the base request and
response transaction types. This definition is shown in the following program:
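The original listing is not reproduced here; a minimal sketch of such request and response transaction types (the names and fields are assumptions for illustration) might be:

```systemverilog
// Hypothetical request and response transaction types for the
// bidirectional example; names and fields are illustrative assumptions.
class request_tr extends ovm_transaction;
  rand int unsigned addr;
  function new(string name = "request_tr"); super.new(name); endfunction
endclass

class response_tr extends ovm_transaction;
  rand int unsigned data;
  function new(string name = "response_tr"); super.new(name); endfunction
endclass
```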
19 wait_until_next_request_can_be_accepted();
20 handle_request(req_tr);
21 wait_until_response_is_ready();
22 rsp_tr = get_response();
23 endtask
24
25 function bit nb_transport(input request_tr req_tr, output response_tr rsp_tr);
26 if (must_wait_for_response(req_tr)) return 0;
27 handle_request(req_tr);
28 rsp_tr = get_response();
29 return 1;
30 endfunction
31
32 task put(input request_tr req_tr);
33 wait_until_next_request_can_be_accepted();
34 handle_request(req_tr);
35 endtask
36
37 task get(output response_tr rsp_tr);
38 wait_until_response_is_ready();
39 rsp_tr = get_response();
40 endtask
41
42 task peek(output response_tr rsp_tr);
43 wait_until_response_is_ready();
44 rsp_tr = peek_at_response();
45 endtask
46 endclass
11 prod.b_master_port.connect(cons.b_master_imp);
12 endfunction
13 endclass
The implementation of environment top level, shown below, highlights the step for
resolving all bindings made in the hierarchy rooted at a component, and illustrates the use of
a transaction port:
One common use model for this type of connection is when a monitor component collects packets from a DUV port, and then broadcasts these collected packets. The broadcast packet can then be received and processed by multiple components requiring access to this packet (e.g., scoreboard).
The OVM class library provides the analysis interface for implementing this type of broadcast connectivity. The analysis interface supports the following connector object kinds:
ovm_analysis_port #(tr_type)
ovm_analysis_export #(tr_type)
ovm_analysis_imp #(tr_type, imp_parent_type)
In this syntax, tr_type gives the type of the broadcast transaction, and imp_parent_type gives the class type for the component in which the imp connector object is placed.
An analysis interface supports only a single method, given by the following function prototype:
function void write(input tr_type tr);
The first step in defining this environment is to define the base request and response transaction types. This definition is shown in the following program:
The implementation of component subscriber highlights the use of analysis imp connector object ovm_analysis_imp, and the implementation of its predefined function write(). This implementation is shown below:
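The listing is omitted here; a sketch of such a subscriber (the transaction type basic_tr and the body of write() are assumptions for illustration) is:

```systemverilog
// Sketch of a subscriber with an analysis imp connector object.
// basic_tr and the body of write() are illustrative assumptions.
class subscriber extends ovm_component;
  ovm_analysis_imp #(basic_tr, subscriber) analysis_imp;

  function new(string name, ovm_component parent);
    super.new(name, parent);
    analysis_imp = new("analysis_imp", this);
  endfunction

  // Called once for every transaction broadcast on the analysis port.
  function void write(input basic_tr tr);
    // process the broadcast transaction (e.g., update a scoreboard)
  endfunction
endclass
```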
Program 9.16: Analysis example top component implementation
1 class analysis_example_top extends ovm_component;
2 broadcaster bcaster;
3 subscriber scribers[3];
4
5 function new(string name, ovm_component parent);
6 super.new(name, parent);
7 bcaster = new("bcaster", this);
8 for (int i = 0; i < 3; i++) begin
9 string inst_name; $sformat(inst_name, "ms[%0d]", i);
10 scribers[i] = new(inst_name, this);
11 bcaster.analysis_port.connect(scribers[i].analysis_imp);
12 end
13 endfunction
14 endclass
Macro for Declaring New Imp Type                Imp Connector Type Name
`ovm_blocking_put_imp_decl(SFX)                 ovm_blocking_put_imp_SFX
`ovm_nonblocking_put_imp_decl(SFX)              ovm_nonblocking_put_imp_SFX
`ovm_put_imp_decl(SFX)                          ovm_put_imp_SFX
`ovm_blocking_get_imp_decl(SFX)                 ovm_blocking_get_imp_SFX
`ovm_nonblocking_get_imp_decl(SFX)              ovm_nonblocking_get_imp_SFX
`ovm_get_imp_decl(SFX)                          ovm_get_imp_SFX
`ovm_blocking_peek_imp_decl(SFX)                ovm_blocking_peek_imp_SFX
`ovm_nonblocking_peek_imp_decl(SFX)             ovm_nonblocking_peek_imp_SFX
`ovm_peek_imp_decl(SFX)                         ovm_peek_imp_SFX
`ovm_blocking_get_peek_imp_decl(SFX)            ovm_blocking_get_peek_imp_SFX
`ovm_nonblocking_get_peek_imp_decl(SFX)         ovm_nonblocking_get_peek_imp_SFX
`ovm_get_peek_imp_decl(SFX)                     ovm_get_peek_imp_SFX
`ovm_blocking_master_imp_decl(SFX)              ovm_blocking_master_imp_SFX
`ovm_nonblocking_master_imp_decl(SFX)           ovm_nonblocking_master_imp_SFX
`ovm_master_imp_decl(SFX)                       ovm_master_imp_SFX
`ovm_blocking_slave_imp_decl(SFX)               ovm_blocking_slave_imp_SFX
`ovm_nonblocking_slave_imp_decl(SFX)            ovm_nonblocking_slave_imp_SFX
`ovm_slave_imp_decl(SFX)                        ovm_slave_imp_SFX
`ovm_blocking_transport_imp_decl(SFX)           ovm_blocking_transport_imp_SFX
`ovm_non_blocking_transport_imp_decl(SFX)       ovm_non_blocking_transport_imp_SFX
`ovm_transport_imp_decl(SFX)                    ovm_transport_imp_SFX
`ovm_analysis_imp_decl(SFX)                     ovm_analysis_imp_SFX
Table 9.5: Macros for Defining New Imp Connector Object Types
The use of this approach to include multiple imp connector objects in one component is
shown in the following program fragment:
1 `ovm_blocking_put_imp_decl(_rcv)
2 `ovm_blocking_put_imp_decl(_xmt)
3
4 class rcv_xmt_component #(type T=int) extends ovm_component;
5 ovm_blocking_put_imp_rcv #(T) rcv_imp;
6 ovm_blocking_put_imp_xmt #(T) xmt_imp;
7
8 function void put_rcv(input T t);
9
10
11 endfunction
12
13 function void put_xmt(input T t);
14
15
16 endfunction
17 endclass
9.7 Transaction Channels
The transaction interfaces described so far in this chapter are passive in nature. This means that the interface simply acts as a conduit of transactions, where one side of the interface must act as the transaction initiator and the other side must act as the target providing the implementation of methods that are called by the transaction initiator.
Introducing a communication channel that connects a producer and a consumer allows the components on both sides of the interface to act as either the transaction initiator or the target. In this configuration, the communication channel holds the infrastructure that is necessary for managing the transfer of transactions from the producer to the consumer. Table 9.6 shows a summary of the roles a transaction producer and consumer can take in exchanging transactions.
Rows 1 and 2 summarize the case where one of the two communicating components is the initiator and the other side is the target. These two cases are implemented using the transaction interfaces described earlier in this chapter.
Row 3 summarizes the case where both the producer and the consumer are initiators. In
this case, a producer decides when to pass its transaction to the channel, and a consumer
decides when to take that transaction from the channel. In this configuration, the channel is
passive (i.e., does not decide when transfers occur), and must have the ability to store trans-
actions received from the producer.
Row 4 summarizes the case where both the producer and consumer are targets. In this
case, the channel is active (i.e., decides transaction transfer timing), and can either be a
pass-through or store the transactions received from the producer.
Figure 9.6 shows the architectural view of a passive transaction channel. As shown, the producer and consumer contain port connector objects, and the channel contains the imp connector objects. An important benefit of this configuration is that the implementation of all methods required by the imp connector objects is provided as part of the channel. This means that once a passive channel is implemented for a given transaction type, there is no longer a need to provide method implementations inside either the producer or the consumer, effectively eliminating a major part of the effort required for building a transaction-based connection. So, given a passive channel for a given transaction type, implementing the connection between a producer and consumer is as simple as instantiating port connector objects inside each component and connecting them to the channel.
Passive channels are more useful than active channels. The reason is that passive channels eliminate the need for providing method implementations in the producer and consumer components, while at the same time leaving the timing of transaction transfer to the producer and consumer components. Active channels, in contrast, require that method implementations be provided by the producer and the consumer, while taking control of the timing of transaction movement. Clearly, a passive channel is a more useful structure for connecting components.
The following passive channel types are supported by the OVM class library:
• TLMFIFO
• Analysis FIFO
• Request/response channel
These channel types are described in the following subsections.
A TLM FIFO channel provides the following export connector objects for connecting producers and consumers:
• put_export, blocking_put_export, nonblocking_put_export
• get_export, blocking_get_export, nonblocking_get_export
• peek_export, blocking_peek_export, nonblocking_peek_export
• get_peek_export, blocking_get_peek_export, nonblocking_get_peek_export
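As a sketch of how such a channel is used (the transaction type basic_tr and the producer and consumer component names are assumptions), a producer and a consumer can be wired to a tlm_fifo as follows:

```systemverilog
// Sketch: connecting a producer and a consumer through a passive
// TLM FIFO channel; basic_tr, producer, and consumer are assumptions.
class fifo_env extends ovm_component;
  producer             prod;
  consumer             cons;
  tlm_fifo #(basic_tr) fifo;

  function new(string name, ovm_component parent);
    super.new(name, parent);
    prod = new("prod", this);
    cons = new("cons", this);
    fifo = new("fifo", this);
    // Both sides are initiators; the channel stores the transactions.
    prod.put_port.connect(fifo.put_export);
    cons.get_port.connect(fifo.get_export);
  endfunction
endclass
```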
[Figure: an analysis connection with slow subscribers (transaction initiators) and fast subscribers (transaction targets).]
The OVM class library provides class tlm_analysis_fifo for modeling an analysis FIFO. This class is implemented by adding analysis imp connector object analysis_export of type ovm_analysis_imp to an unbounded TLM FIFO channel. The implementation of function write() for this component writes the incoming transaction into the TLM FIFO. A producer is then connected to analysis_export, and subscribers are connected to put_ap.
PART 4
Randomization Engine
and Data Modeling
CHAPTER 10 Constrained Random
Generation
1. The reachable set RS(v1, v2, ..., vn) for random variables v1, v2, ..., vn is defined as the set of all valid simultaneous value assignments to these variables that satisfy a set of randomization constraints.
The random generation engine does not assign values to v1 and v2 one at a time, but instead picks a value from the reachable set. In this case, the random generation engine picks a value from the set {(0,0),(0,1),(1,1)} with equal probability of 1/3 for each possible value.
It is instructive to compute the probability of variables v1 and v2 taking different values. It can be seen from RS(v1,v2) that v1=1 for one out of three choices in the reachable set, giving prob(v1==1)=1/3, and v2=1 for two out of three choices in the reachable set, giving prob(v2==1)=2/3. In summary:
prob(v1==1)=1/3   prob(v1,v2==00)=1/3   prob(v1,v2==10)=0
prob(v2==1)=2/3   prob(v1,v2==01)=1/3   prob(v1,v2==11)=1/3
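A constraint producing this reachable set (the constraint v1 <= v2, consistent with the set {(0,0),(0,1),(1,1)} above, is an assumption about the original example) can be written as:

```systemverilog
// Sketch of a randomization problem whose reachable set is
// {(0,0),(0,1),(1,1)}; the class name is an illustrative assumption.
class two_bits;
  rand bit v1, v2;
  constraint c { v1 <= v2; }  // rules out only the pair (1,0)
endclass
```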
The above example is simple. In practice, constraints are more complex and may in fact
lead to cases where a variable has to be decided before the reachable set for another variable
can be computed. Consider the following randomization problem:
Example 10.2: Assign random values to bits v1 and v2 subject to constraint:
v2 >= myfunc(v1)
In this example, the relationship between v1 and v2 cannot be immediately understood by simply looking at the given constraint, since function myfunc() may implement any function of variable v1. In such cases, the value for variable v1 must be known before the constraint for variable v2 can be considered. This requirement imposes an ordering for the assignment of random values to variables v1 and v2, where v1 must be assigned a random value before v2 is assigned a random value.
For simplicity, assume myfunc() returns the value of its argument. Therefore, this constraint is in reality the same as v2 >= v1. As mentioned, v1 must be assigned a value before the constraint for v2 can be solved; therefore the focus is to first do random assignment for v1.
First, the reachable set for variable v1 is computed. No constraints apply exclusively to v1; therefore RS(v1), the reachable set for v1, is the set of all possible values for v1, which is the set {0,1}. The random generation engine then randomly picks a value for v1 with equal probability from RS(v1).
Next, depending on the value assigned to v1, the reachable set for variable v2 is computed. If v1 is assigned a 0, then RS(v2)={0,1}. If v1 is assigned a 1, then RS(v2)={1}.
It is instructive to compute the probability of a variable or a set of variables being assigned each of their possible single or combined values. Variable v1 is assigned first, where a value is selected with equal probability from the set {0,1}; therefore prob(v1==1)=1/2. If v1 is set to 0, then v2 may be assigned either a value of 0 or 1 with equal probability of 1/2 each, making prob(v1,v2==00)=1/4 and prob(v1,v2==01)=1/4. If v1 is set to 1, then v2 must be assigned to 1, making prob(v1,v2==11)=1/2. Considering the probability for each combination gives prob(v2==1)=3/4. In summary:
prob(v1==1)=1/2   prob(v1,v2==00)=1/4   prob(v1,v2==10)=0
prob(v2==1)=3/4   prob(v1,v2==01)=1/4   prob(v1,v2==11)=1/2
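Example 10.2 can be sketched as follows (the body of myfunc() follows the text's simplifying assumption that it returns its argument):

```systemverilog
// Sketch of example 10.2: a function call in a constraint forces v1
// to be solved before v2.
class ordered_bits;
  rand bit v1, v2;
  constraint c { v2 >= myfunc(v1); }  // v1 is solved first

  function bit myfunc(bit v);
    return v;  // assumption from the text: returns its argument
  endfunction
endclass
```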
As can be seen from this example, ordering of variables led to completely different variable probabilities than those in example 10.1 because of the use of a function in describing a constraint. Having a good sense for the probability distributions of a set of randomly generated variables, therefore, requires a good understanding of the orderings that may be imposed during the generation process.
Variable ordering that is imposed because of constraints can cause the random genera-
tion engine to fail when there is in fact a viable random assignment of variables that meets
the specified constraints. Consider the following example:
Note that the process described in this example can easily be extended to multiple variables (e.g., assign v1 and v2 before v3 and v4).
Examples provided in this section identify three modes of operation for a constrained
random generator:
• Non-ordered: No variable ordering
• Assignment-based ordered: Variable ordering only during random value assignment
• Constraint-based ordered: Variable ordering during constraint solving (and hence
during value assignment)
A constrained random generation engine should optimally be operating in non-ordered
mode, with selective use of assignment-ordered mode to control random value probabilities.
Constraint-ordered mode will be imposed by some constraints and at times cannot be
avoided. The following sections describe the random generation utilities of SystemVerilog
with these properties and behaviors in mind.
The first step in solving the randomization problem is to rule out any circular dependen-
cies in any constraint orderings. Consider the following example:
Example 10.5: Assign random values to bits v1 and v2 subject to constraints:
v1 > myfunc(v2)
solve v1 before v2
The set of constraints for this example leads to a circular dependency. The first constraint leads to a constraint-based ordering of variables where the reachable set for v2 must be computed first and v2 assigned a value before v1 can be assigned a value. The second constraint leads to an assignment-based ordering requiring that v1 is assigned a value before v2. An error message will be generated when such circular dependencies are detected.
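Example 10.5 can be sketched as follows (myfunc()'s body is an assumed identity function, as in the earlier examples):

```systemverilog
// Sketch of example 10.5: the function call requires v2 to be solved
// before v1, while solve...before requires v1 before v2 -- a circular
// dependency that the solver reports as an error.
class circular_bits;
  rand bit v1, v2;
  constraint c1 { v1 > myfunc(v2); }    // constraint-based: v2 first
  constraint c2 { solve v1 before v2; } // assignment-based: v1 first

  function bit myfunc(bit v); return v; endfunction
endclass
```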
The next step is to partition the random variables into ordered groups of variables where
the variables in each group must be assigned values before variables in the following group
can be processed. Constraint-based ordering is used to identify these partitions. Consider the
following example:
Example 10.6: Assign random values to 2-bit v1, v2, v3, v4, v5 subject to:
1: v3 > myfunc(v2)
2: v3 > myfunc(v4)
3: v5 > myfunc(v4)
4: v1 < v3
5: v2 < v4
6: solve v3 before v1
A general guideline for this grouping is to postpone assigning a variable to as late a time as possible. The goal here is to increase the flexibility in assigning values to variables in the later groups. Based on this guideline, and the constraint-based required ordering of assigning v2 before v3, assigning v4 before v3, and assigning v4 before v5, the ordered groups obtained for this example are: {v2,v4} followed by {v1,v3,v5}. Note that v1 is not involved in any required ordering, so it is placed in the last group.
Given a group G (e.g., group {v2,v4} in example 10.6), the next step is to identify the set of constraints that must be considered for the variables in this group. Any constraint that includes at least one reference to variables in this group, and otherwise depends only on constants and state variables², is included. Constraints that include variables that have not yet been processed are excluded from this set. In the previous example, only constraint 5 is considered for group {v2,v4}. The other constraints depend on variables that are not yet processed. Note that when processing group {v1,v3,v5}, the set of constraints will include constraints 1, 2, 3, and 4. Also note that when this second group is being processed, values for v2 and v4 have already been assigned, and as such, v2 and v4 are considered state variables when group {v1,v3,v5} is being processed.
The next step is to compute the reachable set for the set of variables in each group G of all groups identified for a randomization problem. Continuing with the results in example 10.6, the following two examples show this calculation for group {v2,v4}, solved along with its relevant constraint 5, and group {v1,v3,v5}, solved along with its relevant constraints 1, 2, 3, and 4.
Example 10.7: Assign random values to 2-bit integers v2, v4 subject to constraint:
5: v2 < v4
In this case RS(v2,v4)={(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)}.
Assume we select member (0,1), thereby setting v2=0 and v4=1. Note that a peek into the requirements for the next group shows that viable assignments to variables v1, v3, and v5 are only possible for v2,v4={(0,1),(0,2),(1,2)}. As such, during the processing of this group, selecting any member of set v2,v4={(0,3),(1,3),(2,3)} will lead to generation failure when processing the next group of random variables.
2. State variables, in the context of randomization constraints, refers to variables that have been assigned a value when the current random assignment iteration starts, and as such whose value can be considered a constant.
Example 10.8: Assign random values to 2-bit integers v1, v3, v5 subject to constraints:
1: v3 > 0
2: v3 > 1
3: v5 > 1
4: v1 < v3
This randomization problem is for the second group of variables, assuming that v2=0 and v4=1, and that function myfunc() returns the value of its argument.
In this case, RS(v1,v3,v5)={(0,2,2), (0,2,3), (0,3,2), (0,3,3), (1,2,2), (1,2,3), (1,3,2), (1,3,3), (2,3,2), (2,3,3)}.
The next step is to order the variables in each group according to assignment-based ordering requirements. Group {v2,v4} does not have any assignment-based ordering requirements. Group {v1,v3,v5} does, however, require that v3 is assigned a value before v1. This operation is shown in the following example:
Example 10.9: Assign random values to 2-bit integers v1, v3, v5 with the reachable set computed in example 10.8, and constraint:
solve v3 before v1
The variable assignment groups are identified as {v3} and then {v1,v5}, where v5 is placed in the last group in order to leave maximum flexibility in assigning values to later variables. Given that variable v3 is to be assigned first and RS(v1,v3,v5), then RS(v3)={2,3}. Variable v3 is assigned to a randomly selected member of RS(v3) before considering assignment for group {v1,v5}.
If v3=2, then RS(v1,v5)={(0,2),(0,3),(1,2),(1,3)}.
If v3=3, then RS(v1,v5)={(0,2),(0,3),(1,2),(1,3),(2,2),(2,3)}.
Once a value is assigned to v3 and RS(v1,v5) is computed, a member is selected randomly from RS(v1,v5) to simultaneously assign values to variables v1 and v5.
Without a constraint-based or assignment-based ordering requirement for the example solved in this section, the reachable set for all variables would be computed at the same time, and an entry selected from this set to simultaneously assign values to all random variables.
The examples in this section also highlight the fact that constraint-based ordering requirements can lead to the possibility of assigning values to v2 and v4 that could have resulted in generation failure. In addition, constraint-based and assignment-based orderings led to different random probability distributions for each variable in this example.
10.2 Randomization in SystemVerilog
In general, any integral variable in the scope of a module, program, or interface can be randomized by calling the system function randomize(). In the example above, variables scope_int_var and module_var are randomized by passing them as arguments to the system function randomize(). This function returns a 1 upon success and a 0 upon failure.
As mentioned earlier, randomization is best leveraged when random variables are packaged in a class object. Class rand_class (lines 3-11) contains variables that are marked as rand or randc. As shown in the class definition, variables of integral type can be randomized.
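The original listing for rand_class is not reproduced here; based on the surrounding description (a randc variable and a rand variable ranging over {0:3}), it presumably resembled this sketch:

```systemverilog
// Sketch of rand_class: one cycling (randc) and one ordinary (rand)
// 2-bit random variable; field names follow the surrounding text.
class rand_class;
  randc bit [1:0] randc_var;  // cycles through all values of {0:3}
  rand  bit [1:0] rand_var;   // may repeat values within {0:3}
endclass
```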
One possible set of values generated for this object for consecutive calls to randomize() is shown in the comment section of the above example. As shown, values generated for randc_var (line 3) are random, but every possible value for this variable is generated before a new iteration of values is started. Values generated for rand_var may be any of the values in the range of {0:3}; for example, value 1 is generated two times before value 3 is generated (line 2). The permutation sequence for a randc variable is recomputed when constraints on that variable change, or when none of the remaining values satisfies the constraints on that variable. The behavior of a randc variable is similar to that of dealing every card in a shuffled deck of cards before it is shuffled again. The behavior of a rand variable is similar to that of rolling a die, where on every roll, all possible values may appear. It should be noted that using a randc variable that has a large set of possible values makes the constraint problem more difficult to solve.
Variables marked with randc have a strict requirement to go through all possible values before a generated value can be repeated. Because of this requirement, such variables are always generated before variables marked with rand. Also, for dynamic arrays, the size of the array must be generated before array elements can be randomized. As such, the implicit variable that defines the size of a dynamic array is generated before the elements of that array. This requirement imposes an implicit constraint-based ordering between the size of a dynamic array and the elements of that dynamic array.
The following properties apply to random variables:
• Any integral variable in the scope of a module, program, or interface can be randomized by calling system function randomize().
• Random variables are best packaged in a class, since they can be grouped along with
their constraints and reused.
• Class members marked with randc are assigned values that cycle through all of the values in a random permutation of their declared range. Such variables can be of type bit (or packed bit array) or an enumerated type.
• Class members marked with rand are assigned a value (possibly repeating) from their
valid range, where this value has a uniform distribution over its valid range subject to
randomization constraints that must be met.
• An object pointer in a class can be marked with rand in which case all of this object
pointer's variables and constraints are solved and randomized concurrently with the
variables and constraints of the object that contains the pointer.
• A null object pointer in a class marked with rand is ignored by the randomization
engine.
• When randomizing a class object, an object pointer in that class not marked with
rand will not be randomized, even if it has members that are marked as rand or
randc.
• Variables marked as randc are assigned before variables marked as rand.
Random dynamic arrays introduce special constraint-based ordering and also require
special considerations when pointing to class objects. Random dynamic arrays and con-
straint-based random generation are discussed in the following sections.
3 class box;
4 rand box_type bt;
5 rand int unsigned width;
6 rand int unsigned height;
7 constraint valid_range;
8 constraint short_box { bt == short -> height < 10; }
9 constraint average_box { bt == average -> height inside {[30:70]}; }
10 constraint tall_box { bt == tall -> height > 90; }
11 endclass
12 constraint box::valid_range { width < 100 && height < 100; }
13
14 class short_box extends box;
15 constraint sb { bt == short; }
16 endclass
17
18 class short_and_wide_box extends short_box;
19 constraint swb { width > 90; }
20 constraint short_box { bt == short -> height < 20; }
21 endclass
22
23 initial begin
24 short_and_wide_box swb;
25 short_box sb;
26 swb = new();
27 sb = new();
28 assert(sb.randomize());
29 assert(sb.randomize() with { width == 95; });
30 assert(swb.randomize());
31 end
32 endmodule
The above example shows a hierarchy of constraints used to implement an object with properties in different ranges of values. Class box (lines 3-11) contains the generic description of a box and constraint blocks describing the valid ranges of its width and height properties. It also contains implication constraints constraining the height property depending on the box type (lines 8-10). A constraint block can be declared outside a class declaration. Constraint name valid_range is first declared as part of the class declaration (line 7) and its description is then specified outside the class declaration (line 12).
Class short_box is a class inherited from box and contains only an additional constraint
that the generated box type property should be constrained to short. Class short_and_wide_box
further extends this definition to include a constraint for its width property. It also overrides
the short_box constraint defined in its parent class in order to redefine the range for a short
box.
Random values generated for variable sb (line 28) will meet all constraint blocks on lines 8, 9, 10, 12, and 15. The randomize() function can be called with in-lined constraints (line 29). In this example, the random values generated for variable sb (line 29) will meet the additional constraint defined using the with keyword. Random values generated for variable swb (line 30) will meet all constraint blocks on lines 9, 10, 12, 15, 19, and 20.
Constraint-specific operators are discussed in the next section.
A constraint can be any SystemVerilog expression with integral variables and constants, or one of the constraint-specific operators. Constraint-specific operators are described in the following subsections.
In the above example, values 2 and 3 are specified two times in the set description. Each
value in the set (i.e., 1, 2, 3, 4), however, is assigned with equal probability.
Examples of this operator are shown below:
Non-random variables used in specifying the arguments to the inside operator are con-
sidered as state variables whose value at the time of randomization is used to solve the con-
straint.
The distribution operator cannot be used with variables marked with randc. Also, a dis-
tribution expression must contain at least one variable marked with rand.
The distribution operator is a bidirectional constraint and does not impose any ordering
on the random generation process.
An example of the distribution operator is shown below:
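The distribution listing itself did not survive extraction. A sketch matching the weights described next might look like this; the variable name v and state variable b are assumptions:

```systemverilog
class item;
  rand int v;
  int b;  // state variable used in the range expression
  // := assigns the given weight to each value in the range;
  // :/ divides the given weight across the values in the range
  constraint c_dist { v dist {1 := 1, 2 := 3, [4:5] := 5, [b:b+2] :/ 3}; }
endclass
```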
In the above example, value 1 has a weight of 1, value 2 has a weight of 3, values 4 and
5 each have a weight of 5, and values in the range b to b+2 each have a weight equal to 3
divided by the number of values in this range.
As such, the implication operator is a bidirectional operator that does not impose any
ordering on the random generation process.
In the above example, the class declaration of a packet contains implication constraints
deciding the packet size depending on the packet enumerated type of short, legal, or long. The
payload size is also constrained to be set by variable packet_length. This implementation
allows the packet type to be decided when the packet content is being randomized (line 16).
As such, the if-else operator is also a bidirectional operator that does not impose any
ordering on the random generation process.
The if-else operator provides a more structured form for writing long constraint sets and
also provides a more efficient description by relating constraint sets to the value of a single
boolean expression.
The following example shows the packet declaration in the previous example rewritten
using an if-else constraint.
9 else if (pt == legal)
10 packet_length inside {[64:1024]};
11 else if (pt == long)
12 packet_length > 1024;
13 }
14 endclass
The drawback with the above approach, however, is that individual constraints for short,
legal, and long packets cannot be disabled as can be done when separate constraint blocks are
given for each constraint.
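A sketch of that separate-block alternative is shown below; the enum type name and constraint-block names are assumptions, but the category names follow the text. Each named block can be turned off individually with constraint_mode():

```systemverilog
class packet;
  typedef enum {short, legal, long} pkt_type;  // type name assumed
  rand pkt_type pt;
  rand int unsigned packet_length;
  // one named block per packet category, so each can be
  // disabled individually via constraint_mode()
  constraint c_short { (pt == short) -> packet_length < 64; }
  constraint c_legal { (pt == legal) -> packet_length inside {[64:1024]}; }
  constraint c_long  { (pt == long)  -> packet_length > 1024; }
endclass
```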
The above example shows the relationship between index variables (i.e., i, j, k, l, m) and
the dimensions of the array variable.
The following properties apply to iterative constraints:
• The number of loop variables must not exceed the number of array dimensions.
• The scope of each loop variable is the context of the foreach construct (line 6).
• The type of each loop variable is implicitly declared to be consistent with the type of
array index.
• A loop variable can be skipped by leaving its place in the order of loop variables
blank (hence two commas appearing back to back), in which case, that dimension is
not iterated over.
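The skipped-variable rule can be sketched as follows; the array shape and names are assumptions for illustration:

```systemverilog
class arr_demo;
  rand bit [7:0] m [4][4];  // a two-dimensional array (shape assumed)
  // the first loop-variable position is left blank (note the leading comma),
  // so only the second dimension is iterated over
  constraint c { foreach (m[, j]) m[0][j] < 10; }
endclass
```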
The size of an array can be used in iterative constraints. This usage is shown in the follow-
ing example:
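The listing for this example did not survive extraction. A sketch consistent with the surrounding discussion (a size constraint plus a constraint c2 that reads the array size) might look like this; the names are assumptions and the original line numbering is omitted:

```systemverilog
class dyn_arr;
  rand byte unsigned data [];
  // the size constraint is solved first (implicit constraint-based ordering) ...
  constraint c1 { data.size() inside {[1:10]}; }
  // ... so data.size() acts as a fixed state value when c2 is solved
  constraint c2 { foreach (data[i]) data[i] < data.size(); }
endclass
```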
Recall that the use of a constraint on the size of a dynamic array causes an implicit con-
straint-based variable ordering where the size of the array is decided before any of the con-
straints on its members are solved. As such, when solving constraint c2 (line 5), the size of
the array is used as a state variable and is assumed to be a fixed value.
Iterative constraints provide a shorthand notation for describing relationships between
array members and other random variables. As such, they do not impose any ordering on the
random generation process unless the constraint specified in the foreach constraint block
imposes such an ordering (i.e., by using a function call in specifying the constraint) or uses the
array size predicate as described in the previous paragraph.
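The listing for the global-constraint example discussed next did not survive extraction. A hypothetical reconstruction preserving only the structure described in the text (two contained objects, rand pointers, a constraint relating their length fields) is sketched below; the names differ from the original and the original line numbers are omitted:

```systemverilog
class item;
  rand int unsigned length;
endclass

class container;
  rand item i1;  // rand pointers: these objects are pulled into
  rand item i2;  // the same concurrent solve
  constraint c1 { i1.length < i2.length; }  // a global constraint
  function new(); i1 = new; i2 = new; endfunction
endclass

// container c = new;
// assert(c.randomize());  // solves variables of both contained objects together
```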
In this example, constraint block c1 (line 12) describes a relationship between the length
parameter of two objects contained inside its class. As such, this constraint is a global con-
straint.
The call to randomize() (line 22) solves the constraints for all variables in all involved
objects concurrently. The set of variables to randomize and applicable constraints are derived
as follows:
• First, all objects that are to be randomized are found. The set of objects starts with the
object whose randomize() method is called and any object with a rand pointer
declared inside that object (lines 10, 11). This definition is recursive in that any rand
pointers inside the newly identified objects are also added to the set.
• All variables marked with rand or randc inside the identified objects are added to the
list of variables to be randomized. Any variable that is disabled using the rand_mode()
function (section 10.5.2) is removed from the set of variables to be randomized.
• All constraints in the identified objects that relate to the identified variables to be ran-
domized are included in the set of constraints to be solved. Any constraints specified
along with the randomize() function are added to this list, and any constraints dis-
abled using the constraint_mode() function (section 10.5.1) are removed from this list.
Once the set of all random variables and relevant constraints are identified, all con-
straints are solved concurrently, subject to any implicit/explicit constraint-based or assign-
ment-based orderings.
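The listing for the ordering example discussed next did not survive extraction. A hypothetical reconstruction consistent with the discussion (constraint c2, "line 5" in the text, adds the explicit ordering) is sketched below:

```systemverilog
class ord;
  rand bit        v1;
  rand bit [31:0] v2;
  // without c2, (v1,v2) pairs are drawn uniformly from the reachable set,
  // so v1 == 0 (only the pair (0,0)) is selected almost never
  constraint c1 { (v1 == 0) -> (v2 == 0); }
  constraint c2 { solve v1 before v2; }  // assign v1 first, then v2
endclass
```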
In the above example, the reachable set for variables v1 and v2 is given by the set {(0,0),
(1,0), ..., (1,2^32-1)}. If both v1 and v2 are assigned a member from this set with equal probability,
then the probability of setting v1 to 0 is 1/(2^32+1), which means this setting will almost never
occur. By adding constraint c2 (line 5), variable v1 is first assigned a value from the possible
values in the reachable set of (v1,v2). This means that both values 0 and 1 will be assigned to
v1 with equal probability. After v1 is assigned, v2 is assigned a value from the newly computed
reachable set for v2. This means that if v1 is set to 0, then the reachable set for v2 is {0}, and if
v1 is assigned a 1, then the reachable set for v2 is {0,1,...,2^32-1}. In either case, a value is
selected from the reachable set for v2 with equal probability.
The following properties apply to ordering constraints:
• Only variables marked with rand can be used in an ordering constraint.
• Variables marked with randc are not allowed in ordering constraints. As mentioned
earlier, these variables are always assigned before any variable marked with rand.
• Explicit ordering constraints, along with other implicit ordering constraints, should
not lead to circular dependencies (e.g., solve a before b, solve b before c, solve c
before a is not allowed).
Constraint Guards
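The listing for the guarded-constraint example discussed next did not survive extraction. A sketch consistent with the guard expressions described in the text is shown below; the field and array names follow the text, but the exact indexing convention of the original is an assumption:

```systemverilog
class payload;
  rand int unsigned length;
  rand int unsigned tlength;  // running total up to and including this payload
endclass

class packet;
  int length;                 // number of active payloads (not a random variable)
  rand payload pl [];
  rand int unsigned tlength;  // total length of all active payloads
  // guard expressions use only loop variables, constants, and state variables
  constraint c1 { foreach (pl[i]) if (i == 0) pl[i].tlength == pl[i].length; }
  constraint c2 { foreach (pl[i]) if (i > 0)
                    pl[i].tlength == pl[i].length + pl[i-1].tlength; }
  constraint c3 { foreach (pl[i]) if (i == length) tlength == pl[i].tlength; }
  constraint c4 { foreach (pl[i]) if (i > length) pl[i].length == 0; }
endclass
```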
This example shows the implementation of class packet composed of an array of payload
objects. Field length of payload specifies the size of its data payload, and field tlength of pay-
load gives the total length of all payloads in its containing packet before and including this
payload. Field length of packet specifies the number of payloads that are contained in packet
(active payloads), and the field tlength of packet gives the total length of all active payloads.
Note that field length of class packet is not a random variable.
Constraint blocks in the packet class declaration (lines 14-17) provide the relationships that
must be maintained between class object fields during randomization. Constraint c1 specifies
that for the first payload in dynamic array pl, the field tlength should be equal to field length.
Constraint block c2 specifies that for payloads after the first payload, the tlength field is com-
puted by adding the length field for that payload and the tlength field for the previous pay-
load. Constraint c3 specifies that the tlength field of packet should be set to the tlength field of
the last active payload. Constraint c4 specifies that the payload size for all inactive payloads
should be set to 0.
Each of these constraints contains a guard expression. For constraints c1, c2, c3, and c4, the
guard expressions are (i==0), (i>0), (i==length), and (i>length), respectively. Note that these guard
expressions are composed of either loop variables, constants, or state variables (i.e., length).
Controlling Constrained Randomization
The generic model of a random object, as described in its class declaration, is not applicable
to all randomization uses of that object. The randomization utility should provide constructs
that allow for modification of the default randomization assumptions as defined in a class
declaration. These constructs should handle the following requirements:
• It may be necessary to disable or enable a constraint block for a specific use of a ran-
domized object.
• It may be necessary to prevent the randomization of a variable already marked with
rand or randc.
• Building a hierarchical random object requires that the object hierarchy be prepared
for randomization before randomization starts, and final steps carried out after ran-
domization takes place. Hook methods should be provided for programming these
steps as part of randomization.
These topics are discussed in the following subsections.
The constraint_mode() task is used to change the active state of a constraint block. The
constraint_mode() function is used to query the current status of a constraint block. The use
of these methods is shown in the following example:
20 result = b1.valid_area.constraint_mode(); // result = 0;
21 result = b2.valid_area.constraint_mode(); // result = 0;
22 b1.short_box.constraint_mode(0); // turn off short_box in b1
23 result = b1.short_box.constraint_mode(); // result = 0;
24 result = b2.short_box.constraint_mode(); // result = 1;
25 b1.constraint_mode(1); // turn on all constraints in b1
26 result = b1.short_box.constraint_mode(); // result = 1;
27 result = b1.valid_area.constraint_mode(); // result = 1;
28 result = b2.valid_area.constraint_mode(); // result = 1;
29 end
30 endmodule
The initial active state of constraint block valid_area for both fields b1 and b2 of the box
object is printed on lines 17 and 18, respectively. Note that valid_area is marked as a static
constraint. The active state of this constraint block is set to off for field b1 on line 19. This
change will affect not only instance b1, but also instance b2, as shown on lines 20 and 21. On
line 22, constraint block short_box of b1 is set to off. This change, however, does not affect
instance b2 since this constraint block is not marked as static. All constraint blocks of
instance b1 are set to the active state on line 25. Note that this change also affects constraint
block valid_area, which is a static constraint. Therefore the statement on line 25 also affects
instance b2. This is shown on lines 26-28.
Task rand_mode() is used to change the active state of a random variable. Function
rand_mode() is used to query the current status of a random variable. The following proper-
ties apply to this usage:
• For unpacked array variables, a single array element or the entire array can be used
with these methods. Using an index limits the method to a single element of the array.
Omitting the index applies the method to all elements of the array.
• For unpacked structure variables, individual members of the structure can be used
with these methods. Specifying a member limits the method to that member. Omit-
ting any member name applies the method to all elements of the structure.
• A compiler error will be generated if the specified variable is not marked with either
rand or randc.
• The function form of rand_mode() does not accept arrays. As such, if the random
variable is an unpacked array, then a member of that array should be specified when
using this function.
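These properties can be sketched as follows; the class, field, and array names are assumptions for illustration:

```systemverilog
module top;
  class box;
    rand bit [7:0] arr [4];
    rand bit [7:0] w;
  endclass
  box b = new;
  initial begin
    int result;
    b.w.rand_mode(0);               // disable randomization of scalar w
    b.arr[0].rand_mode(0);          // an index limits the task to one element
    b.arr.rand_mode(1);             // omitting the index applies to all elements
    result = b.arr[2].rand_mode();  // function form requires a member for arrays
  end
endmodule
```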
Consider the following initial block in combination with the object declaration in pro-
gram example 10.15:
The initial active states for random variables bt and et are printed on lines 6 and 7. The
statement on line 8 deactivates all random variables in instance b1. Lines 9 and 10 show that
the active states for bt and et are changed to off. The statement on line 11 activates random
variable bt. The result of this activation is shown on lines 12 and 13 to affect only random
variable bt.
The following pseudo-codes describe two functions that are called before the random-
ization is started, and after randomization is completed:
Note that these functions are only meant to provide an overview of the order of execu-
tion for hierarchical objects; the actual implementation will be different from what is shown
above. As shown in the pseudo-code, the pre_randomize() and post_randomize() functions
are called in a depth-first order where the function for each object is called before its parent's
function is called.
A typical use of pre_randomize() is in preparing an object containing a dynamic array
of class objects for randomization. In this case, randomizing each array member requires that
the array position already points at an object (section 10.2.2). The randomization problem
with this setup is that each member of such a dynamic array should be allocated explicitly
before it can be randomized. This creates a conflict, since the size of the array is not known
before randomization and, therefore, it is not clear how to allocate the dynamic array before
randomization starts. The strategy is to assume a maximum size for the dynamic array, ini-
tialize the dynamic array size to this maximum size in function pre_randomize(), and then
constrain the size of the dynamic array to a randomly generated size that is less than the maxi-
mum size, so that after the array is resized during randomization, all its members point to
allocated objects that will be randomized.
A typical use of the post_randomize() function is in computing packet properties based on
the randomly generated values. A checksum is one such field that must be computed
after randomization is completed.
The following example shows the implementation of these behaviors using the random-
ization methods.
Program 10.19: Example of using pre_randomize() and post_randomize()
1 class payload; bit pl; endclass
2 class packet;
3 rand payload packet_payload [];
4 rand byte size;
5 int max_size = 100;
6 byte crc;
7 constraint c0 {size > 0 && size <= max_size;}
8 constraint c1 {packet_payload.size() == size;}
9
10 function void pre_randomize();
11 packet_payload = new[max_size];
12 for (int i = 0; i < max_size; i++) packet_payload[i] = new;
13 endfunction
14
15 function void post_randomize();
16 crc = 0;
Random Stability
One of the most important requirements for a randomized program is that it should predict-
ably generate the same sequence of random values across multiple runs of the same program
using the same random seed value. If this property is not maintained, then such a pro-
gram becomes impossible to debug, and impossible to quantify as a contributor to the overall
verification progress. In addition to this requirement, it is also desirable that incremental
changes in a program do not drastically modify its random behavior. Random stability refers
to this property of a randomized program and the steps taken to minimize variations in program
behavior because of localized changes to the program.
Execution of a SystemVerilog program consists of concurrently running threads of exe-
cution, operating on multiple instances of class objects. SystemVerilog takes advantage of
this architecture to minimize the effects of incremental program changes on the random
behavior of the program by localizing random generation to each thread of execution and
each instance of a class object. In this approach, the sequence of generated random numbers
in each thread or each object is controlled by only its initial seed value. This means that the
creation of new threads or the instantiation of new objects has minimal effect on the random behav-
ior of previously existing threads and instances. In SystemVerilog, Random Stability refers
to this property.
Each time a new thread or object is created, a new random number generator context is
created and assigned to the newly created thread or object. The seed for this new randomiza-
tion context is set from the next available random number from the thread that created that
new thread or object. Given this approach, random stability in SystemVerilog is discussed in
terms of the following properties:
• Thread stability
• Object stability
Thread stability is achieved by creating a new random generation context for each
newly created thread. This behavior is shown in the following example:
3 initial begin
4 int x, y, z;
5 //$display ($urandom);
6 fork
7 begin $display ($urandom,,$urandom); end
8 begin $display ($urandom,,$urandom); end
9 begin $display ($urandom,,$urandom); end
10 begin $display ($urandom,,$urandom); end
11 begin $display ($urandom,,$urandom); end
12 join
13 end
14 endmodule
In the above example, the initial block is assigned a new random generation context
when it is created. Also, each branch of the fork statement is assigned a new random genera-
tion context when that thread is created, where the seed for each new thread is taken from the
next available random number of its parent thread (i.e., the thread for the initial block).
Note that the seed assigned to each sub-thread of the fork is independent of the order of exe-
cution of these sub-threads. This is because the seeds depend only on the order of creating
these threads, and the order of thread creation follows program order. Because of the
properties described, the above program example generates the same random values for
every execution of this program.
Random stability of the above program segment can be affected in two ways. First, if the
display statement on line 5 is un-commented, then the seed for each thread started by the fork
statement becomes different and, as such, the program will generate a completely different set
of values for each of the display statements. Second, if the initial block on line 2 is un-com-
mented, then the seed value assigned to the initial block on line 3 will become different,
therefore causing the program to display completely different random values.
The solution to the above problems is to self-seed each thread if the values generated
by that thread must remain the same when the thread is moved or the code leading to the
start of a new thread is modified. This modification is shown in the following program seg-
ment.
In the above program segment, the initial block on line 3 is self-seeded on line 4, and
each thread started by the fork statement is also self-seeded with the first statement of that
thread. In this new program, un-commenting lines 2 and 5 will not change the values produced
by the program. As such, this modified program using self-seeded threads exhibits full
thread stability.
Object stability is achieved by creating a new random generation context for each
newly created object. This behavior is shown in the following example:
In the above program, the initial block on line 6 is assigned a seed value when it is cre-
ated. In addition, object p is assigned a new random generation context and initialized with a
seed. This seed is the next available random number in the context of the initial block. The
random value assigned to p.payload is dependent on the seed assigned to the context of object
p.
The above program will generate the same value for p.payload for consecutive runs of
the same program. Object stability of this program will, however, be disturbed if either line
6 or line 9 is un-commented. Un-commenting line 6 will change the seed value set for the initial
block, thereby changing the seed value for object p. Un-commenting line 9 will result in a
different seed value for object p. In both cases, a different value will be generated for p.pay-
load.
The solution to the above problem is to self-seed the object when it is created. The
updated program is shown below.
The above program will produce the same result for p.payload even if lines 9 or 12 are
un-commented.
It is not always practical or desirable to self-seed every thread or every object. A good
policy to follow is to add new threads to the end of a program so that seeds assigned to the
existing threads do not change. This practice can significantly contribute to random stability.
SystemVerilog provides system functions and tasks for random number generation and con-
trol. These functions are described in the following subsections.
10.7.1 $urandom
The prototype for this function is:
function int unsigned $urandom [(int seed)];
This function returns a new 32-bit unsigned random number every time it is called. The
seed argument is optional and determines the sequence of generated random numbers. This
function returns the same sequence of numbers when the same seed is used.
The $urandom function is different from the $random system function in that $uran-
dom returns unsigned integers and it is automatically thread stable.
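A short usage sketch (the seed value 254 is an arbitrary illustration):

```systemverilog
module top;
  initial begin
    int unsigned a, b, c;
    a = $urandom;       // next random value from this thread's generator
    b = $urandom(254);  // passing a seed also re-seeds this thread's generator
    c = $urandom;       // the sequence from here on is determined by seed 254
  end
endmodule
```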
10.7.2 $urandom_range()
The prototype for this function is:
function int unsigned $urandom_range(int unsigned maxval, int unsigned minval = 0);
This function returns a new 32-bit unsigned random number within the specified range
of minval and maxval every time it is called. The minval argument can be omitted and defaults
to 0. The maxval argument can be less than minval, in which case the function automatically
reverses the two arguments. The $urandom_range() function is automatically thread stable.
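A short usage sketch of the forms described above:

```systemverilog
module top;
  initial begin
    int unsigned v;
    v = $urandom_range(7);      // a value in 0..7 (minval defaults to 0)
    v = $urandom_range(15, 8);  // a value in 8..15
    v = $urandom_range(8, 15);  // arguments reversed automatically: also 8..15
  end
endmodule
```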
10.7.3 srandom()
The prototype for this function is:
function void srandom(int seed);
This function initializes an object or thread's random number generator with the pro-
vided seed value. The following example shows the use of this function for a thread and for
an object.
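The listing for this example did not survive extraction. A hypothetical reconstruction consistent with the surrounding description is sketched below; the names are assumptions, and the comments map each statement to the line numbers cited in the text:

```systemverilog
module top;
  class packet;
    rand bit [7:0] payload;
  endclass
  packet p = new;
  initial begin
    int unsigned local_rand;
    process::self().srandom(100);  // "line 13": seed this thread's generator
    local_rand = $urandom;         // "line 14": affected by the seed above
    p.srandom(200);                // "line 15": seed object p's generator
    assert(p.randomize());         // "line 16": affected by p's seed
  end
endmodule
```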
In this example, the seed for the random number generator of the initial thread is initialized
on line 13. This seed affects random values that are generated in the context of this thread.
The randomization of local scope variable local_rand (line 14) is affected by the seed set on
line 13. The statement on line 15 initializes the seed for the random number generator in the
packet object. The randomization of packet p (line 16) is affected by the seed set on line 15.
266 Constrained Random Generation
CHAPTER 11 Data Modeling
Modern verification environments deal with data at different layers of abstraction. At the
lowest level, bits and bytes are assigned to and read from physical wires in the environment. At
the highest level, objects encapsulating bundled values are routinely generated, manipulated,
and discarded throughout the environment. Dealing with data at the transaction level leads to a
more productive and modular program, and also allows operations related to a data
type to be packaged with its abstraction, thereby reducing the low-level complexity that has
to be managed.
Data can be created, duplicated, and discarded during simulation runtime. And it can, as
it usually does, travel from object to object. These common behaviors across data objects
suggest that a structured approach for creating and managing complex data objects can
improve productivity and reduce the potential for mistakes in data manipulation.
Data in a verification environment can be put into two broad categories:
• Composite data objects
• Command and status (or actions and reactions)
Composite data objects represent a collection of interrelated data values that usually
represent objects in a standard or proprietary protocol (e.g., an Ethernet packet). Commands
are used to instruct a module to perform a certain activity (e.g., instructing a memory driver
to read a memory location). Status is used to provide feedback to the object issuing a com-
mand. Note that, as with data packets, commands and status are generated, used, discarded,
and must travel (e.g., a command traveling from a sequencer to a driver, or status moving from
the monitor where it is observed to the scoreboard). As such, it is natural to model commands
and status similar to data that represents packets and signal values.
Ultimately, all abstract data types (i.e., command, status, or data objects) that must
travel through a DUV at the register transfer level must be transformed into their physical forms.
For example, a data object representing an Ethernet packet should be translated into a bit
stream to be sent into a DUV port, or a bit stream collected from an Ethernet port must be
translated into an abstract representation of an Ethernet packet. Packing and unpacking tech-
niques are used to control translation between abstract and physical forms.
This chapter introduces a general view of data modeling and discusses issues related to
data generation and its related manipulations.
9 initial begin
10 my_packet mp;
11 mp = new;
12 for (int i = 0; i < 10; i++)
13 assert(mp.randomize() with {field1 == 10;});
14 end
15 endmodule
In the above example, the constraint solver assigns values to field2 and field3 in order to
satisfy the constraints specified on lines 6 and 13.
11.2 Data Model Fields and Constraints
The abstraction implemented in a data model is achieved through its set of fields (i.e., prop-
erties) and constraints describing relationships between these fields. These topics are
described in the following subsections.
Figure 11.1 shows the structure of an Ethernet packet. In keeping with the general
guidelines of hiding data model details, a model of an Ethernet packet should include the fol-
lowing features and utility methods:
• Physical fields for:
• Source address
• Destination address
• Size
• Payload
• crc
• Virtual fields used during generation:
• Generate short, valid size, or long packet
• Generate valid or invalid crc
• Payload size to be generated (derived from valid)
• Virtual fields set during packet collection from DUT, indicating:
• If the collected packet was short, valid size, or long
• If the collected packet had a valid or invalid crc when it was collected
• Whether a collision was detected when receiving this packet
• Payload size of the collected packet
• Utility methods to:
• Unpack a bit array into the packet structure
• Pack the physical fields and return a bit array
• Methods to copy, print, compare, record, and clone packets
• Random Generation facilities to allow for randomly setting the packet fields
Note that a data object is either generated and transmitted or is received from the DUV,
and as such, it is common practice to use the same virtual fields for both generation and col-
lection (i.e., use valid_crc to indicate whether a valid crc should be generated during generation,
and set the same field to true or false when the packet is received from the DUV).
The following program shows a typical implementation for the payload portion of an
Ethernet data packet, showing only the fields.
The physical fields in this implementation are tag_info and data. The virtual fields are
legal_size, ltype, and data_size. These three virtual fields are included in order to provide the
ability to control the size of the generated payload at different levels of granularity. Field
legal_size is used to specify whether the payload size should be legal, field ltype is used to
specify the payload size in general terms as indicated by the enum on line 2, and field data_size is
used to specify the exact size of the payload that should be generated.
The real challenge in implementing this data model is that all virtual and physical fields
must be set to their appropriate values when any of the other virtual or physical fields are
constrained using an in-line constraint while the object is being randomized. For example, if
data_size is set to 20 as a constraint when this data model is generated, then legal_size should
be set to FALSE, and ltype should be set to TOO_SHORT by the random generator. Alterna-
tively, if legal_size is constrained to FALSE during generation, then ltype should be assigned
one of TOO_SHORT or TOO_LONG by the random generator, and data_size should match the
range defined by the setting of ltype. This goal is achieved through appropriate inclusion of
constraints in this data model. Data model constraints are described in the following section.
Program 11.3: Using randomization constraints for computing data model fields
1 typedef enum bit {FALSE, TRUE} bool;
2 typedef enum {TOO_SHORT, SHORT, MEDIUM, LONG, TOO_LONG} eth_length_t;
3
4 class ethernet_frame_sized_payload;
5 rand bool legal_size;
In this example, A and B are Boolean conditions, and constraints impl_constr and
equiv_constr represent the same constraint between A and B. As such, constraint block c1 in
the above example requires that if virtual field legal_size is set to TRUE, then ltype should be
assigned either SHORT, MEDIUM, or LONG, and vice versa (this is a bidirectional constraint).
It also requires that if virtual field legal_size is set to FALSE, then ltype is not set to SHORT,
MEDIUM, or LONG, which means ltype should be set to either TOO_SHORT or TOO_LONG. Note
that because an equivalence constraint is used, the value assigned to legal_size will always
reflect the value assigned to ltype and vice versa. As such, if ltype is constrained during ran-
domization using an in-line constraint, then legal_size is automatically set to a value reflect-
ing the constrained value of ltype, and if legal_size is constrained during
randomization, then ltype is automatically set to a value reflecting the con-
strained value of legal_size. Constraints c4-c8 specify valid ranges for each setting of ltype.
Note that, again, if data_size is constrained using an in-line constraint during randomization,
then these constraint blocks will result in ltype being assigned a value appropriate to the set-
ting of data_size, which in combination with constraint block c1 will set the appropriate value
for field legal_size.
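The equivalence form discussed above can be sketched as follows; the field names follow program 11.3, but this condensed class is an illustration, not the book's code:

```systemverilog
class eq_sketch;
  typedef enum bit {FALSE, TRUE} bool;
  typedef enum {TOO_SHORT, SHORT, MEDIUM, LONG, TOO_LONG} eth_length_t;
  rand bool legal_size;
  rand eth_length_t ltype;
  // the equivalence operator (<->) makes the relation bidirectional:
  // constraining either field during randomization forces the other to follow
  constraint c1 { (legal_size == TRUE) <-> (ltype inside {SHORT, MEDIUM, LONG}); }
endclass
```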
In summary, the provided set of constraints will result in a behavior where any of the
virtual fields or physical fields can be constrained using an in-line constraint during random-
ization and all other data fields will be assigned appropriately. This is shown in the following
example:
Program 11.4: Data model randomization constraints based on any of its fields
1 module top();
2 `include "program_11.3"
3
4 ethernet_frame_sized_payload efsp;
5
6 initial begin
7 efsp = new();
8 assert(efsp.randomize() with {legal_size == FALSE;});
Hierarchical Models
This example shows a hierarchical implementation where the Ethernet frame contains
the payload class defined in program 11.3. A hierarchical constraint block is used to specify
the relationship between the payload object and the class being defined here. Note that all
hierarchical constraints in this example are specified in the parent object. The reason for this
guideline is that class ethernet_frame_sized_payload may potentially be used as a stand-alone
object as shown in example 11.4, but class ethernet_sized_frame defined in this example can-
not be used without the payload object contained in it. The approach of placing all hierarchi-
cal constraints in the parent object allows the implementation of the lower-level object to
remain independent of the parent object.
A constraint guard should be used when defining hierarchical constraints. In this exam-
ple, a constraint guard is not used since special code is put in place (line 28) to make sure that
sized_payload is never null when randomization is taking place.
In this example, crc is computed using a function call in a constraint block (line 22). The
reason field legal_size is passed to this function is that crc should be calculated only when all
other fields are assigned random values. Using a function call with legal_size as an argument
guarantees that crc is assigned only after legal_size is assigned a value, which requires that all
other fields also be assigned values. An alternative approach would be to assign crc in the
post_randomize() function of this class.
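The function-call ordering described above can be sketched as follows; calc_crc, its placeholder body, and the surrounding fields are assumptions based on the text, not the book's actual code:

```systemverilog
class frame_sketch;
  rand bit  legal_size;
  rand byte crc;

  function byte calc_crc(bit ls);
    return 8'h00;  // placeholder body; a real model would compute the crc here
  endfunction

  // a function call in a constraint imposes ordering: legal_size (and hence
  // every field it depends on) is solved before crc is assigned
  constraint c_crc { crc == calc_crc(legal_size); }
endclass
```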
1. A language supporting multiple inheritance allows a derived class to inherit properties and methods
from more than one parent class.
• Class members defined using tagged unions cannot participate in constraint specifi-
cations.
The lack of support for multiple inheritance implies that if data subtypes are modeled as
derived classes of a parent class, then properties and methods that are present in most but not
all subtypes have to be explicitly and separately implemented for each subtype that includes
them. This is not a scalable approach, since a change in any of these properties or methods
means that all relevant subtypes have to be modified to reflect the change.
The inability to choose randomly between derived classes of a parent class implies that
if data subtypes are modeled as derived classes of a parent class, then creating a data object
requires advance knowledge of its subtype. Since randomization occurs after object creation,
the data subtype is already fixed when data randomization takes place. This behavior leads to
multiple drawbacks in using derived classes for modeling data subtypes:
• If data object subtypes must be decided before object creation and randomization,
then they cannot be decided based on randomly generated values for other fields of
that data object.
• When creating a data object, its subtype must be explicitly decided even if the data
subtype is not a focus of verification.
• Subtype randomization cannot be packaged inside a class object. The reason is that
the subtype for a data object is already decided by the time randomization is taking
place, and therefore, any special requirements for generating different subtypes with
different probabilities should be implemented outside the class describing the data
model.
Given the limitations of using derived classes in modeling data subtypes, it is best to
avoid using derived classes and use one class to represent all subtypes. In this case, subtypes
can be modeled using either a flat or a hierarchical approach. In a flat approach, all fields for
all subtypes are explicitly included in the class declaration of the base model. In a hierarchi-
cal approach, fields specific to each subtype are modeled with an independent class declara-
tion, which is then instantiated inside the class declaration of the base model. A hierarchical
approach leads to a modular and structured model but it has the disadvantage that constraint
guards should be used to deal with potentially null object pointers. A flat modeling approach
has the advantage of not needing constraint guards but tends to become difficult to manage
for anything but the simplest models.
The following guidelines should be followed when modeling a sub-typed data model:
• Use one class definition to represent all subtypes of a data model.
• Include a virtual field having an enumerated type indicating the subtype.
• Define constraints that dictate which fields are affected by the subtype field.
• Implement subtype fields either as a flat or hierarchical model.
• Do not use tagged unions in data models.
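The guidelines above can be sketched as follows. All names here are illustrative (this is not the book's Ethernet model), and the constraints are placeholders; the point is the enumerated subtype field, the hierarchical subtype classes, and the null-handle constraint guards.

```systemverilog
typedef enum {SIZED, QTAGGED} frame_subtype_t;

// Subtype-specific fields, each packaged in its own class (hierarchical style).
class sized_fields;   rand bit [15:0] size;     endclass
class qtagged_fields; rand bit [15:0] vlan_tag; endclass

class frame;
  rand frame_subtype_t subtype;    // enumerated field selecting the subtype
  rand sized_fields    sized_f;    // fields used only by the SIZED subtype
  rand qtagged_fields  qtagged_f;  // fields used only by the QTAGGED subtype

  // Constraint guards (the != null terms) protect against null handles.
  constraint c_subtype {
    (subtype == SIZED   && sized_f   != null) -> sized_f.size inside {[46:1500]};
    (subtype == QTAGGED && qtagged_f != null) -> qtagged_f.vlan_tag != 0;
  }

  function new();
    sized_f   = new();
    qtagged_f = new();
  endfunction
endclass
```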
Figure 11.2 shows two subtype definitions for an Ethernet packet. The class declaration for
ethernet_frame_sized_payload is shown in program 11.3. The following program shows the
implementation for ethernet_frame_qtagged_payload. The implementation of the QTAGGED
Ethernet packet subtype payload follows the same guidelines as the one described for the
SIZED Ethernet packet subtype.
Given the class declarations for SIZED and QTAGGED Ethernet packet subtype payloads
shown in programs 11.3 and 11.6, respectively, the hierarchical implementation of an Ethernet
packet is shown below.
Only the beginning part of the implementation is shown in this example. Enumerated
type eth_pkt_gen_t is defined as interesting combinations of packet length and crc value.
Constraint blocks C1 and C2 specify the relationship between coordinated range parameter
pg_type and values for ltype and legal_crc. Note that if pg_type is not constrained using an
in-line constraint during randomization, then its value will be assigned according to randomly
generated values for ltype and legal_crc, and as such can be used as a field to quickly
identify the specific coordinated range that was generated during randomization. Also note
that if the combination of ltype and legal_crc does not correspond to those in constraint blocks
C1 and C2, then pg_type will be assigned OTHER, since assigning it any other value would
violate constraint blocks C1 and C2, and OTHER is the only remaining possible value.
4 endclass
5
6 ethernet_frame e;
7
8 initial begin
9 e = new;
10 assert(e.randomize());
11 e.default_setting.constraint_mode(0); // turn off default_setting constraint
12 assert(e.randomize() with {
13 legal_crc == FALSE;
14 ptype == SIZED;
15 });
16 end
17 endmodule
This example shows the Ethernet frame model with an additional constraint block (line
3) which is used to indicate that, in the absence of any other constraints, short packets with good
crc should be generated. This default behavior can, however, be disabled using a
constraint_mode() function call (line 11) and the default behavior changed (lines 12-15).
The modeling of data and commands as objects is the fundamental requirement for
transaction level modeling of activity in a verification environment (chapter 9).
PART 5
Environment Implementation
and Scenario Generation
CHAPTER 12 Module-Based VE
Implementation
Figure 12.1 D-FF Verification Environment (an agent containing a sequencer, driver, and monitor, connected to the D-FF DUV and its clock)
Consider the D-FF and its verification environment shown in figure 12.1. The imple-
mentation of this DUV is shown in the following program.
Program 12.1: Implementation of a D-FF and its interface wrappers
1 module dff(input byte unsigned D, input bit clk, output byte unsigned Q);
2 always @(posedge clk)
3 Q <= D;
4 endmodule
5
6 interface dff_if(input bit clk);
7 byte unsigned D;
8 byte unsigned Q;
9 endinterface
10
11 module dff_wrapper(dff_if phy_if, input bit clk);
12 dff dffi(phy_if.D, clk, phy_if.Q);
13 endmodule
The first step in building the verification environment for this component is to define an
interface block and a wrapper that allows other components to interact with this DUV
by using the interface block. In the context of a verification environment, this interface block
is considered the physical interface.
The verification environment architecture for this design (figure 12.1) consists of a
sequencer, a driver, and a monitor that simply prints the values collected from the physical
interface.
Module-based implementation of this verification environment is shown in the follow-
ing program. It should be emphasized that this implementation does not strictly follow the
guidelines for implementing a verification environment (e.g., using ports to communicate
between drivers and sequencer components, etc.). The focus in this example is to highlight
the structure of a module-based implementation of a verification environment.
48 dff_module_based_test dff_test();
49 agent ve_agent(dff_if);
50
51 always #5 clk = ~clk;
52 endmodule
Figure 12.2 shows the interface view of the XBar crossbar switch. This design consists of
four receive ports and four transmit ports.
Figure 12.2 XBar Interface View (receive side: data[3:0], addr[3:0], inpv, inpa; transmit side: data[3:0], addr[3:0], outv, outr)
The receive port protocol is as follows: the device samples inputs and updates the output
on the negative edge of the clock. Inputs are applied to the device at the positive edge of the
clock. If inpv is not set (the current input is not valid), then a new input can be applied at any time
(inpv should be set when data is applied). If inpv is set, then data at the receive port can be
changed only when inpa is set to 1 (meaning the device has read the input signals data and addr).
The transmit port protocol is as follows: the device transfers data from a receive port to
the transmit port indicated on the addr line of the receive port. It sets the transmit port signal
addr to the number of the receive port where the data was received. The device updates transmit
port outputs on the negative edge of the clock. If outv is not set, then the device can write to the
output at any time. If outv is set, then the device can write to an output port only if the outr signal is set
(meaning the last data applied was read by the outside agent).
In defining the XBar sample design, the goal has been to keep the low-level details (i.e.,
protocol complexity) to a minimum while including features that make the structural composition
of its verification environment and sequence type requirements a super-set of what
needs to be covered in a majority of designs. These features include:
• Interacting with multiple design ports that use the same protocol, therefore needing
multiple agents per interface verification component.
• DUV receive port requiring active interaction with the verification environment for
receiving an input (therefore, requiring a master transactor).
• DUV transmit port can be set to operate with no verification environment interaction
(by setting outr signals to a constant 1) and could also be driven by a slave transactor
that manipulates signal outr.
• Ability to verify the XBar design through independently running random stimulus
generators at each port, therefore allowing this design to be verified by using only
interface verification components. An extension to this design, described in section
14.1, is used to show the implementation of hierarchical sequences that require an
environment containing module and system verification components.
Component xbar_sve represents the top level of the verification environment hierarchy.
Component xbar_test implements the specific scenario that is carried out by the infrastructure
provided by xbar_sve. Both xbar_sve and xbar_test are placed inside component
xbar_testbench. The implementation of the verification environment architecture shown in
figure 12.4 is described in the remainder of this chapter.
[Figure: XBar verification environment hierarchy — xbar_testbench containing xbar_test, xbar_sve, the XBar DUV, and per-port sequencers]
The XBar design wrapper, implemented using the receive, transmit, and internal inter-
face blocks, is shown in the following program:
Components in the verification environment interact with the DUV only through this
wrapper and interface blocks exposed by its ports. Each interface block contains a set of
helper functions to read and write values to the DUV port.
A library package can be defined to hold type declarations, constants specific to a given
implementation, and global functions and tasks. The package file for the XBar example is
shown below:
Program 12.6: XBar Package Declaration
1 package xbar_pkg;
2 `include "ovm.svh"
3
4 typedef enum bit {FALSE, TRUE} bool;
5
6 class xbar_packet extends ovm_transaction;
7 // logical fields
8 rand int unsigned wait_cycle;
9 rand int port;
10
11 // physical fields
This package first includes the header file for the OVM class library (line 2). Including
this file makes all features of the OVM class library available when package xbar_pkg is
loaded. The package includes the declaration of the special data type bool (line 4), which can be
used by importing this package. It also contains the declaration of the XBar packet
data type. This packet is described in section 12.6.
The package declaration also includes helper functions that can be used throughout the
verification environment. Function print_2_packets() (lines 36-38) is one such function; it
prints two packets.
This package can be used by importing it into all modules making use of its contents.
Alternatively, the name resolution operator can be used to directly access the contents of the
package (e.g., xbar_pkg::xbar_packet). Importing the package into each module instead of into
the global space is the better approach, since modules in the same file may use different
packages. This usage is shown in the example below:
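The usage example announced above does not appear in this excerpt; the two access styles might be sketched as follows (assuming package xbar_pkg from program 12.6):

```systemverilog
// Style 1: import the package into this module's scope only.
module uses_import;
  import xbar_pkg::*;
  xbar_packet pkt = new();   // xbar_packet is visible after the import
endmodule

// Style 2: access the package contents directly with the name
// resolution operator, without importing anything.
module uses_scope_operator;
  xbar_pkg::xbar_packet pkt = new();
endmodule
```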
The implementation of the XBar data packet xbar_packet is shown in program 12.6
(lines 6-34). Class xbar_packet is used for both transmitted and received packets. This model
contains logical fields wait_cycle and port, representing the number of cycles to wait before
applying a packet to the DUV port and the port number to which a packet belongs. The
model also includes physical fields data, src_addr, and dest_addr. A set of named constraints
is used to define legal behavior for this model (lines 16-20). The name of each constraint
can be used to disable that constraint during simulation runtime.
Class xbar_packet is derived from base class ovm_transaction (line 6), which provides all
infrastructure utilities of class ovm_transaction (e.g., copy(), clone(), print(), compare()) to class
xbar_packet. Field automation macros are used to indicate which fields are included in the
operation of each of these utilities (lines 22-28). For example, field wait_cycle is included in
all utility functions (e.g., wait_cycle is printed when function print() of xbar_packet is called)
except the compare operation (line 26). The field automation feature of the OVM class library is
described in detail in section 6.4.
Physical connections are used for communicating with the device physical ports. Procedural
interfaces are used for transaction-based communication between two verification environment
components, where communication is handled through procedure calls. The use of a
procedural interface allows each module interacting through it to be
developed without the other module being available. The reason for this flexibility is that
each module interacts with only the procedural interface and is not aware of the other module
attached to this interface. Figure 12.4 shows a pictorial view of physical and procedural
connections between modules using interface blocks.
The implementation of the interface block used for connecting to the XBar design is
shown in section 12.2. It is important that all interactions with the DUV take place through
such physical interface blocks. This approach provides a clean boundary between the design
and the verification environment.
[Figure: physical and procedural connections — DUV and verification component modules connected through interface blocks]
In the module-based implementation approach, the facilities that provide transaction-based
communication should be implemented in an interface block.
The following program shows the procedural interface block used for communication
between the sequencer and driver in the verification environment. This implementation is
used for both receive and transmit agents and is used to pass an XBar data packet from the
sequencer to the driver. As will be shown, on the transmit side of the verification environ-
ment (i.e., receive port of DUV), a packet is used to indicate the transfer content. On the
receive side of the verification environment (i.e., DUV transmit port), a packet sent to the
driver indicates the number of wait cycles before a packet should be collected from the DUV
transmit port.
23 put_pkt_ready = 0;
24 pkt = put_pkt;
25 endtask
26
27 function automatic void done(input xbar_packet pkt);
28 $cast(done_pkt, pkt.clone());
29 -> done_pkt_ready;
30 endfunction
31 endinterface: xbar_driver_if
The above program shows the implementation of the passive transaction channel interface
xbar_driver_if shown in figure 9.6 using an interface block. The methods supported by
this interface allow the transaction producer to perform a blocking put into the channel and
the transaction consumer (i.e., the driver) to perform a blocking get from the channel. In addition,
this implementation allows the transaction producer to synchronize with the consumer
by waiting until the last transaction placed into the channel is consumed (line 16). The interaction
carried through this interface block consists of the following steps:
• Producer places a packet into the interface by calling function put(). The interaction
implemented here guarantees that the last packet is consumed by the time this func-
tion is called again, and therefore the packet can be placed in the interface immedi-
ately.
• A consumer calls function get(). This function blocks until a packet becomes avail-
able (line 22). Meanwhile, the producer is blocked, waiting for the processing of this
packet to be completed by the consumer (line 16).
• Upon processing the packet, the consumer calls function done(), triggering event
done_pkt_ready (line 29).
• Triggering event done_pkt_ready allows function put() to continue beyond line 16.
Before returning, the packet returned by the consumer is copied into inout argument
pkt of function put(). This mechanism allows the producer to receive a reply packet
from the consumer in return for the packet it had placed inside the interface.
Note that method put() is a time-consuming task, and therefore uses a semaphore to create
mutual exclusion among possibly multiple callers to this task. This means that no new
packet is placed into the transaction interface until the packet already placed in the
interface is consumed.
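Only the tail of the interface (lines 23-31) appears above. Based on the steps just described, the full channel might be sketched as follows; this is an assumption-laden reconstruction, not the book's listing (the semaphore and any names other than those quoted in the text are guesses):

```systemverilog
interface xbar_driver_if_sketch;
  import xbar_pkg::*;

  semaphore   lock = new(1);     // one producer inside put() at a time
  bit         put_pkt_ready;     // a packet is waiting in the channel
  xbar_packet put_pkt, done_pkt;
  event       done_pkt_ready;

  // Blocking put: publish the packet, wait until the consumer calls done(),
  // then return the consumer's reply through the inout argument.
  task automatic put(inout xbar_packet pkt);
    lock.get(1);
    $cast(put_pkt, pkt.clone());
    put_pkt_ready = 1;
    @(done_pkt_ready);           // wait for the consumer (cf. line 16)
    pkt = done_pkt;              // reply packet copied back to the producer
    lock.put(1);
  endtask

  // Blocking get: wait until a packet is available, then hand it over.
  task automatic get(output xbar_packet pkt);
    wait (put_pkt_ready);        // block until a packet arrives (cf. line 22)
    put_pkt_ready = 0;
    pkt = put_pkt;
  endtask

  // Consumer signals completion and returns its (possibly modified) packet.
  function automatic void done(input xbar_packet pkt);
    $cast(done_pkt, pkt.clone());
    -> done_pkt_ready;
  endfunction
endinterface
```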
Event interfaces are used as a means of event based communication between verification
environment components. For example, event interfaces are used by a monitor to broadcast
information about conditions that are being monitored. Other components in the environ-
ment can track events defined in an event interface to perform tasks related to conditions
flagged by these events. An event interface is not as powerful as a transaction interface, but
is simple to implement and easy to understand. The module-based implementation of XBar
agents makes use of the following event interfaces, implemented using interface blocks:
The transmit agent event interface (lines 1-8) contains event input_to_duv_collected,
which is emitted by the transmit agent monitor to indicate that an input to the DUV was collected
(defined as when a packet applied to the input is accepted by the DUV). In this case, the
monitor sets xbar_xmt_pkt to the packet collected from the DUV receive port.
The receive agent event interface (lines 10-18) contains event duv_valid_output_observed,
emitted by the receive agent monitor to indicate that the DUV has produced a valid output,
and event output_from_duv_collected, emitted by the receive agent monitor to indicate that a
packet was collected from the DUV output (defined as when the receive driver accepts the
packet). In both cases, xbar_rcv_pkt is set to the observed packet.
The internal engines of both receive and transmit agents (e.g., sequencer) start their
operation when event start_all is detected and stop when event stop_all is detected. These
events are used as a means of controlling verification start and stop time.
As will be shown in the following sections, these event interfaces will be used by both
the sequencer and the scoreboarding processes.
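Based on this description, the receive agent event interface might look like the following sketch (the event and field names are taken from the text; the actual numbered listing is not reproduced in this excerpt):

```systemverilog
interface xbar_rcv_agent_event_if_sketch;
  import xbar_pkg::*;

  event start_all;                  // starts the agent's internal engines
  event stop_all;                   // stops them
  event duv_valid_output_observed;  // monitor saw valid data on DUV output
  event output_from_duv_collected;  // receive driver accepted the packet
  xbar_packet xbar_rcv_pkt;         // packet observed by the monitor
endinterface
```

Because an event interface carries only events and a shared packet handle, any component with access to the interface instance can observe these conditions without knowing which component emitted them.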
Now that the XBar package and all interfaces have been defined, the hierarchy of the verification
environment can be built. In this section, the focus is on how verification environment
components are declared, instantiated, and connected. The core function of each component
is described in the following sections.
The implementation of the hierarchy for the receive agent (figure 12.3) is shown in the
following program:
4 endmodule: xbar_rcv_driver
5
6 module xbar_rcv_monitor(interface phy_if, interface event_if);
7 import xbar_pkg::*;
8 parameter int PORT_NUM = 0;
9 endmodule: xbar_rcv_monitor
10
11 module xbar_rcv_sequencer(interface driver_if, interface event_if);
12 import xbar_pkg::*;
13 parameter int DEF_COUNT = OVM_UNDEF;
14 parameter int PORT_NUM = 0;
15 parameter int MAX_RANDOM_SEQS = 10;
16 endmodule: xbar_rcv_sequencer
17
18 module xbar_rcv_agent(interface phy_if);
19 import xbar_pkg::*;
20 parameter ovm_active_passive_enum AP_TYPE = OVM_ACTIVE;
21 parameter int PORT_NUM = 0;
22 parameter int DEF_COUNT = 0;
23
24 xbar_rcv_agent_event_if event_if();
25 xbar_rcv_monitor #(.PORT_NUM(PORT_NUM)) rcv_monitor(phy_if, event_if);
26
27 // instantiate driver, driver_if, and sequencer if active
28 generate
29 if (AP_TYPE == OVM_ACTIVE) begin: active
30 xbar_driver_if driver_if();
31 xbar_rcv_driver #(.PORT_NUM(PORT_NUM))
32 driver(phy_if, driver_if, event_if);
33 xbar_rcv_sequencer #(.PORT_NUM(PORT_NUM),
34 .DEF_COUNT(DEF_COUNT)) sequencer(driver_if, event_if);
35 end
36 endgenerate
37 endmodule: xbar_rcv_agent
22 ) xbar_ivc_env3 (.xbar_ve_rcvi(xbar_ve_rcvi), .xbar_ve_xmti(xbar_ve_xmti));
23 endmodule: xbar_sve
This component contains four instances of module xbar_ivc_env (lines 8-22) and an
instance of a scoreboard defined to have four ports (line 5). For each instance of xbar_ivc_env,
the port number, the active status, and the default count for the receive and transmit sequencers
are specified as parameters. Note that the default receive count is set to a very large number, since
by default we expect the environment to receive any number of packets transmitted by the
DUV. The default transmit count is set to OVM_UNDEF, which, as will be shown, indicates that
a random number of packets should be transmitted on that port.
Component xbar_sve implemented above is the top level of the verification environment
hierarchy. The use of this module to create the complete testbench is shown below:
Program 12.13: Testbench top-level implementation
1 module xbar_testbench();
2 import xbar_pkg::*;
3
4 bit reset = 0;
5 bit clk = 0;
6
7 xbar_duv_xmt_if xbar_duv_xmti(reset, clk);
8 xbar_duv_rcv_if xbar_duv_rcvi(reset, clk);
9 xbar_duv_internal_if xbar_duv_inti(reset, clk);
10
11 xbar_duv_wrapper xbar_duvw(.*);
12 xbar_sve xbar_sve(.xbar_ve_rcvi(xbar_duv_xmti),
13 .xbar_ve_inti(xbar_duv_inti), .xbar_ve_xmti(xbar_duv_rcvi));
14 xbar_test xbar_test(.*);
15
16 always #5 clk = ~clk;
17
18 initial begin
19 #0 reset = 1'b1;
20 #51 reset = 1'b0;
21 end
22 endmodule: xbar_testbench
12.10 Monitor
Receive and transmit monitors of the verification environment track signal activity on the
transmit and receive ports of the DUV, respectively. These monitors serve the following purposes:
• Marking DUV port status through events
The monitor components depend only on DUV port values and are independent of
other components in the agent. This independence allows monitors to be used in passive
agents that do not contain a driver or a sequencer.
Task rcv_monitor_main_loop() (lines 10-24) is the start point of the main loop for passive
monitoring of the DUV port connected through port phy_if. The main loop of the monitor
performs a number of operations:
• It identifies when valid data is placed on the DUV output. Upon detecting this condition,
it sets xbar_rcv_pkt in the event interface event_if to information collected from
the physical interface (data and address fields) and emits duv_valid_output_observed in
event_if (lines 16, 17). This information is used by the receive sequencer to set field
wait_cycle in the next generated packet, according to the source address of the incoming
packet.
• It identifies when the DUV output is read, as signified by assertion of the outr signal (line
19). Upon detecting this condition, it sets xbar_rcv_pkt in event_if and emits event
output_from_duv_collected in event_if (lines 21, 22).
• It emits event cov_pkt_collected, which is used by the covergroup (lines 26-30) to collect
coverage on the collected packet.
The implementation of this monitor is self-contained and depends only on its module
ports. As will be shown, events emitted in event_if are used by the receive sequencer to synchronize
the generation of the next packet (used only for passing the value of field wait_cycle) to be
passed to the receive driver.
The implementation of the transmit agent monitor is similar in structure and style to the
receive agent monitor and is not shown in this text.
12.11 Driver
Transmit and receive drivers interact with transmit and receive sequencers, respectively, to
transmit and receive packets. The implementation of receive driver is shown in the program
below:
29 forever get_packet_and_collect();
30 end
31 endmodule: xbar_rcv_driver
As shown in the above implementation, the receive driver interacts with phy_if (physical
port), driver_if (transaction interface), and event_if (event interface). It uses driver_if to receive
transactions from the sequencer connected to the other end of this port (line 8) by doing a
blocking get, and to indicate that the transfer is completed (line 10). The value of field wait_cycle in
the packet received from the sequencer is used to determine the number of cycles to wait
before a packet read is acknowledged on the DUV interface (line 15). The receive driver uses
phy_if to collect the packet from the DUV transmit port. The event interface event_if is passed
to the driver as a general guideline and is not used in this implementation. The important
observation about this implementation is that, because of the use of different types of interfaces,
the implementation of this driver is completely self-contained. The implementation of
the transmit agent driver is similar in structure and style to the receive agent driver and is not
shown in this text.
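Based on this description, the body of the receive driver might be sketched as follows. This is a hypothetical reconstruction: the physical signal names (clk, outr, data) and the protocol timing are assumptions, not the book's listing.

```systemverilog
// Inside module xbar_rcv_driver, with ports phy_if, driver_if, event_if.
task automatic get_packet_and_collect();
  xbar_packet pkt;
  driver_if.get(pkt);       // blocking get from the sequencer (cf. line 8)
  driver_if.done(pkt);      // indicate the transfer is completed (cf. line 10)

  // wait_cycle decides when the DUV output read is acknowledged (cf. line 15)
  repeat (pkt.wait_cycle) @(posedge phy_if.clk);

  phy_if.outr <= 1'b1;      // acknowledge: outside agent has read the output
  @(negedge phy_if.clk);
  pkt.data = phy_if.data;   // collect the packet from the DUV transmit port
  phy_if.outr <= 1'b0;
endtask
```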
12.12 Sequencer
Receive and transmit sequencers are used to generate and pass packets to receive and trans-
mit drivers, respectively. Sequencers form the core of scenario generation utilities, and as
such, should provide a rich set of features that allow for all scenarios to be generated.
The implementation of a sequencer is divided into four aspects:
• Structure and initialization
• Sequencer main loop
• Item generation
• Sequencer activation
The following program shows the implementation of the XBar receive sequencer using
a module block. This implementation highlights the initialization of the sequencer using
module parameters.
The XBar receive sequencer has ports driver_if, connected to the receive driver, and event_if,
connected to all subcomponents of the receive agent. Module parameters (lines 3-5) are used
to initialize this sequencer.
The following program shows the implementation of the main generation methods of
this sequencer:
Task main() (lines 7-16) is the start point of generation activity and is called when this
sequencer is activated (program 12.19). Variable count is used to decide how many packets
should be generated. The value of count is set to a random value if it is initially set to
OVM_UNDEF. Function set_count() is also provided so that the value of count can be set by a
top-level testcase component before the sequencer is activated. Task main() uses a randcase
statement to randomly choose between functions simple() and reactive_receive(). These functions,
shown in the next program, generate packets and pass the generated packet to the
driver through interface driver_if.
The implementation of the packet generation methods is shown in the following program:
The above implementation is included in the module block that implements the receive
sequencer (program 12.16 line 11). This means that the above declarations exist in the body
of the sequencer module block.
This implementation includes field template_pkt, which is an instance of the default item
type xbar_packet that is to be generated by this sequencer (line 1). This default item will be
initialized before the sequencer starts, and only if it has not already been initialized by the
testcase that starts this sequencer (program 12.19 line 4). Overriding the default behavior of this
sequencer is achieved by assigning template_pkt to a class object derived from the default
type xbar_packet. For example, class xbar_packet_new can be derived from xbar_packet, containing
a new constraint block. Because of polymorphism, assigning template_pkt to an
object of type xbar_packet_new and randomizing template_pkt produces results matching the
additional constraint block specified for class xbar_packet_new. Note that in this implementation
approach, the default sequence item type for this sequencer is indicated through the
object type that is stored in template_pkt, and losing this object means losing the information of
what sequence type should be generated. This consideration plays a role in how the sequence
generation methods are implemented.
Function override_template_item() (lines 3-5) is provided as a means of overriding the
default item type through class polymorphism (section 4.7.4). The use of this function is
described in section 12.15.
Function simple() (lines 7-14) operates by first creating a clone of template item
template_pkt (line 9). Working on a cloned copy of template_pkt is mandatory, since a
sequencer may have multiple concurrently running threads that use template_pkt as a template
item; as such, template_pkt should not be used to carry out operations specific to any one
thread. Function clone() used in this step is a predefined function of class ovm_transaction,
which is the parent class of xbar_packet. The use of this function guarantees that the cloned
object created on line 9 and assigned to rpkt has the true type of the object stored in template_pkt,
even when this type is one derived from xbar_packet. Note that because of polymorphism,
randomizing rpkt produces a result that satisfies any constraint specified for the true type of
the template_pkt object. Function put() overwrites the value of its argument; therefore, after
returning from the call to put(), the object pointed to by rpkt is the object returned by the consumer.
Function reactive_receive() (lines 16-28) interacts with the receive monitor to decide the
value of wait_cycle depending on the source address of the packet that is currently arriving at
the DUV transmit port connected to this receive agent. The implementation of this function
is similar to function simple(), except that after randomizing rpkt, the sequencer waits for
event duv_valid_output_observed in event interface event_if to be emitted by the monitor, and
then sets the value of wait_cycle to the data field of the packet currently being received on the
DUV port (line 26). It then passes rpkt to the consumer by calling function put() of driver_if
(line 28).
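The clone-then-randomize pattern of simple() can be sketched as follows. This is a simplified reconstruction assuming the template_pkt field and driver_if port described in the text, not the book's exact listing:

```systemverilog
// Inside the sequencer module body, alongside template_pkt and driver_if.
task automatic simple();
  xbar_packet rpkt;
  // clone() preserves the true (possibly derived) type of template_pkt,
  // so randomization honors any constraints added by a derived class.
  $cast(rpkt, template_pkt.clone());
  assert(rpkt.randomize());
  driver_if.put(rpkt);  // blocking; rpkt holds the consumer's reply on return
endtask
```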
The following program shows the activation mechanism for this sequencer:
The start and stop of this sequencer are controlled by events start_all and stop_all defined in its
event interface event_if. The always loop is entered when event start_all is emitted (line 1).
This component first creates an object for template item template_pkt if it has not already been
assigned explicitly by calling function override_template_item() (program 12.19 line 3). Note
that the OVM factory is used to create this object, and therefore type override mechanisms
(section 6.3.1) can be used to modify the default behavior of this sequencer by changing the
object type that is created in this step. This component then executes task main(), the
sequencer main loop (lines 6-9), in parallel with a thread waiting for the occurrence of event
stop_all (lines 10-13) by using a fork-join_any statement. If event stop_all is emitted before
task main() is completed, then the thread of task main() is killed. Before completing this
always block, field template_pkt is set to null so that a factory override can take effect for the
next activation of this sequencer. The always block is restarted at the next occurrence of
event start_all.
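The activation mechanism described above can be sketched as follows (a simplified reconstruction: the text uses the OVM factory where this sketch calls new(), and the exact member names are assumptions):

```systemverilog
// Inside the sequencer module body.
always begin
  @(event_if.start_all);
  if (template_pkt == null)
    template_pkt = new();       // the text creates this via the OVM factory
  fork
    main();                     // sequencer main loop
    @(event_if.stop_all);       // or stop when stop_all fires
  join_any
  disable fork;                 // kill main() if it is still running
  template_pkt = null;          // let a factory override take effect next time
end
```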
Sequence generation in a module-based implementation has limited flexibility. Adding
new behaviors requires that new functions be added to this module and that the weights of the
randcase statement in task main() be changed. Generating multi-sided scenarios (section
8.1) is also challenging when using modules to implement sequencers. Chapter 14
presents the use of the OVM class library for implementing sequencers using class objects
while addressing these sequence generation challenges.
Scoreboards check for correct data transfer between DUV ports. The following program
shows the scoreboard interface methods for the XBar verification environment:
Program 12.20: XBar scoreboard implementation
1 module xbar_scoreboard();
2 import xbar_pkg::*;
3 parameter int NUM_PORTS = 1;
4
The internal implementation of the scoreboard is not shown. The scoreboard, however,
creates an array of scoreboards, one for each port. Inserting a packet into the scoreboard
places the packet in the scoreboard for its destination address. Matching a packet to the
scoreboard matches that packet against its destination address as well.
The scoreboard for XBar design is instantiated in xbar_sve (program 12.12 line 5). This
is the scope that contains agent instances for all four ports of the design and as such, monitor
components for all four agents can be accessed. The following program shows the content of
file included in program 12.12 at line 6 for connecting the scoreboard to the receive and
transmit monitors inside each agent:
As shown in this program, events in event interfaces of receive and transmit agents are
tracked and their collected packet is either inserted into the scoreboard (for transmit agents)
or matched against the scoreboard (for receive agents).
The implementation of scoreboarding in this design is such that the same scoreboarding
implementation continues to check for correct packet transfer when the agent components
are changed from active to passive and the packets are generated by other components in the
DUV and not by the verification environment. The reason is that this scoreboarding depends
only on the monitor and event interface implementations, which are present in passive agents.
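For one port, the connection pattern described above might look as follows. This is a sketch only: the scoreboard insert/match functions, event names, and monitor field names are assumptions, not the book's code.

```systemverilog
// transmit agent: insert each collected packet as an expected packet
always @(xmt_agent0.event_if.pkt_collected)
  xbar_sb.insert(xmt_agent0.monitor_pkt);

// receive agent: match each collected packet against the expected set
always @(rcv_agent0.event_if.pkt_collected)
  xbar_sb.match(rcv_agent0.monitor_pkt);
```

Because only the monitors and event interfaces are referenced, the same connections work whether the agents are active or passive.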
Module xbar_test shown above is instantiated in the top level testbench. Upon entering
the initial block, the test component waits for the global reset signal to be deactivated (line
5). It then calls function start() of the xbar_sve component, which in turn emits event start_all
in the event interface blocks of all agents. All sequencers are activated after this function is
called. Transmit sequencers generate a random number of packets since their count variable
is initialized to OVM_UNDEF, and receive sequencers receive up to 1,000,000 packets. The test
component shown above waits for 10,000 time units after starting the environment and then
ends the simulation by calling function $finish (line 8).
A test component can also be used to create a directed test. A directed test is shown in
the following example:
In the first step, the above directed test disables the sequencers in all transmit agents
(line 5) by calling function disable_all_xmt_agent_sequencers() of xbar_sve. This function sets
variable count of all transmit sequencers to 0 by calling the set_count() function of all transmit
sequencers (program 12.17). The test then waits for the global reset to be deactivated (line 6)
and then starts the verification environment (line 7). Next, this test sends two packets from
XBar port 0 to XBar port 1 by calling function xmt_directed_packet() of the xbar_sve component.
This function in turn calls a function in the transmit sequencer for port 0 to send 2 packets to
destination port 1.
The default behavior of a sequencer can be changed through one of these mechanisms:
• Adding new functions for generating specific sequences of items.
• Calling function override_template_item() of the sequencer to explicitly override its
default sequence item type before the sequencer is started.
• Using a type override for the OVM factory to implicitly modify the object type created
by the OVM factory for template_pkt when the sequencer is being started.
Functions simple() and reactive_receive() (section 12.12) show examples of how new
packet generation capabilities can be added to this sequencer. The same approach can be
used for adding new functions that add new generation behaviors to the sequencer.
Both explicit and implicit modification of the sequencer default item require that a new
class type be derived from the default sequence item of a sequencer. In the explicit approach,
the default sequence item of a sequencer is replaced with an instance of the new class type by
calling function override_template_item(). In the implicit approach, a type override from the
default sequence item to the newly defined class type is specified for the OVM factory
before the sequencer is activated.
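The two approaches can be contrasted in a short sketch. The sequencer handle name is an assumption; the factory call assumes the OVM global factory handle and its by-name override method.

```systemverilog
xbar_packet_wait_5 pkt = new();

// explicit: hand the sequencer an instance of the derived type directly
rcv_sqnsr.override_template_item(pkt);

// implicit: register a factory type override before the sequencer starts;
// the sequencer's factory create() call then returns the derived type
factory.set_type_override_by_name("xbar_packet", "xbar_packet_wait_5");
```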
The following program shows the implementation of a testcase that explicitly and
implicitly modifies the sequencers in the XBar verification environment:
The above implementation shows the definition of class xbar_packet_wait_5 (lines 4-7)
and class xbar_packet_wait_8 (lines 9-12), which are derived from class xbar_packet. Note that
macro ovm_object_utils() is specified for both these classes so that they are handled correctly
by the OVM factory and the predefined functions provided by the OVM class library (e.g.,
clone()). The test is then implemented by first creating class object pkta of type
xbar_packet_wait_8 (line 16), and then using function ivc_override_rcv_template_item() to
explicitly set the default sequence item for receive sequencers at ports 2 and 3 to pkta. Function
ivc_override_rcv_template_item() is implemented as part of module xbar_sve and in turn
calls function override_template_item() of the sequencer at the port identified by its first argument.
A type override is specified for the OVM factory to generate an object of type
xbar_packet_wait_5 anytime an object of type xbar_packet is requested (line 21). This override
causes the default sequence item for all sequencers whose default sequence types are not
explicitly initialized to be set to an object of type xbar_packet_wait_5 (see program 12.19 lines
3-4 for the implementation that achieves this behavior). With this configuration, receive
sequencers at ports 2 and 3 use type xbar_packet_wait_8 as their default sequence item type,
and receive sequencers at ports 0 and 1 and all transmit sequencers use class type
xbar_packet_wait_5 as their default sequence item type. All sequencers are then started by
calling function xbar_sve.start() of xbar_sve, which in turn starts the sequencers in all receive
and transmit agents.
CHAPTER 13 Class-Based VE Implementation

13.1 Class-Based Implementation Overview
The focus in providing this example is not on following correct methodology but on providing a clear
overview of the elements involved in a class-based implementation. The class-based implementation
shown in this example should be contrasted with the module-based implementation of
the same example (section 12.1). The implementation of a verification environment following
OVM guidelines is described in section 13.6.
The following program shows a class-based implementation of the verification environment
shown in figure 12.1:
52 end
53 endmodule
The major difference between this top level environment and the one for the module-based
implementation is that in a class-based implementation, the verification environment hierarchy
is modeled as a class hierarchy, whereas in a module-based implementation, the verification
environment hierarchy is modeled as a module hierarchy.
It should be noted that the class-based implementation shown above is only meant to
highlight the general structure of a class-based implementation, and this implementation
does not follow the recommended guidelines for implementing a verification environment.
For example, this implementation does not follow the component self-containment guideline
since in this example, each component makes method calls into other components in the
environment (e.g., the sequencer directly calling the drive() method of the driver).
The remainder of this chapter focuses on showing the implementation of the verification
environment for the XBar design using the OVM class library, following the
class-based implementation approach.
It is expected that data objects are generated by sequences. As such, classes representing
a data model should be inherited from ovm_sequence_item (line 1). The implementation
above shows the use of the begin/end variation of macro ovm_object_utils() to register class
xbar_packet with the OVM factory (line 7). In addition, field automation macros (section 6.4)
are specified to allow members of this data model to be managed by the OVM facilities
(lines 8-11). The constructor for this model allows for specifying a logical name for an
instance, if necessary.
Class-based implementation of a verification environment uses the SystemVerilog interface
construct to communicate between the verification environment and the DUV. In class-based
implementations, a virtual interface is used as a means of allowing classes to access a SystemVerilog
interface. This means that classes representing components in the verification
environment have a pointer to the physical interface to the DUV. This pointer is initialized to
point to the DUV interface when the verification environment hierarchy is built. The following
small program shows the use of a virtual interface as a field of a class object. The physical
interface can then be accessed through this pointer:
12 task run();
13 repeat (10) @(virtual_dif.clk) $display($time, virtual_dif.clk);
14 $finish;
15 endtask
16 endclass
17
18 my_class sve;
19 initial begin
20 sve = new();
21 sve.virtual_dif = dif;
22 sve.run();
23 end
24 endmodule
25
26 module testbench();
27 duv_if mdif();
28 duv duvi(mdif);
29 test test(mdif);
30 endmodule
The above example shows the declaration of class my_class containing virtual interface
virtual_dif (line 11). Once this virtual interface is initialized (line 21) to point to an instance of
an interface (line 27), it can be used in a class object for accessing the physical signals inside
that physical interface (line 13).
Event objects are used as a means of event-based synchronization between verification environment
components. OVM provides class ovm_event, which can be used to exchange data
objects derived from ovm_object. The provider of the data object triggers the event while
providing the data to be exchanged, and the consumer of the data object waits for the event to be
triggered and then collects the data object by calling a predefined method of the event object.
Event-based synchronization is similar to a non-blocking put and a blocking get operation by
the producer and consumer, respectively. As such, event-based synchronization provides a
very simple form of transaction-based synchronization. OVM also provides the predefined
class ovm_event_callback that can be used to initiate method calls when an event is triggered
instead of passing data objects.
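This handshake can be sketched with the ovm_event API (trigger(), wait_trigger(), and get_trigger_data()); the surrounding task and variable names are illustrative:

```systemverilog
ovm_event pkt_ready = new("pkt_ready");

// producer side: a non-blocking "put" -- trigger with the data attached
task produce(xbar_packet pkt);
  pkt_ready.trigger(pkt);
endtask

// consumer side: a blocking "get" -- wait for the trigger, then collect
task consume(output xbar_packet pkt);
  pkt_ready.wait_trigger();
  $cast(pkt, pkt_ready.get_trigger_data());
endtask
```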
The use of event-based synchronization is shown in the following example program.
Note that this example does not strictly follow the OVM guidelines for creating an environment
hierarchy (section 7.2) since the focus here is on how an event object is used for synchronization
purposes. The implementation of a verification environment following OVM
guidelines is described in section 13.6.
The above example defines classes monitor and target. Class monitor contains event
packet_ready of type ovm_event (line 11). The monitor triggers this event by passing
collected_packet, the packet currently collected in the monitor and derived from ovm_object,
to function trigger() of event object packet_ready (line 22). Event packet_ready is allocated in
the constructor for class monitor (line 15). Class target contains event monitor_got_pkt of type
ovm_event. This event is used by first waiting for it to be triggered (line 37) and then collecting
the data from this event object (lines 38-39). Note that event monitor_got_pkt is allocated
inside the constructor for class target (line 32) even though it will be set to point to the event
object in the monitor during hierarchy construction.
The top level of this example instantiates monitor mntr (line 43) and target tgt (line 44).
It then overrides event pointer monitor_got_pkt in tgt by setting it to event packet_ready in mntr
(line 47) before running the test (line 48).
The implementations of the target and monitor classes shown in this example are specifically
defined to allow each component to operate even if the other component is not yet available,
hence achieving self-containment. This is accomplished by creating the event object in each
component's constructor, even though it is understood that the event pointer in the target
will be changed to point to the event object in the monitor when the hierarchy is being built.
[Figure: XBar receive monitor, showing analysis interface pkt_bdcst_port and physical interface phy_if connecting the monitor to the DUV]
This program defines class xbar_rcv_monitor derived from class ovm_monitor (line 1).
Field phy_if (line 2) will be initialized to point to the top level interface block that connects
with the transmit port of the DUV. Field port_num (line 3) identifies the DUV port to which this
monitor is attached. Field rcv_pkt (line 4) holds the latest packet collected from the DUV
transmit interface. Analysis port connector object pkt_bdcst_port (declared on line 9 and initialized
on line 26) is used to broadcast packets collected from the DUV interface. Event
cov_pkt_collected (line 7) activates coverage group cov_packet (declared on lines 17-21 and
initialized on line 25).
Blocking peek interface duv_out_valid_imp (declared on line 10 and initialized on line
27) is provided to allow other components to synchronize with the beginning time of when a
new packet is placed by the DUV on its transmit port. Function peek() is defined to block
until event duv_out_is_valid_event is emitted, and then to return a copy of rcv_pkt. Event
duv_out_is_valid_event is emitted in task run() after the packet placed by the DUV on its transmit
port is collected and stored in rcv_pkt (section 13.7). This interface is implemented as a
blocking interface so that outside components can synchronize by waiting for task peek() to
complete. Also, because of using a peek interface, multiple outside components can use this
interface to synchronize with this monitor.
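The peek() behavior described above amounts to the following sketch. Returning the copy via clone() is an assumption; in the book's code this task sits behind the blocking peek imp connector.

```systemverilog
// block until the monitor flags a valid packet, then return a copy so that
// several callers can peek without consuming the item
task peek(output xbar_packet pkt);
  @(duv_out_is_valid_event);
  $cast(pkt, rcv_pkt.clone());
endtask
```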
Macros are added to this class declaration (lines 12-15) in order to register this class
with the OVM factory, to include predefined methods of OVM classes (line 12),
and to also allow fields in this class to participate in OVM field automation. No hierarchy
exists below this monitor component. As such, task build() of this class is not required to be
defined.
Class ovm_monitor does not implement protocol-specific behavior, and therefore, the
run phase behavior of this monitor should be implemented in the predefined task run() of
class ovm_monitor. The statement on line 35 includes that implementation (program 13.17) in
this class.
This program defines class xbar_rcv_sequencer derived from class ovm_sequencer (line
1). Field port_num (line 2) identifies the DUV port to which this sequencer is attached. Virtual
interface phy_if is included to allow sequences running in this sequencer to access the
DUV physical interface (line 3). Blocking peek interface duv_out_valid_port (declared on line
5 and initialized on line 14) will be connected to the blocking peek interface of the receive
monitor (program 13.6 line 10) when building the agent containing this sequencer (program
13.10). This interface is used with reactive sequences (section 14.6) to synchronize to the
time when a new packet appears on the DUV transmit port. No hierarchy exists below this
sequencer; therefore, task build() of this class is not defined in this program.
The begin/end variation of OVM macro ovm_sequencer_utils() (section 8.2.2) is used to
register class xbar_rcv_sequencer with the OVM factory, and to also create the sequence
library container for this sequencer (lines 7-9). Any sequence added to this sequencer by
using macro ovm_sequence_utils() (e.g., program 13.19 line 2) is placed in the sequence
library created in this step.
OVM macro ovm_update_sequence_lib_and_item() is used (line 13) to specify the default
sequence item for this sequencer and to also initialize the sequencer with the needed
sequencing infrastructure. The use of this macro is mandatory.
Note that the implementation of class xbar_rcv_sequencer does not include the sequence
item interface port seq_item_cons_if, since this object is included by default in class
ovm_sequencer.
The transaction type generated by the sequencer is xbar_packet. In the receive
direction, this packet contains field wait_cycle, which instructs the slave driver on how
many cycles to wait before accepting a packet from the DUV. In the transmit direction, the
packet produced by the sequencer is transmitted by the driver into a DUV receive port.
The default behavior of the receive and transmit sequencers is to execute sequence
ovm_simple_sequence a number of times defined by field count, where each execution of this
sequence generates one sequence item whose type is given by the default sequence type of
the sequencer. This means that class xbar_packet cannot be used directly as the default
sequence item type for either the receive or the transmit sequencers if the default behavior of
the sequencers is to be used. The reason is that the default implementation of xbar_packet
(program 13.3) does not include information on which port it is being generated on. It is,
however, required that the source address of a packet generated by the transmit sequencer be
set to the local port number, and the destination address of a packet generated by the receive
sequencer be set to the local port number. Because of this requirement, the default
sequence item type for the receive sequencer is set to class xbar_rcv_packet, shown below:
Class xbar_rcv_packet is derived from xbar_packet and includes field port_num. The goal
is to have the value of port_num set to the local port number before randomization of this
packet is started so that this field can be used to constrain the destination address of this
packet to the current port during randomization (line 6). Function pre_randomize() (section
10.5.3) is used to set the value of field port_num before randomization is started. Function
get_sequencer() of class ovm_sequence_item (the parent class of xbar_packet) is used in
pre_randomize() (line 14) to set field port_num to the port number of the sequencer that is generating
this packet type.
In the transmit direction, the default sequence item type is set to xbar_xmt_packet, which
is implemented similarly to xbar_rcv_packet except that the constraint on line 6 is replaced with a
constraint setting the value of src_addr to field port_num.
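Based on the description above, class xbar_rcv_packet can be sketched as follows. The constraint name and the cast in pre_randomize() are assumptions; the field and function names come from the text.

```systemverilog
class xbar_rcv_packet extends xbar_packet;
  int unsigned port_num;

  // constrain the destination address to the local port (line 6)
  constraint local_port_c { dest_addr == port_num; }

  `ovm_object_utils(xbar_rcv_packet)

  function new(string name = "");
    super.new(name);
  endfunction

  // set port_num from the generating sequencer before randomization (line 14)
  function void pre_randomize();
    xbar_rcv_sequencer sqnsr;
    if ($cast(sqnsr, get_sequencer()))
      port_num = sqnsr.port_num;
  endfunction
endclass
```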
This program defines class xbar_rcv_driver derived from class ovm_driver (line 1). Field
port_num (line 2) identifies the DUV port to which this driver is attached. Field phy_if (line 4)
will be set to point to the top level interface block that connects with the transmit port of the
DUV. Sequence item producer interface seq_item_prod_if, which is a default field of class
ovm_driver, is used to receive sequence items from the sequencer.
Macros are added to this class declaration (lines 6-8) to include predefined methods of
the OVM classes (line 6) and to also allow fields in this class to participate in OVM field
automation (line 7). No hierarchy exists below this driver component. As such, task build()
of this class is not redefined.
The run phase behavior of this driver is implemented in the predefined task run() of
class ovm_driver. The statement on line 35 includes that implementation (program 13.18) in
this class.
This program defines class xbar_rcv_agent derived from class ovm_agent (line 1). Field
is_active (line 2) indicates whether this is an active or passive agent. Field port_num (line 3)
identifies the DUV port to which this agent is attached.
The purpose of this class is to instantiate the monitor, driver, and the sequencer, and to
make the necessary connections between these components. Task build() is redefined in this
implementation (lines 18-39) to build the hierarchy rooted at this component.
The creation of each component follows a three step process:
• Specifying configuration settings for the component
• Creating an instance of the component
• Calling task build() of component to build its sub-hierarchy
The monitor is created first. Before creating an instance of the monitor (line 22), configuration
function set_config_int() is used to set field port_num of the monitor to field port_num
of the agent. Task build() of the monitor is called after the monitor is created (line 23). Note
that this task is called even though there is no hierarchy below the monitor component. This
is recommended, since later changes may require adding sub-components to the monitor.
A passive agent does not contain the driver and the sequencer. As such, these components
are created only if the agent is active (line 25). The sequencer is created (lines 26-28)
with the same flow as that of the monitor. The blocking peek interface used for synchronization
between the sequencer and monitor is initialized by connecting blocking peek port object
duv_out_valid_port of the sequencer to the blocking peek imp object duv_out_valid_imp of the
monitor (line 31).
The driver is created (lines 33-35) with the same steps as that of the monitor. The sequence
item interface connector objects in the driver and sequencer are connected after both are created
(line 37).
This class also defines function assign_vi(), which is used by higher layer components to
initialize the virtual interfaces in this agent (lines 41-47). As such, field phy_if of the driver and
monitor are not set when building the agent, since these fields will be initialized procedurally
after the environment hierarchy is built.
Figure 13.2: XBar Interface VC Architecture
8 `ovm_component_utils_begin(xbar_ivc)
9 `ovm_field_int(port_num, OVM_ALL_ON)
10 `ovm_field_enum(ovm_active_passive_enum, is_active, OVM_ALL_ON)
11 `ovm_component_utils_end
12
13 function new(string name="", ovm_component parent = null);
14 super.new(name, parent);
15 endfunction: new
16
17 function void build();
18 super.build();
19
20 set_config_int("xbar_xmt_agent0", "port_num", port_num);
21 set_config_int("xbar_xmt_agent0", "is_active", is_active);
22 $cast(xmt_agent, create_component("xbar_xmt_agent", "xbar_xmt_agent0"));
23 xmt_agent.build();
24
25 set_config_int("xbar_rcv_agent0", "port_num", port_num);
26 set_config_int("xbar_rcv_agent0", "is_active", is_active);
27 $cast(rcv_agent, create_component("xbar_rcv_agent", "xbar_rcv_agent0"));
28 rcv_agent.build();
29 endfunction: build
30
31 function void assign_vi(virtual interface xbar_duv_xmt_if xmt_phy_if,
32 virtual interface xbar_duv_rcv_if rcv_phy_if);
33 xmt_agent.assign_vi(rcv_phy_if);
34 rcv_agent.assign_vi(xmt_phy_if);
35 endfunction: assign_vi
36 endclass: xbar_ivc
This program defines class xbar_ivc derived from class ovm_env (line 1). Field is_active
(line 2) indicates whether this is an active or passive verification component. Field port_num
(line 3) identifies the DUV port to which this component is attached.
The creation of each component follows the same three-step process outlined for creating
the components in the receive agent (section 13.6.1.4). Component xmt_agent is created
(line 22) after its relevant configuration settings are made (lines 20, 21), followed by a call to
task build() of this component. The receive agent is created using the same process (lines
25-28).
This class also defines function assign_vi(), which is used by higher layer components to
initialize the virtual interfaces in this verification component (lines 31-35). This function is
implemented by calling function assign_vi() of the receive and transmit agents.
The implementation of the XBar verification environment top level is shown below:
Program 13.12: XBar verification environment top level
1 class xbar_sve extends ovm_env;
2 `ovm_component_utils(xbar_sve)
3 int unsigned num_ports = 4;
4
5 xbar_ivc ivcs[];
6 xbar_scoreboard1 sbs[];
7
8 function new(string name="", ovm_component parent=null);
9 super.new(name, parent);
10 endfunction
11
12 virtual function void build();
13 string inst_name;
14 super.build();
15
16 ivcs = new[num_ports];
17 sbs = new[num_ports];
18 for (int i=0; i<num_ports; i++) begin
19 $sformat(inst_name, "xbar_ivc[%0d]", i);
20 set_config_int(inst_name, "port_num", i);
21 $cast(ivcs[i], create_component("xbar_ivc", inst_name));
22 ivcs[i].build();
23
24 ivcs[i].assign_vi(xbar_testbench.xbar_duv_xmti,
25 xbar_testbench.xbar_duv_rcvi);
26
27 $sformat(inst_name, "xbar_sb[%0d]", i);
28 set_config_int(inst_name, "port_num", i);
29 $cast(sbs[i], create_component("xbar_scoreboard1", inst_name));
30 sbs[i].build();
31 end
32 for (int i=0; i<num_ports; i++) begin
33 ivcs[i].rcv_agent.monitor.pkt_bdcst_port.connect(sbs[i].rcv_listener_imp);
34 for (int j=0; j<4; j++)
35 ivcs[i].xmt_agent.monitor.pkt_bdcst_port.connect(sbs[j].xmt_listener_imp);
36 end
37 endfunction: build
38 endclass: xbar_sve
This program defines class xbar_sve derived from class ovm_env (line 1). This component
contains field num_ports (line 3), which controls the number of ports supported by this
environment. Dynamic arrays ivcs and sbs (lines 5, 6) hold objects corresponding to
instances of xbar_ivc and xbar_scoreboard components, one for each port.
The virtual interfaces in the receive and transmit agents of each ivcs element are initialized by
calling function assign_vi() of the interface verification component with the appropriate interface
block as argument (line 24).
Each scoreboard contains two listener ports that should be connected to the broadcast
analysis ports of each ivcs component according to the diagram in figure 13.3. These connections
are made by calling function connect() of the analysis port objects (lines 32-36). The
implementation of the scoreboard components is shown in section 13.10.
This example shows the implementation of testcase xbar_test_default, which only creates an
instance of xbar_sve, the top level component of the verification environment, without modifying
any of its default settings. Task run() of this test provides a simple stop mechanism
where the test is terminated after a fixed length of time (lines 16-19). Section 7.4.2 describes
the infrastructure provided by the OVM class library for supporting more elaborate requirements
for ending the run phase of the simulation.
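A minimal form of xbar_test_default consistent with this description might look as follows. This is a sketch: the ovm_test base class, the fixed delay value, and the global_stop_request() call are assumptions.

```systemverilog
class xbar_test_default extends ovm_test;
  `ovm_component_utils(xbar_test_default)
  xbar_sve sve;

  function new(string name = "xbar_test_default", ovm_component parent = null);
    super.new(name, parent);
  endfunction

  virtual function void build();
    super.build();
    $cast(sve, create_component("xbar_sve", "sve"));
    sve.build();
  endfunction

  // simple stop mechanism: end the run phase after a fixed length of time
  virtual task run();
    #100000;
    global_stop_request();
  endtask
endclass
```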
The following example shows the implementation of a new testcase derived from class
xbar_test_default that creates an environment with passive transmit agents and active receive
agents:
The order in which configuration settings are applied (section 7.2) guarantees that the configuration setting specified at this level overrides any
settings made in the lower layers of the hierarchy.
The use of global function run_test() to start a previously defined testcase is shown in
the next section.
In the above example, the implementation of all classes is first included (line 3). An initial
block is then used to call global function run_test() with the name of a previously defined
testcase. Section 7.4.1 describes the infrastructure provided by the OVM class library for
selecting a test and starting the run phase of the simulation.
The implementation of xbar_testbench is shown below:
[Figure: xbar_testbench top level, containing an initial block that calls run_test()]
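The testbench top can be sketched as follows. The include file name is an assumption; the interface instance names match those referenced by assign_vi() in program 13.12.

```systemverilog
module xbar_testbench();
  `include "xbar_classes.sv"   // all class definitions (assumed file name)

  xbar_duv_xmt_if xbar_duv_xmti();   // interface instances referenced by
  xbar_duv_rcv_if xbar_duv_rcvi();   // assign_vi() in program 13.12

  // select and start a previously defined testcase by name
  initial run_test("xbar_test_default");
endmodule
```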
Program 13.17: XBar receive monitor run phase (included in program 13.6)
1 task run();
2 forever begin
3 // wait for DUV to indicate output is valid
4 @(posedge phy_if.clk iff phy_if.outv[port_num] === 1);
5 rcv_pkt = new();
6 phy_if.get_data_addr(port_num, rcv_pkt.data, rcv_pkt.src_addr);
7 rcv_pkt.dest_addr = port_num;
8
9 // allow function peek() (program 13.6) to complete when valid packet appears
10 -> duv_out_is_valid_event;
11
12 // wait for outside agent to indicate packet was read
13 if (phy_if.outr[port_num] !== 1'b1)
14 @(posedge phy_if.outr[port_num]);
15 @(negedge phy_if.clk);
16
17 // activate coverage group cov_packet
18 -> cov_pkt_collected;
19
20 // broadcast packet after DUV receives acknowledgement that it was accepted
21 pkt_bdcst_port.write(rcv_pkt);
22 end
23 endtask: run
This implementation emits event duv_out_is_valid_event when the DUV places a new
packet on its transmit port. Emitting this event allows task peek() of the monitor (program
13.6) to complete, therefore allowing all components blocked on this task to synchronize to
this condition detected by the monitor. Note that the monitor is unaware of components, if any,
that are blocked and waiting for this condition to be detected.
Event cov_pkt_collected is emitted (line 18) after the monitor detects that the DUV has
received acknowledgement that the packet placed by the DUV on its transmit interface was
accepted by the outside agent (i.e., the driver in the receive agent in active mode, and a
design component in passive mode), thereby activating coverage group cov_packet. After
emitting this event, the monitor uses analysis port pkt_bdcst_port to broadcast the packet it
collected from the DUV transmit port (line 21). The monitor is unaware of components, if
any, that use the packets it broadcasts.
The implementation of the transmit agent monitor is similar to the one for the receive
agent monitor.
23
24 task collect_packet(int port, xbar_packet pkt);
25 assert(phy_if.outr[port] == 1'b0);
26 @(posedge phy_if.clk iff phy_if.outv[port] === 1'b1);
27 repeat (pkt.wait_cycle) @(posedge phy_if.clk);
28 phy_if.outr[port] <= 1'b1;
29 pkt.dest_addr = port;
30 phy_if.get_data_addr(port, pkt.data, pkt.src_addr);
31 @(negedge phy_if.clk);
32 phy_if.outr[port] <= 1'b0;
33 endtask: collect_packet
34
35 task xbar_rcv_driver::reset_signals();
36 forever @(negedge phy_if.reset) phy_if.outr = 1'b0;
37 endtask: reset_signals
An important aspect of the above implementation is the mechanism for this driver to
receive packets from the receive sequencer. The driver uses field wait_cycle of the packet it
receives from the sequencer to decide how many cycles to wait before collecting the next
packet from the DUV transmit port (line 27). The interaction between the driver and the
sequencer is shown in function get_packet_from_sequencer() (lines 8-12). The driver first calls
task get_next_item() of its sequence item interface connector seq_item_prod_if (line 10). Once
the driver has completed collecting the next packet from the DUV transmit port, it informs the
sequencer that it has completed processing the previous item by calling function item_done()
of seq_item_prod_if (line 20). Sequence item interface connector seq_item_prod_if is linked
with the sequencer when building the receive agent component (program 13.10 line 42).
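The handshake described above can be sketched as follows. Only the interface calls and the role of wait_cycle come from the text; the loop structure is an assumption.

```systemverilog
task get_packet_from_sequencer(output xbar_packet pkt);
  ovm_sequence_item item;
  seq_item_prod_if.get_next_item(item);  // blocks until an item is available
  $cast(pkt, item);
endtask

task run();
  xbar_packet pkt;
  forever begin
    get_packet_from_sequencer(pkt);
    collect_packet(port_num, pkt);    // wait_cycle delays acceptance (line 27)
    seq_item_prod_if.item_done();     // release the sequencer for the next item
  end
endtask
```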
13.9 Sequencer
The implementation of sequencers, their interaction with the driver, and their associated features
are described in detail in chapter 14. However, a simple example is shown in this section
of how to override the default behavior of the receive agent sequencer.
This implementation is shown below:
Program 13.19: Overriding default behavior for receive sequencer
1 class xbar_rcv_seq_idle extends ovm_sequence;
2 `ovm_sequence_utils(xbar_rcv_seq_idle, xbar_rcv_sequencer)
3
4 function new(string name="", ovm_object parent=null);
5 super.new(name);
6 endfunction
7
8 virtual task body();
9 `message(OVM_INFO, ("idle rcv sequence"))
10 endtask
11 endclass: xbar_rcv_seq_idle
12
13 class xbar_test_idle_rcv extends xbar_test_default;
14 `ovm_component_utils(xbar_test_idle_rcv)
15
16 function new(string name = "xbar_base_test", ovm_component parent=null);
17 super.new(name, parent);
18 endfunction: new
19
19
In this example, a new sequence xbar_rcv_seq_idle is first defined where the body of the
sequence (lines 8-10) is defined so that no sequence item is generated. A new testcase
xbar_test_idle_rcv is created by deriving from xbar_test_default (line 13). The default sequence
started by sequencers in receive agents is then changed by using function set_config_string() to
set field default_sequence of any sequencer whose instance name matches string
"*.xbar_rcv_agent0.rcv_sqnsr" to xbar_rcv_seq_idle (lines 21-22). The build() method of the parent
class is then called (line 23), which creates the verification environment as described in
xbar_test_default (program 13.13). Testcase xbar_test_idle_rcv can then be run using global
function run_test("xbar_test_idle_rcv") (program 13.15 line 7).
13.10 Scoreboarding
The xbar scoreboard, as described in figure 13.3, is defined to have two analysis imp connec-
tor objects for listening to packets broadcast by the transmit and receive monitors. The com-
plication with requiring two analysis imp connector objects in the same class object is that a
single class declaration cannot contain two standard analysis imp objects, since a class decla-
ration can contain only one function write(), and each standard analysis imp connector object
requires a dedicated function write() (section 9.5).
Approaches for implementing this behavior include:
• Defining new analysis imp types thereby allowing two analysis imp objects to coexist
in the same component
• Using a subscriber component as a subcomponent of the scoreboard, that listens on
each of the incoming analysis interfaces of the scoreboard
These approaches are described in the following sections.
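For the first approach, the OVM class library provides the `ovm_analysis_imp_decl macro, which defines a new analysis imp type whose write function carries a user-chosen suffix, allowing multiple imp connectors to coexist in one class. The sketch below is illustrative only; the suffixes and function bodies are assumptions, not the book's implementation:

```systemverilog
`ovm_analysis_imp_decl(_xmt)  // defines ovm_analysis_imp_xmt, which calls write_xmt()
`ovm_analysis_imp_decl(_rcv)  // defines ovm_analysis_imp_rcv, which calls write_rcv()

class xbar_scoreboard extends ovm_scoreboard;
  // two analysis imp connectors can now coexist, each with its own write function
  ovm_analysis_imp_xmt #(xbar_packet, xbar_scoreboard) xmt_listener_imp;
  ovm_analysis_imp_rcv #(xbar_packet, xbar_scoreboard) rcv_listener_imp;

  function new(string name, ovm_component parent);
    super.new(name, parent);
    xmt_listener_imp = new("xmt_listener_imp", this);
    rcv_listener_imp = new("rcv_listener_imp", this);
  endfunction

  virtual function void write_xmt(xbar_packet pkt);
    // add an expected packet to the scoreboard queue
  endfunction

  virtual function void write_rcv(xbar_packet pkt);
    // match a collected packet against the scoreboard queue
  endfunction
endclass
```

The remainder of this section develops the second, subscriber-based approach.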
Class ovm_subscriber is derived from ovm_component, and contains a predefined analysis
imp object. In this approach, two components of type ovm_subscriber are placed inside class
xbar_scoreboard. Each ovm_subscriber object contains a reference to its parent scoreboard,
and function write() in each ovm_subscriber object can be redefined to perform the required
operation on the scoreboard in the parent component. Since a single class object can contain
multiple export connector objects, the analysis imp connector in each ovm_subscriber object
can then be routed to components outside xbar_scoreboard through analysis export
connectors placed inside the xbar_scoreboard object.
[Figure 13.5: XBar scoreboard internal architecture, with subscriber components xbar_xmt_listener and xbar_rcv_listener, a packet queue, and analysis export connectors xmt_listener_imp and rcv_listener_imp]
Figure 13.5 shows the internal architecture of the XBar scoreboard component. The
implementation of this scoreboard contains two analysis export connectors rcv_listener_imp
and xmt_listener_imp of type ovm_analysis_export. These connectors are named as imp
objects even though they are in fact export objects, because they appear as analysis imp
connectors to components outside the scoreboard component. These connector objects are
connected to the analysis imp objects inside the ovm_subscriber components. Function
write() of block xbar_xmt_listener, derived from ovm_subscriber, is defined to place the
incoming packet in the scoreboard queue if the packet destination address matches the port
number for this scoreboard. Function write() of component xbar_rcv_listener, also derived
from ovm_subscriber, is defined to match the incoming packet against the contents of the
scoreboard queue.
The implementation of the xbar_xmt_listener component is shown in the following:
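The listing itself is not reproduced in this excerpt; the following sketch shows its general shape, assuming the scoreboard exposes a queue and a port number (field names sb, port_num, and pkt_queue are assumptions):

```systemverilog
class xbar_xmt_listener extends ovm_subscriber #(xbar_packet);
  xbar_scoreboard sb;  // reference to the parent scoreboard (assumed field name)

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  // place the incoming packet in the scoreboard queue if its destination
  // address matches the port number for this scoreboard
  virtual function void write(xbar_packet t);
    if (t.dest_addr == sb.port_num)
      sb.pkt_queue.push_back(t);
  endfunction
endclass
```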
The OVM class library provides powerful constructs and concepts for modeling verification
scenarios as a sequence of transactions. The concepts and constructs for implementing this
model of a verification scenario are described in detail in chapter 8.
Chapter 13 introduced the XBar crossbar switch as a sample design and showed the
implementation of its verification environment containing only interface verification compo-
nents. This chapter introduces the XBar communication protocol and describes in detail the
implementation of a complete verification environment that uses transaction sequences gen-
erated across multiple layers of interface, module, and system verification components to
verify the features of this communication protocol. Techniques and features of generating
sequences using the OVM class library are described through the presentation of this sample
environment.
14.1 XBar Communication Protocol
The XBar communication protocol is based on the single-cycle packet routed through the
XBar crossbar switch (section 12.2).
The following data encapsulations are used in constructing the XBar communication
protocol:
• The XBar Packet is the fundamental item of communication, consisting of:
• Source address (src_addr)
• Destination address (dest_addr)
• Payload (data)
• The XBar Frame defines an abstraction over the XBar packet where values of field
data are mapped into the following object kinds: SOF, BEACON_REQ, BEACON_REPLY,
DATA_REQ_A, DATA_REQ_B, DATA_REPLY_A, and DATA_REPLY_B.
• The XBar Transfer is composed of XBar frames and supports the following types of
transfers across XBar ports:
• Beacon request
The XBar communication protocol is defined to facilitate the illustration of the variety
of protocol handling requirements that may exist in real designs, while including as few
low-level details as possible (e.g., the reader may note that the XBar protocol supports only
the transfer of a single bit in each data transfer). The changes required to turn this protocol
into a real-life protocol, however, do not affect the techniques described for the
verification-related handling of this protocol. For example, in a real protocol, the data field size may be
increased arbitrarily to support larger payloads or the packet definition may be redefined to
occur over multiple cycles, allowing for more complex data exchange scenarios. This added
protocol complexity can be handled by the driver component that drives the packet at the
XBar physical interface.
for managing beacon requests and replies and for abstracting packets into transfers,
and an application layer handling the generation of data request transfers and
responding to data request transfers with data reply transfers.
XBar frames generated by the verification environment and arriving at a DUV receive
port must follow the ordering requirements described for the XBar protocol (e.g., two frames
belonging to a transfer must appear before a frame belonging to a new transfer appears). On
the other hand, XBar frames arriving at a DUV transmit port and collected by the verification
environment may come from any of the other DUV ports. As such, XBar frames arriving at a
DUV transmit port may appear in any order (e.g., two SOF frames, coming from two separate
DUV ports, appearing back to back). This behavior affects the monitoring requirements at
the transmit and receive ports of the XBar design.
The implementation of class xbar_frame is shown in the following program. An XBar frame
is used to provide protocol-related meaning to the fields of an XBar packet. This class, therefore,
is derived from class xbar_packet:
Class xbar_frame includes logical fields kind and payload that provide a view of this
object as an XBar frame. This class also includes fields data, src_addr, dest_addr, and
wait_cycle since it is derived from class xbar_packet. This class should contain the set of con-
straints necessary for generating random values that are consistent across the set of logical
and physical fields, regardless of which field is constrained. For example, if kind is con-
strained to DATA_REQ_B and payload is constrained to 1, then randomization should set field
data to 4'b1011. Alternatively, if field data is constrained to value 4'b1011, then randomization
should set kind and payload to DATA_REQ_B and 1, respectively (section 11.2.2). Constraints
defined for this class (lines 14-22) implement this behavior. These constraints are used for
initializing this class to the contents of an xbar_packet object. In the constructor for
xbar_frame (lines 24-31), fields src_addr, dest_addr, data, and wait_cycle are initialized from
argument pkt, and then fields kind and payload are assigned by randomizing their value.
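The bidirectional constraint style implied above can be sketched as follows. The data encoding shown (data[0] carrying the payload, data[3:1] selecting the kind, so that DATA_REQ_B with payload 1 maps to 4'b1011) is an assumption consistent with the example in the text, not the book's actual encoding:

```systemverilog
typedef enum {SOF, BEACON_REQ, BEACON_REPLY, DATA_REQ_A,
              DATA_REQ_B, DATA_REPLY_A, DATA_REPLY_B} xbar_frame_kind_e;

class xbar_frame extends xbar_packet;
  rand xbar_frame_kind_e kind;
  rand bit payload;

  // Bidirectional mapping between physical field data and logical fields
  // kind/payload: constraining either side determines the other.
  constraint logical_physical_c {
    payload == data[0];
    (kind == DATA_REQ_B) <-> (data[3:1] == 3'b101);
    // (similar terms would cover the remaining kinds)
  }
endclass
```

Because SystemVerilog constraints are declarative rather than directional, the same constraint block serves both randomization directions described in the text.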
Given the simple description of the XBar protocol, no special class is defined for an
XBar transfer. In this implementation, class xbar_frame and its field kind are used to indicate
the type of transfer that is to be sent.
The XBar communication protocol, as outlined in section 14.1, reflects functionality beyond
that described for the XBar crossbar switch introduced in section 12.2. The XBar switch has
the simple function of routing single-cycle packets through XBar ports, subject to handshak-
ing at both the source and destination ports. The XBar communication protocol, however,
reflects the existence of additional hardware layers that depend or act on beacon and data
transfers.
[Figure 14.1: Evolution of the XBar design: XBar block, XBar module, XBar system, and application layer]
Figure 14.1 shows a possible evolution of the XBar design from a block to a system that
supports the XBar communication protocol. The XBar block contains only the XBar crossbar
switch. The XBar module is created by adding a component that requires traffic passing through
the XBar switch to follow transfer ordering requirements, and beacon request and reply
transfers to be appropriately sent by all ports. The XBar system is created by adding a component
that requires data reply transfers to be sent back from ports that receive data request trans-
fers. It should be noted that the integration flow for moving from an XBar block to an XBar
system, as described here, is chosen specifically to motivate the need for the different sequence
generation techniques described in this chapter.
Figure 14.2 XBar System Verification Environment Architecture
The verification environment architecture for the XBar system level design is shown in
figure 14.2. This architecture reflects the multi-layer implementation of the XBar system. In
full deployment of this environment during system level verification, each verification com-
ponent performs the following activities:
• XBar Interface VC:
• Implements the abstraction of xbar_packet for interacting with the XBar design
through XBar packets.
• During block level verification, randomly generates XBar packets to be trans-
mitted and generates random packets indicating the latency with which packets
should be collected from DUV ports.
• During module and system level verification, acts as the driver of traffic gener-
ated by the module and system VC components.
• Has the necessary scoreboarding structure in place to check for correct transmis-
sion of packets across the XBar switch.
• Can remain in place in passive mode as the real application layer is attached to
the system and continues to scoreboard for packets traveling across the XBar
design.
• XBar Module VC
• Is not present during block level verification.
• Implements the abstraction of XBar transfers composed of XBar frames, and
interacts with interface VC components using the abstraction of XBar frames.
• During module level verification, generates random beacon request transfers,
replies to incoming beacon request transfers, and generates random data request
and reply transfers to any of the interface VC components. Data request and
response frame traffic generated at this level does not comply with the
request/reply requirements for data transfers.
• During system level verification, continues to generate beacon request and bea-
con reply transfers, making beacon transfer handling invisible to system VC
These techniques will be illustrated in the following sections where the implementation
for each layer of the verification environment is described.
14.3 Flat Sequences
The interaction between a flat sequence and a sequencer is described in section 8.8.1. Flat
sequences produce only sequence items which are then passed to a driver through synchro-
nized interaction with a sequencer. Figure 14.3 shows the structural relationship between
these components. As shown, a sequencer may contain one or more sequences that interact
with the sequencer to pass their generated items to the driver. Sequences may require status
information from the environment in order to generate their next item (see reactive
sequences in section 14.5). The architecture is defined so that all such information is pro-
vided by only the monitor. A sequencer can interact with multiple sequences, in which case
the next available item from a relevant sequence is selected by the sequencer. In addition,
sequences are defined to allow the sequencer to choose from the next relevant verification
item available (see conditional sequences in section 14.10).
[Figure 14.3: Sequences 1-3 interacting with the sequencer (via the sequencer arbiter), which passes items to the driver; the monitor provides environment status]
The simplest form of a sequence consists of a non-reactive flat sequence being passed to
the driver through the pull-mode interaction. A non-reactive sequence does not need envi-
ronment status information to generate the next item. The generation of this simple form of
sequence is described in this section. The discussion in this section is used to introduce more
advanced sequence types in the following sections.
The first step in creating a sequencer is to define a default sequence item. This sequence
item is used in generating the default behavior for a sequencer as described later in this sec-
tion. The following program shows the declaration for a class object that will be used as the
default sequence item for the transmit direction of the sequencer for the XBar interface
verification environment:
4 int port_num;
5 constraint valid_src_addr {src_addr == port_num;}
6 constraint valid_dest_addr {dest_addr != src_addr;}
7
8 function new(string name="xbar_xmt_packet");
9 super.new(name);
10 endfunction: new
11
12 function void pre_randomize();
13 xbar_xmt_sequencer sqnsr;
14 $cast(sqnsr, get_sequencer());
15 if (sqnsr != null) port_num = sqnsr.port_num;
16 endfunction
17 endclass: xbar_xmt_packet
Program 14.3: Setting default sequence item for XBar transmit sequencer
class xbar_xmt_sequencer extends ovm_sequencer;
In this example, class xbar_xmt_packet is defined as the default sequence item for sequencer
xbar_xmt_sequencer (line 4). The full implementation of this sequencer is similar to that of
xbar_rcv_sequencer shown in program 13.7.
This interaction consists of two main steps: 1) driver requesting and receiving the
sequence item from the sequencer (line 5), and 2) driver informing the sequencer that the
item has been processed so that the sequencer can complete the generation steps for this
sequence item (line 8). This example shows a pull-mode implementation where the driver
requests the next item from the sequencer when it is ready to process that item, and blocks if
necessary, until the sequencer can produce an item. Instantiation and connection of sequence
item producer interface seq_item_prod_if is similar to that for the receive driver (programs
13.9 and 13.10).
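The two steps can be sketched as a pull-mode driver loop; the task body and helper name below are assumptions based on the surrounding description, not the book's listing:

```systemverilog
task xbar_xmt_driver::run();
  xbar_xmt_packet pkt;
  forever begin
    // step 1: block until the sequencer produces the next item
    seq_item_prod_if.get_next_item(pkt);
    drive_packet(pkt);  // hypothetical helper that drives the DUV interface
    // step 2: tell the sequencer the item has been processed, so it can
    // complete the generation steps for this sequence item
    seq_item_prod_if.item_done();
  end
endtask
```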
The OVM class library provides the predefined sequence type ovm_req_rsp_sequence to
automate the handling of separate request and response item types.
for this default behavior of the sequencer can also be added to the implementation of
xbar_xmt_packet in a similar fashion.
The previous section showed the addition of a new sequence to the sequence library and
its effect. The following example shows how the values of fields count and
default_sequence can be customized as part of building the verification environment:
The above program defines new test xbar_test_modified_behavior, which can be executed by
passing its name to the global function run_test() (section 13.6.6), and follows the procedure
described for defining new testcases for the XBar verification environment (section 13.6.5).
In this implementation, configuration function set_config_int() is first used to set field count of
all transmit sequencers to value 0 (line 9). Functions set_config_int() and set_config_string() are
then used to redefine the values for fields count and default_sequence of the sequencer at port 0
(lines 10-12), thereby overriding the setting for port 0 made on line 9. These settings
take effect while the environment hierarchy is being built by calling function build() (line 11).
With these configuration settings, the running of this testcase results in one execution of
sequence xbar_xmt_seq_send_3_packets by the sequencer at port 0, and no sequences in the
sequencer of any of the other ports.
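The configuration calls described above might be sketched as follows; the instance path strings and the sequence name are assumptions, since the actual program is referenced but not reproduced here:

```systemverilog
class xbar_test_modified_behavior extends xbar_test_default;
  `ovm_component_utils(xbar_test_modified_behavior)

  function new(string name = "xbar_base_test", ovm_component parent = null);
    super.new(name, parent);
  endfunction

  virtual function void build();
    // set count to 0 for all transmit sequencers ...
    set_config_int("*.xmt_sqnsr", "count", 0);
    // ... then override count and default_sequence for the sequencer at port 0
    set_config_int("*.xbar_xmt_agent0.xmt_sqnsr", "count", 1);
    set_config_string("*.xbar_xmt_agent0.xmt_sqnsr", "default_sequence",
                      "xbar_xmt_seq_send_3_packets");
    super.build();  // settings take effect as the hierarchy is built
  endfunction
endclass
```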
Most behaviors can be implemented by adding a new sequence to the sequence library,
redefining the default sequence, and changing field count of the sequencer. For behaviors that
cannot be implemented through these steps, the default behavior of a sequencer can be mod-
ified by redefining its predefined task run(). The following example shows this approach:
24 endtask
25 endclass: xbar_xmt_seq_send_special
19). It then executes sequence item xmt_item a total of ten times without providing any con-
straints.
transfer, producing an error condition at a DUV port upon a specific value observed at the
DUV port, etc. A sequence that requires information from multiple design ports should be
placed in module or system VCs, where conditions at all DUV interfaces can be tracked
through the monitors in interface VCs.
In deciding the best implementation for a reactive sequence, the following issues must
be considered:
• A sequence is not aware of the driver and the driver is not aware of the sequence.
They interact only through the sequencer. As such, a sequence cannot read from a
driver and a driver cannot write to a sequence.
• A monitor is not aware of any of the other components in the environment. A monitor
provides status about the current state of the environment. Other components in the
environment subscribe to this status information as needed. This behavior is neces-
sary to allow a monitor to be used in passive mode where the driver and the
sequencer are removed.
Given this independence requirement between verification environment components,
the following approach is used for implementing a reactive sequence:
• Identify all environment status information needed by a reactive sequence to take the
next step in its flow.
• Implement the monitor to provide this information to the sequencer, either through
broadcasting on an analysis port or through transaction ports.
• Add to the sequencer the necessary fields that allow access to this status information
in the monitor. By using this approach for reading status information produced by the
monitor, it is possible to remove the sequencer in passive mode without needing to
change the monitor implementation.
• In a reactive sequence, use transaction interfaces made available in its sequencer to
detect and react to relevant environment conditions.
The approach outlined above allows the component independence requirement to be
satisfied while providing a reactive sequence with the information it requires. The following
example shows the implementation of a reactive sequence that generates wait-cycle values
based on the source address of the incoming packet on the receive channel.
The definition for xbar_rcv_packet is not shown, but is similar to the one for
xbar_xmt_packet (program 14.2). The declaration of blocking peek port connector object
duv_out_valid_port in sequencer xbar_rcv_sequencer, and its connection to the blocking peek
imp connector object in xbar_rcv_monitor, is shown in section 13.6. The implementation of
sequence xbar_rcv_seq_rand_wait_cycle contains configuration field seq_wait_cycle (lines 9-10).
Execution of this sequence starts by calling function peek() of transaction interface
duv_out_valid_port (line 14). This function call returns when the monitor detects that the
DUV has placed a new packet on its transmit port. The sequence then generates an item with
a wait cycle value that depends on the source address of the incoming packet (lines 16-21).
Note that in this implementation, no direct interaction with the receive slave driver takes
place. In this flow, the receive slave driver waits to receive the first item from its sequencer,
providing information on how many cycles to wait before collecting an incoming packet
from the port, while the monitor is independently looking for the arrival of the first packet.
Once this packet arrives, the monitor allows function peek() to complete, allowing the
sequence to proceed, thereby generating the next item. This item is then passed to the receive
slave driver, which will in turn use this item to decide how long to wait before collecting the
incoming packet.
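The sequence body described above might be sketched as follows; the exact wait-cycle constraint is an assumption, since the program itself is not reproduced in this excerpt:

```systemverilog
class xbar_rcv_seq_rand_wait_cycle extends ovm_sequence;
  `ovm_sequence_utils(xbar_rcv_seq_rand_wait_cycle, xbar_rcv_sequencer)
  int seq_wait_cycle;  // configuration field

  function new(string name = "", ovm_object parent = null);
    super.new(name);
  endfunction

  virtual task body();
    xbar_packet duv_pkt;
    xbar_rcv_packet item;
    forever begin
      // returns when the monitor sees a new packet on the DUV transmit port
      p_sequencer.duv_out_valid_port.peek(duv_pkt);
      // wait-cycle value depends on the incoming packet's source address
      `ovm_do_with(item, {wait_cycle == seq_wait_cycle + duv_pkt.src_addr;})
    end
  endtask
endclass
```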
The mechanism shown in the above example can be used multiple times in a row to
generate a complex sequence of handshaking with a DUV port. The sequence shown in the
above example must replace the default behavior for the xbar_rcv_sequencer. The mecha-
nisms described in section 14.4.3 can be used to carry out this task.
[Figure 14.4: Module VC (xbar_mvc) virtual sequencer and its connections to the interface VC sequencers and the DUV]
A virtual sequencer allows a single sequence to interact with multiple sequencers and
hence interact with multiple drivers. A virtual sequence can only execute subsequences,
either in the local virtual sequencer or in a downstream sequencer. This means that a virtual
sequence can execute a subsequence on any sequencer in the verification environment as
long as that subsequence is in the sequence library of the target sequencer. The sequencer
belonging to the XBar module VC is implemented using a virtual sequencer so that each
instance of the module level sequencer can interact with transmit sequencers in all of the
interface VCs. Note that the virtual sequencer for the module VC is placed in the top level
environment so that its implementation can be changed, if needed, without modifying the
implementation of the module VC.
The virtual sequencer of the module VC is implemented through the following steps:
• Implementing the virtual sequencer (mxbar_sequencer in figure 14.4)
• For each virtual sequencer, adding one sequence consumer interface (section 8.9) for
each downstream sequencer, and connecting it with the sequence producer interface
in the downstream sequencer (sci connectors in figure 14.4)
• Implementing sequences in the sequence library of downstream sequencers that will be
executed by local virtual sequences (sequence seq3 in figure 14.4)
• Implementing virtual sequences that execute local sequences and sequences on down-
stream sequencers through appropriately connected sequence consumer interfaces
(sequence VirtSeq1 in figure 14.4)
These steps are described in the following sections.
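As a preview of the last two steps, a virtual sequence can execute a subsequence on a downstream sequencer through its connected sequence consumer interface, using the OVM `ovm_do_seq macro. In the sketch below, all names other than those in figure 14.4 (including the interface name "sci0") are assumptions:

```systemverilog
class mxbar_virt_seq extends ovm_sequence;
  `ovm_sequence_utils(mxbar_virt_seq, mxbar_sequencer)
  xbar_xmt_seq_send_frame seq3;  // assumed to be in the downstream sequence library

  virtual task body();
    // execute seq3 on the downstream sequencer reached through the
    // sequence consumer interface named "sci0"
    `ovm_do_seq(seq3, p_sequencer.get_seq_cons_if("sci0"))
  endtask
endclass
```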
Building the hierarchy for the XBar module VC is similar to that defined for the XBar
interface VC and is not shown in this text. The next section shows how each virtual
sequencer is connected to its downstream sequencers.
The top level environment mxbar_sve is derived from class xbar_sve (line 1). This envi-
ronment instantiates a dynamic array of xbar_mvc type objects (line 4). Task build() of
this class is implemented to first call task build() of its parent class (line 12), thereby building
the environment shown in figure 13.3. It then creates objects in array mvcs (line 14) and
builds the module VC at each index (lines 15-21).
In the next part, connections are made between each module VC and downstream inter-
face VCs. For the module VC at port i, packet listeners in receive and transmit monitors
(implemented as analysis imp objects) are connected with the corresponding packet broad-
casters (implemented as analysis port objects) of the interface VC at port i (lines 24, 25).
Next, a loop is used to create four sequence consumer interface objects for the virtual
sequencer at port i (line 28), and then to connect each with the sequence producer interface
in its corresponding downstream sequencer (line 29). Note that each new sequence consumer
interface is created by calling function get_seq_cons_if() (line 28), since this function creates
an object if one doesn't exist.
Note that each virtual sequencer is connected to four different sequence producer
interfaces in the downstream sequencers, and each sequence producer interface in the
downstream sequencers receives connections from four different virtual sequencers. This
structure shows an example of virtual sequencers executing sequences on different down-
stream sequencers, and of downstream sequencers allowing multiple virtual sequencers to exe-
cute sequences on them.
[Figure: module virtual sequencers issuing frames (e.g., Data, SOF) to the interface VC sequencers]
Program 14.16: Default sequence for XBar module VC
1 class mxbar_seq_main extends ovm_sequence;
2 `ovm_sequence_utils(mxbar_seq_main, mxbar_sequencer)
3
4 function new(string name="", ovm_object parent=null);
5 super.new(name);
6 endfunction
7
8 virtual task body();
9 mxbar_seq_send_beacon_request send_beacon_req_seq;
10 mxbar_seq_send_beacon_reply send_beacon_reply_seq;
11 mxbar_seq_send_data_transfer send_data_seq;
12
13 fork
14 forever
15 `ovm_do(send_beacon_req_seq)
16 forever
17 `ovm_do(send_beacon_reply_seq)
18 repeat(p_sequencer.count)
19 `ovm_do(send_data_seq)
20 join
21 endtask
22 endclass: mxbar_seq_main
sequence of a sequencer is not started at all if field count of the sequencer is set to zero. As
such, if field count is used as a controlling field in a newly defined sequence (line 18), care
should be taken so that it is never set to zero. Sequences started in all these threads are
treated as subsequences and have this sequence as their parent sequence. This distinction is
important because of how it affects the behavior of the grabbing sequence. The implementation
of grabbing sequence mxbar_seq_send_data_transfer is described next.
A data transfer requires two frames to be transmitted. The XBar communication proto-
col indicates that these two frames can be interrupted only by beacon request or reply frames.
The implementation of the sequence implementing this behavior is shown next:
Functions grab() and ungrab() of the sequence consumer interface are used to grab and
ungrab the downstream sequencer before sequence execution starts and after it ends (lines 17
and 22).
An important aspect of this example is that the grabbing is performed on the parent
sequence of this sequence (i.e., mxbar_seq_main in program 14.16). The parent sequence is
obtained by calling the predefined function get_parent_seq() of ovm_sequence (lines 17 and 22).
The reason is that by grabbing the parent sequence, any subsequence started by the parent
sequence can still access the downstream sequencer that is being grabbed by this sequence,
therefore allowing beacon request and reply frames started by the root sequence to interleave
the two frames generated by this sequence. Note that this interleaving is allowed only if both the
data transfer and beacon request/reply transfers need to use the same downstream sequencer to
execute. In other words, a local beacon request transfer cannot interleave the frames being
sent to the local downstream sequencer by the virtual sequencer in another module VC.
[Figure: layered sequence architecture (VirtSeq2), showing connections from the interface VC monitors and to the interface VC sequencers]
• Implementing the module virtual sequence that receives sequence items from the
sequencer in the system VC
• Modifying the default behavior of the system VC sequencer by implementing new
sequences in its sequence library
These implementation steps are described in the remainder of this section.
The first step is to define a default sequence item type for the system VC sequencer. In
this implementation, class xbar_frame (program 14.1) can be used to model the type of trans-
fers that the system VC sequencer may need to generate. Given, however, that the system VC is
aware only of data transfers, a new class must be defined so that the default behavior of the
system VC sequencer produces only data transfer items. This implementation is shown in the
following program:
Class xbar_data_frame is derived from class xbar_frame and only adds a constraint that
limits the kind of transfers that can be represented by this object. Note that the definition of
this new class is necessary for the default behavior of the system VC sequencer to be useful.
Another option is to use class xbar_frame as the default sequence item for the system VC
sequencer, but in that case, the default sequence of this sequencer must be modified to run a
sequence that executes an item whose kind is limited to data transfers.
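A sketch of this class follows; the constructor signature and macro usage are assumptions, and only the kind-limiting constraint is implied by the text:

```systemverilog
class xbar_data_frame extends xbar_frame;
  `ovm_object_utils(xbar_data_frame)

  // limit randomization to the data transfer kinds only
  constraint data_kinds_only {
    kind inside {DATA_REQ_A, DATA_REQ_B, DATA_REPLY_A, DATA_REPLY_B};
  }

  function new(string name = "xbar_data_frame");
    super.new(name);
  endfunction
endclass
```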
The implementation of the system VC sequencer is the same as that of the interface VC
sequencer (program 13.7), with the difference that sequence item xbar_data_frame is used as
the default sequence item type for the system VC sequencer. With this implementation, the
system VC sequencer will start generating random data transfer frames at the start of the run
phase of the verification.
The implementation of the module VC sequencer was shown to include sequence item pro-
ducer interface sve_pkt_prod_if (program 14.11, line 6). The implementation of sxbar_sve, the XBar
environment top level that includes system VC component xbar_svc, is similar to the imple-
mentation of mxbar_sve (program 14.12). In this implementation, class sxbar_sve is derived
from class mxbar_sve and adds the system VC components to the environment while making
the necessary connections between system VC and module VC components. The connection
between the sequence item producer interface sve_pkt_prod_if of the module VC sequencer
and the system VC sequencer is carried out in this step and follows the same syntax shown for
connecting XBar receive driver xbar_rcv_driver to XBar receive sequencer
xbar_rcv_sequencer (program 13.10 line 44).
module VC sequencer that uses this interface to receive the next data frame it sends to down-
stream sequencers. The implementation of this sequence is shown in the following program:
(i.e., sequences to send beacon request and reply frames) to have access to this downstream
sequencer. This approach allows beacon request and reply transfers to interleave a data
transfer, but prevents other data transfers from interleaving a data transfer.
One possible configuration for making use of this layered sequence is as follows:
• Implement a sequence in module sequencer that starts three parallel subsequences for
sending beacon request transfers, sending beacon reply transfers, and executing
sequence mxbar_layered_seq_send_data_transfer a given number of times controlled
by field count of the module VC sequencer.
• Configure the system VC sequencer to continuously generate sequence items. The
actual number of data transfers injected into the DUV is controlled by the number of
sequence items that the module virtual sequencer is configured to receive from the
sequencer in system VC. The system VC sequencer blocks if the module VC
sequencer stops accepting the items it is producing.
Generation of data transfer sequences, as described in this section, does not differentiate
between data request and reply transfers. The extension of this environment to comply with
data transfer requirements of the XBar communication protocol is described in the next sec-
tion.
Multiple sequences can pass their generated items to their sequencer. A sequencer picks the
next sequence item in a first-in-first-out (FIFO) order from those sequence items that are rel-
evant to the current verification context. The default implementation is to assume all
sequence items produced by all sequences are relevant. This default behavior, however, must
be changed in order to support sequence generation schemes where not all sequence items
are relevant at all times.
Consider a requirement that the XBar system VC sequencer should reply immediately
to an incoming data request transfer even if an outgoing data request transfer is pending. One
approach to support this requirement is to create two concurrently running sequences in the
XBar system VC sequencer: one for replying to incoming data request transfers, and one for
generating outgoing data request transfers. The required behavior for the sequencer is then to
pick the next sequence item from the sequence generating data reply transfers if an incoming
data request transfer is detected.
ing peek transaction port data_req_collected_port, and add the received data request to the
queue. The definition for the run() method of the sequencer is also modified to start this
method in parallel with the default behavior of the driver (lines 26-31). Given the above
implementation, queue data_req_pkt_queue will contain all unprocessed data request transfers
in the order of their arrival.
The following program shows the implementation of function is_relevant() and task
wait_for_relevant() for sequences that generate and reply to data request transfers:
The above example provides a simple definition for when a sequence is relevant to the
verification context. The same approach can, however, be used to define any relevance criteria
needed by the verification context.
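As a hedged sketch (not the book's program), a reply sequence could override is_relevant() and wait_for_relevant() so that it is relevant only while a request is pending; the sequence name, sequencer type, and queue name data_req_pkt_queue are assumptions based on the surrounding text:

```systemverilog
// Sketch only: relevance is tied to the pending-request queue described
// above. Class and member names are assumptions, not the book's code.
class sxbar_seq_send_data_reply extends ovm_sequence #(xbar_data_transfer);
  `ovm_sequence_utils(sxbar_seq_send_data_reply, sxbar_sequencer)

  // Relevant only while an unprocessed data request is queued.
  virtual function bit is_relevant();
    return (p_sequencer.data_req_pkt_queue.size() > 0);
  endfunction

  // Block until this sequence becomes relevant again.
  virtual task wait_for_relevant();
    wait (p_sequencer.data_req_pkt_queue.size() > 0);
  endtask

  virtual task body();
    // ... build and send a reply for data_req_pkt_queue.pop_front() ...
  endtask
endclass
```

With this override in place, the sequencer skips this sequence in its arbitration until the queue becomes non-empty, which implements the "reply immediately to an incoming request" behavior described above.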
Program 14.22: Implementing sequencer support for objection to end of run phase
class mxbar_sequencer extends ovm_virtual_sequencer;
8     p_sequencer.raise_objection();
9
10    `ovm_do_with(seq_send_frame, {xfr.kind==SOF;
11      xfr.dest_addr==dxfr.dest_addr; xfr.src_addr==dxfr.src_addr;})
12    `ovm_do_with(seq_send_frame, {xfr.kind==dxfr.kind;
13      xfr.dest_addr==dxfr.dest_addr; xfr.src_addr==dxfr.src_addr;})
14    sci.ungrab(get_parent_seq());
15
16    p_sequencer.drop_objection();
17    #1000; // wait for some time, optionally randomize by using a field instead
18  endtask
19 endclass: mxbar_seq_send_data_transfer
This behavior is achieved in the action block of this sequence by calling function
raise_objection() of its sequencer before executing the first sequence item (line 8), and calling
function drop_objection() of its sequencer after executing the second sequence item (line 16).
Depending on the synchronization requirements of a sequence with the end of run
phase, the calls to raise or drop objections can be placed in the predefined callback
methods of a sequence. For example, if the run phase should not end while the action block
of a root sequence is executing, then the calls to raise and drop objections can be placed in
tasks pre_body() and post_body() of that sequence. Figure 8.5 provides a view of the order in
which callback methods are called for root sequences, subsequences, and sequence items,
and can be used as a guide to deciding the best place for a sequence to raise and lower objections
to ending the run phase.
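The pre_body()/post_body() placement can be sketched as follows; this is an illustrative outline, not the book's code, and it assumes the same sequencer-level raise_objection()/drop_objection() API used in program 14.22:

```systemverilog
// Sketch: a root sequence that objects to ending the run phase for the
// entire duration of its action block. Class names are assumptions.
class my_root_seq extends ovm_sequence #(my_item);
  `ovm_sequence_utils(my_root_seq, my_sequencer)

  virtual task pre_body();
    p_sequencer.raise_objection();   // object before body() starts
  endtask

  virtual task body();
    // ... action block: generate and execute sequence items ...
  endtask

  virtual task post_body();
    p_sequencer.drop_objection();    // withdraw objection after body() ends
  endtask
endclass
```

Because pre_body() and post_body() bracket the action block of a root sequence, this placement guarantees the run phase cannot end while the sequence is mid-execution, without cluttering body() itself.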
PART 6
Assertion-Based Verification
CHAPTER 15 Property Specification and Evaluation Engine
A major factor affecting verification productivity is how concisely verification intent can be
described. A concise description makes verification intent easier to specify and also easier to
understand. The need to describe a property, either in the design or the verification environ-
ment, is an inherent part of any verification activity. As such, the ability to write concise, yet
clear descriptions of design properties not only increases verification productivity, but also
improves verification environment reusability by allowing other engineers to better under-
stand the intent behind property descriptions.
The quality of a property specification language is measured in its expressiveness, a
measure that is meant to maximize the following competing priorities:
• The ability to model as complex a description as possible
• The ability to write as concise a description as possible
• The ability to write as readable a description as possible
As in any other natural or computer language, one's ability to reach a good balance
between these competing priorities is developed over time and through practice and experi-
ence.
To address the need for a powerful property specification mechanism, property specifi-
cation languages have been defined, either as independent standards (e.g., Property Specifi-
cation Language [PSL]) or as part of hardware verification languages (e.g., temporal
expressions in the e language). SystemVerilog provides an extensive set of constructs for
writing property specifications.
Assertion-based verification (ABV) is a verification approach that makes extensive use
of property specifications. ABV methodology provides best practices for improving verifica-
tion quality by using assertions. A good understanding of how properties are specified is
essential for taking maximum advantage of the benefits provided by ABV. This chapter
describes in detail how properties are described in SystemVerilog. Assertion-based verification,
where property specification is needed, is described in chapter 16.
A property, in its most abstract form, is a relationship between Boolean conditions across
multiple time steps. The simplest property is a single Boolean condition defined at one point
in time, and the most complex property is a relationship between multiple Boolean condi-
tions within one and across multiple time-steps. As such, a property specification language
should provide language constructs for defining Boolean conditions based on design and ver-
ification environment signals, and for specifying relationships between these Boolean condi-
tions across multiple time-steps.
In addition to these constructs, a powerful property specification language should also
provide facilities for managing the following aspects of property specification:
• Ability to define a property based on Boolean conditions sampled (i.e., evaluated) at
multiple asynchronous (i.e., unrelated) clocks in order to facilitate properties defined
across multiple clock domains.
• Ability to define properties in a hierarchical fashion so that a complex property can
be constructed from simpler properties.
• Ability to write a property based on formal arguments so that a given specification
can be used in multiple places in the environment using different sets of actual sig-
nals replacing the formal arguments.
• Efficient ways to define when a property should hold (i.e., defining property sam-
pling event).
• Efficient ways to define when a property need not hold (i.e., disabling property).
SystemVerilog provides three abstraction layers for defining a property:
• Boolean Expressions
• Sequences
• Properties
Boolean expressions define conditions that are evaluated in zero time. In general, Sys-
temVerilog allows any expression that produces a Boolean result to be used as a Boolean
expression (section 15.2). In addition, SystemVerilog provides system functions (e.g.,
$rose()) to extract relevant Boolean conditions from signals in the environment (section
15.2.5). Sequences are composed of Boolean expressions and sequence operators, and are
used to define a relationship between Boolean expressions across multiple time-steps. Prop-
erties are composed of sequences and property operators, and provide a true or false result
indicating whether the given property was maintained throughout the simulation runtime.
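The three layers can be seen together in a small illustrative sketch (signal names and timing are assumptions, not from the book): a Boolean expression is combined into a sequence, the sequence into a property, and the property is checked with an assert directive.

```systemverilog
// Sketch: the three abstraction layers in one example.
module layers_example(input logic clk, req, gnt, busy);

  // Sequence layer: relates Boolean expressions across time-steps.
  // The Boolean expression (req && !busy) is evaluated in zero time.
  sequence s_req_then_gnt;
    (req && !busy) ##[1:3] gnt;   // gnt within 1 to 3 cycles of the request
  endsequence

  // Property layer: states a rule about the behavior described above.
  property p_req_gets_gnt;
    @(posedge clk) req |-> s_req_then_gnt;
  endproperty

  assert property (p_req_gets_gnt);
endmodule
```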
[Figure: the three abstraction layers. Properties are built from Sequences, which are built from Boolean Expressions.]
Sequences and properties have very different semantic meanings. The relationship
between sequences and properties is the same as the relationship between behaviors and
rules. Behaviors describe a manner of behaving or acting, while rules outline legal behaviors.
For example, a behavior for a reset signal may state that "the cpu reset signal stays active for
five clock cycles" while a rule about that reset signal may state that "the cpu reset signal must
stay active for at least five clock cycles." Rules often use some behaviors as qualifying con-
ditions for legal behaviors. For example, a rule may state that "if system reset becomes
active, then the cpu reset signal must stay active for at least five clock cycles." In this case,
the behavior "system reset becomes active" is used as the qualifying condition for the second
behavior.
Properties provide a mechanism for specifying rules about design behaviors described
with sequences. SystemVerilog sequences and sequence operators provide a powerful mech-
anism for specifying complex design behaviors, while SystemVerilog properties and prop-
erty operators allow legal relations to be specified between behaviors described by
sequences. In the cpu reset example, sequences are used to describe behaviors "system reset
becomes active" and "cpu reset stays active for five clock cycles," and properties are used to
specify the allowed relationship between these two behaviors.
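The cpu reset example above can be sketched in SystemVerilog as follows; the signal names (sys_reset, cpu_reset, clk) and the implication operator used to relate the two behaviors are illustrative assumptions:

```systemverilog
// Sketch (inside a module or interface): behaviors as sequences,
// the rule as a property relating them.
sequence s_sys_reset_active;
  $rose(sys_reset);                 // "system reset becomes active"
endsequence

sequence s_cpu_reset_5;
  cpu_reset [*5];                   // "cpu reset stays active for five clock cycles"
endsequence

property p_cpu_reset_rule;
  @(posedge clk) s_sys_reset_active |-> s_cpu_reset_5;
endproperty

assert property (p_cpu_reset_rule);
```

Note how the sequences only describe behaviors; it is the property, through the implication, that turns one behavior into a qualifying condition and the other into a legal-behavior requirement.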
In addition to this conceptual difference, SystemVerilog sequences and properties have
different evaluation models. For a given simulation trace, a sequence may match multiple
times in one or more evaluation cycles and fails only if it never matches. All the match con-
ditions computed for a sub-sequence are tracked as part of the evaluation and used to guide
the evaluation of the sequence containing that sub-sequence (section 15.3.2). Evaluation of a
property, however, produces only a true or false statement indicating whether the given prop-
erty was satisfied in any possible way. Consider a design having two reset signals. Assume
sequence S states that "either reset1 or reset2 stays active for two to five cycles" and property
P states that "either reset1 or reset2 must stay active for two to five cycles." Property P is sat-
isfied if either reset1 or reset2 stays active for two to five cycles. This means that if one of
these conditions is observed, then the property can be considered as satisfied, and there is no
need to find out if the other condition is satisfied. However, if sequence S is used to describe
a more complex sequence, then all possible ways it can succeed must be found and consid-
ered.
The requirement to keep track of all possible matches for a sequence is illustrated in the
following example:
S1: (V1==10) ##[1:2] (V1==20)
P1: S1
P2: S1 ##1 (V1==30)
Trace Tr: (10,20,20,30)
ing the actual success of this property for trace Tr. Clearly, if sequence S1 fails for any trace
(e.g., (10,50,20,30)), then neither property P1 nor P2 will match for that trace.
Boolean expressions, sequences and properties are described in the following sections
of this chapter.
Examples of valid Boolean expressions with different operand types are shown below:
(A && !B)          // A and B are bits
(cnt == 53)        // cnt is an integer
(arrayA == arrayB) // arrayA and arrayB are 2-dimensional arrays
(A.X == B.Y)       // X and Y are integer members of structs A and B respectively
Boolean expressions, in the context of sequence and property specifications, are defined
by the following aspects:
• Allowed operand types
• Operators
• Sampling events
• Operand value sampling
• Sampled value functions
Sampled value functions are system-provided functions that allow for writing condi-
tions based on changes in signal values. These topics are discussed in the following subsec-
tions.
1. A side effect, in this context, refers to a change in the value of any data object in the environment.
Variables that have a valid operand type (i.e., variable types not in the previous list)
can be used in a Boolean expression. These variables, however, must be static design variables
declared in programs, interfaces, clocking blocks, or tasks.
Function calls that return one of the valid types (i.e., types excluding those listed above)
can be used in a Boolean expression. However, the following semantic restrictions are
imposed on function calls used in a Boolean expression:
• Function arguments cannot be of type ref (const ref is allowed)
• Functions should not contain any static variables
• Functions should not have any side effects (e.g., changing values in other scopes).
These restrictions are required in order to prevent side effects when a property is being
evaluated. Preventing side effects is needed since a property may get evaluated multiple
times in a single time-slot because of race conditions in its sampling event. And any side
effect in evaluation of a Boolean expression used in a property can, therefore, lead to unpre-
dictable behavior.
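A function that satisfies all three restrictions could look like the following sketch; the function name and threshold values are illustrative assumptions:

```systemverilog
// Sketch: a side-effect-free function that is legal inside a Boolean
// expression. It takes no ref arguments (const ref is allowed), holds
// no static state, and writes nothing outside its own scope.
function automatic bit in_range(const ref bit [7:0] v);
  return (v >= 8'd16) && (v < 8'd32);
endfunction

// Used as a Boolean expression inside an assertion:
// assert property (@(posedge clk) in_range(cnt));
```

Because the function is pure in this sense, evaluating it multiple times in one time-slot (as can happen with a racy sampling event) produces the same result each time and perturbs nothing in the environment.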
15.2.2 Operators
All operators that are valid for the operand types described in the previous section can be
used in Boolean expressions. However, operators that produce side effects when used in
expressions are not allowed. The disallowed operators are:
• Assignment operator (=)
• Increment operator ( ++)
• Decrement operator (--)
that simulation time-slot. It is, however, important to note that the current value of the sam-
pling event is used to decide when a Boolean expression is evaluated. This means that if the
sampling event of a property (and hence, Boolean expressions used in that property) makes
multiple transitions in a single simulation time-slot, it is possible to evaluate that Boolean
expression multiple times in a single time-slot.
Figure 15.2 shows how this approach provides consistent behavior even when signals
used for evaluating a property change at the same time as the sampling event of a property. In
this figure, Boolean expression (@(posedge clk) clk) never evaluates to true. The reason is that
the current value of signal clk, which is set in the main scheduling loop of the current
time-slot, is used to detect the sampling event, while the value of clk, sampled in the preponed
region before the clock changed to 1, is used to evaluate the Boolean expression. Boolean
expression (@(posedge clk) !clk) evaluates to true for all sampling events for the same
reason. The values of expression (@(posedge clk) A) at times 2, 3, 6, 7 and 8 show examples of
using variable values immediately before the sampling event to evaluate a property. This
example also shows the usefulness of the sampled values of variables used in Boolean
expressions, where this approach leads to predictable evaluations even when signals change
at the same time as the sampling event. Also, this approach is consistent with cycle-based
verification semantics where signal values are sampled before a clock edge (section 5.2).
[Figure 15.2: waveform of signals clk and A used in the discussion above.]
It is important that the sampling event for all sequences and properties is glitch free
and changes only once in each simulation time-slot. Using the current value for evaluating
sampling events means that a sampling event with race conditions (e.g., signal clk changing
multiple times in the same simulation time-slot) leads to multiple evaluations of a sequence
or property in the same time-slot. In this case, the sequence evaluation engine assumes the
next sampling event at a next simulation time to have arrived when, in fact, simulation time
has not yet advanced. Given that properties are usually described in terms of consecutive
occurrences of sampling events, any such misunderstanding of the actual arrival of the sam-
pling event can lead to errors in correct evaluation of a property.
limited to sequence and property specifications, and can be used anywhere a function return-
ing a Boolean value can be used. These system functions and their syntax are:
$sampled( expr)
$rose(expr)
$fell(expr)
$stable(expr)
$past(expr [, number_of_sampling_events [, gating_expr]])
Figure 15.3 shows examples of values returned for these sampled value functions. Sig-
nal clk is used as the sampling event, and row 2 shows when this sampling event occurs. The
value for signal A (row 3) goes through all transitions (0 to 0, 0 to 1, 1 to 0, 1 to 1) both at the ris-
ing edge of clk and at the falling edge of clk, thereby providing examples for all possible cor-
ner cases. Row 5 shows the value returned by function $sampled(). Note that the value returned
for the first sampling event is x since, initially, all variables are initialized to x. Rows 6, 7, and
8 show transition values computed for signal A. Rows 9, 10, and 12 show values returned by
function $past(). Row 9 shows values returned for the sampled value of signal A delayed
by one sampling event. Row 10 shows values returned for the sampled value of signal A
delayed by two sampling events. Row 12 shows the effect of using a gating expression to
decide which past sampled value of signal A to return. Note that the Boolean expression
shown in row 12 is still evaluated for every occurrence of the sampling event shown in row
2, but the number of sampling events counted into the past is taken from row 11.
Boolean expressions are the building blocks of sequences and properties. The use of
Boolean expressions in writing sequences and properties is described in the following sec-
tions.
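Typical uses of these sampled value functions inside properties can be sketched as follows; all signal names and the specific timing windows are illustrative assumptions:

```systemverilog
// Sketch: sampled value functions in the Boolean layer of properties.
property p_ack_follows_req;
  @(posedge clk) $rose(req) |-> ##[1:4] $rose(ack);      // ack rises 1 to 4 cycles after req rises
endproperty

property p_data_stable_while_busy;
  @(posedge clk) busy |-> $stable(data);                 // data unchanged while busy
endproperty

property p_count_increments;
  @(posedge clk) enable |-> (count == $past(count) + 1); // count advanced since previous cycle
endproperty
```

In each case the function operates on values sampled in the preponed region, so the checks remain predictable even when the signals change in the same time-slot as the clock edge.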
[Figure 15.3: table of sampled value function examples; rows show clk, the @(posedge clk) sampling event, signal A, $rose(A), $past(A), $past(A, 2), the gating signal clk_gate, and $past(A, 1, clk_gate).]

15.3 Sequences

Sequences specify behaviors that span zero or more simulation time-steps. Sequences are
constructed by using the sequence delay operator (##) to specify which Boolean expressions
must evaluate to true across consecutive sampling cycles. Evaluation cycles for a sequence
are defined by the sequence sampling event. For example, sequence (A ##1 B) indicates a
condition where A is true in a given cycle and B is true in the following cycle.
A linear sequence is the simplest form of a sequence and specifies Boolean conditions
that must evaluate to true in consecutive cycles (e.g., A ##1 B ##1 C). A linear sequence is said
to match when all Boolean expressions, as ordered by the unit delay operator, are true in their
corresponding cycles. Note that a sequence of type (A ##3 B) is also a linear sequence, since
this form indirectly states that in the three cycles from A to B, no Boolean condition is neces-
sary to be true in order for this sequence to match. In other words:
A ##3 B is equivalent to: A ##1 (1) ##1 (1) ##1 B, where (1) matches any condition
All other sequence operators are essentially provided as efficient means of specifying
long linear sequences or composite sequences where a single expression represents a group
of linear sequences. For example, the delay repeat operator (e.g., A ##60 B) helps in defining
a very long linear sequence without explicitly writing the condition for each cycle. An "or"
sequence operator helps in writing one sequence expression that represents multiple
sequences. A composite sequence may represent zero, finitely many, or infinitely many lin-
ear sequences. Table 15.1 shows examples of composite sequences, each representing one or
more linear sequences.
The sequence operator that is used to create a composite sequence defines how the
results of individual sequences are combined in order to compute the match or fail condition
Table 15.1: Composite sequences and the linear sequences they represent

Composite Sequence   Linear Sequences Represented                 Number Represented
A [*0]               (empty sequence; does not match any trace)   0
A ##1 B              A ##1 B                                      1
A ##[1:3] B          A ##1 B                                      3
                     A ##1 (1) ##1 B
                     A ##1 (1) ##1 (1) ##1 B
A ##[1:$] B          A ##1 B                                      Infinite
                     A ##1 (1) ##1 B
                     ...
                     A ##1 (1) ##1 ... ##1 (1) ##1 B
for the composite sequence. The evaluation model of sequences is described in section
15.3.2.
5 initial begin for (int i = 0; i <= 10; i++) #1 clk = !clk; end
6
7 sequence S1(AA);
8   AA [*2:4];
9 endsequence
10
11 sequence S2;
12   bit [7:0] local_var;
13   @(negedge clk) (S1(b), local_var = data_in) ##5 (data_out == local_var);
14 endsequence
15
16 sequence S3(S_CLK, AA, BB, CC, min1, max1, max2);
17   @(posedge S_CLK) AA ##[min1:max1] BB ##[1:max2] CC;
18 endsequence
19
20 sequence S4;
21   S3(clk, a, S1(a), (a==b), 1, 10, $);
22 endsequence
23 endmodule
The following observations apply to the above examples. These topics are covered in
more detail in the following subsections.
• Sequences are specified by using sequence operators to combine Boolean expres-
sions and previously declared sequences (lines 8, 13, 17, 21).
• Sequence sampling events can either be specified explicitly or derived from the con-
text of where the sequence is used. In the above example, no sampling event is speci-
fied for sequence S1. As such, S1 derives its sampling event from its instantiation
context. The sampling event for sequence S1 is the negative edge of clk when S1 is
used in sequence S2 (line 13), the positive edge of clk when S1 is used in sequence S3 (line
17), and the positive edge of clk when S1 is passed to S3 as an actual argument to its
instance in sequence S4 (line 21).
• Sequence evaluation state can be one of "started", "matched", or "failed".
• Formal arguments can be defined for a sequence declaration in order to customize an
instance of a sequence to its instantiation context. Formal arguments of a declared
sequence can replace the following elements of a sequence declaration:
• Identifier: Identifier a as actual for formal argument AA of S3
• Sequence: Sequence S1(a) as actual for formal argument BB of S3
• Expression: (a==b) as actual for formal argument CC of S3
• Event control expression: clk as actual for formal argument S_CLK of S3
• Repeat range: Passing 1, 10, and $ for formal arguments of S3
• Sequence declarations can include procedural code (including function calls) that
will be invoked when a given part of a sequence matches (e.g., invoking (local_var =
data_in) when sub-sequence S1(b) on line 13 matches).
• Local variables which persist throughout evaluation of a sequence can be used in a
sequence specification. These local variables simplify the transfer of information
from one part of a sequence to another (e.g., storing the value of data_in to be later used in
a Boolean expression on line 13).
• Any identifier used in a sequence declaration that is not a local variable or a formal
argument is resolved according to the scoping rules of the block in which the
sequence declaration is placed. For example, variables data_in and data_out are not
local variables of sequence S2. As such, the variables used are the ones in the scope
where sequence S2 is declared (line 3). This means that even if this sequence is used
in a property in a different module (using its hierarchical name), variables data_in and
data_out from this module (line 3) are used.
• Formal arguments are type independent. As such, a sequence instance actual argu-
ment can have any type as long as the resulting sequence definition is legal.
Details of sequences are described in the following subsections.
[Figure 15.4: Abstract Evaluation Model of a Linear Sequence; a matched evaluation thread and a failed evaluation thread are shown against sampling cycles 0 through 9.]
Evaluating a composite sequence is more involved. Figure 15.5 shows the abstract eval-
uation model of a composite sequence. The sampling event cycles are shown on top, along
with their corresponding cycles. The overall result of sequence evaluation is shown on top,
where evaluation is started at cycle 0 (the first cycle this evaluation is started) and in this exam-
ple ends at time 10. The evaluation result of a composite sequence is computed from the
results of evaluation threads for its sub-sequences. The operator used to create the composite
sequence from its subsequences defines how the overall evaluation of the sequence is
derived from the results produced by its evaluation sub-threads. For example, composite
sequence (S = S1 or S2) has two evaluation sub-threads corresponding to sequences S1 and S2.
Sequence S is then said to match any time one of its sub-sequences matches.
In figure 15.5, the match or fail status of the overall sequence (shown on top) does not
depend directly on the match or fail condition of its sub-threads, since the actual results
depend on the operator used to combine the sub-threads. The presentation of sequence oper-
ators in the following section will describe each operator based on how it combines the
results of its evaluation sub-threads.
The following aspects define the abstract evaluation model of a composite sequence:
[Figure 15.5: Abstract evaluation model of a composite sequence; the composite sequence evaluation result is derived from evaluation sub-threads 1 and 2 across sampling cycles.]
• Sequence match condition: How match or fail results of evaluation sub-threads are
combined to produce match/fail results for the sequence
• Sequence termination condition: when the evaluation thread is terminated
Sequence operators (section 15.3.3) will be defined based on this view of the sequence
evaluation model.
Only one match is sufficient for a property to succeed. As such, P1 is satisfied if either
S1 or S2 produces a match, and it is not necessary to search for a second match. Therefore, only
the first match of sequence (S1 or S2) is relevant when considering P1. On the other hand, both
matches for sequences S1 and S2 are important in evaluating P2. The reason is that either one
of (S1 ##1 S3) or (S2 ##1 S3) may match, and ignoring one of the matches of (S1 or S2) may
lead to missed match results.
Figure 15.6 shows a view of the evaluation flow for sequence (S: S1 ##1 S2) started at
time 22, where sequences S1 and S2 are composite sequences. In this example, evaluation of
sequence S1 is started at time 22, producing matches at times 24, 28, and 29. Note that these
matches are produced because of the description of sequence S1, which is not shown in this
example. For every match of sequence S1, a new evaluation thread is started for sequence S2
in the cycle after the match for S1 occurred (using ##0 would start the evaluation for S2 in the
same cycle that the match for S1 occurred). An evaluation sub-thread is terminated if it fails,
but such failure is not reported. The evaluation of sequence S started at time 22 fails if all
evaluation sub-threads for evaluating S2 fail. In this example, if (S1 ##1 S2) is a property defi-
nition, then the property succeeds on the first match produced by an evaluation thread for S2 (at
time 27). If, however, this sequence is used in building a larger sequence (e.g., (S1 ##1 S2) ##1
S3), then all matches produced by sub-threads started for evaluation of S2 (times 27, 31, 32, 33,
35) will lead to the start of an evaluation sub-thread for sequence S3.
[Figure 15.6: Evaluation flow for sequence (S: S1 ##1 S2) started at time 22. Multiple successes per evaluation are possible; an evaluation fails only once, if it never succeeds.]
Note that the specified delay value should be a constant that results in an integer equal
to or larger than zero at compile time. The first three forms of the delay operator, as shown
above, represent the same form of a simple delay repeat operator with different means of
specifying a delay value. The last form shows a range delay repeat operator, which is a
shorthand notation for multiple linear sequences. Range delay repeat operators can be speci-
fied in terms of the simple form of delay, and the "or" sequence operator as follows:
S1 ##[start:end] S2 is equivalent to: (S1 ##start S2) or (S1 ##(start+1) S2) or ...
or (S1 ##(end-1) S2) or (S1 ##(end) S2)
Examples of simple and range repeat operators and their equivalent sequences are
shown below:
S1 ##3 S2 is equivalent to: S1 ##1 (1) ##1 (1) ##1 S2
where (1) is a Boolean expression always evaluating to TRUE
S1 ##[1:2] S2 is equivalent to: (S1 ##1 S2) or (S1 ##1 (1) ##1 S2)
where a match in either sequence will match the main sequence
A simple delay repeat operator results in a linear sequence and as such, the evaluation
model discussed in section 15.3.2 applies to sequences defined using the simple delay repeat
operator. A range repeat operator can be restated as the grouping of multiple linear sequences
using the "or" sequence operator. As such, the evaluation model of sequence repeat operators
follows that of the sequence "or" operator.
The sequence goto repeat operator specifies a sequence that matches a trace in which B
occurs, possibly not consecutively, a total of n times. This sequence matches only at the cycle
where the nth occurrence of B matches. This operator can be applied only to Boolean expres-
sions. The following shows examples of this operator:
B [->3] matches at the end of the following traces:
(B, B, B)
(B, !B, B, B)
(B, !B, B, !B, B)
(B [->1:2]) is equivalent to sequence ((B [->1]) or (B [->2]))
It is important to note that the non-consecutive and goto repeat operators produce differ-
ent results only when used as part of other sequences or properties. Consider the following
properties:
1 property p1;
2 @(posedge clk) a [=3];
3 endproperty
4
5 property p2;
6 @(posedge clk) a [->3];
7 endproperty
8
9 property p3;
10 @(posedge clk) a [=3] ##1 b;
11 endproperty
12
13 property p4;
14 @(posedge clk) a [->3] ##1 b;
15 endproperty
Consider trace (a, c, a, a, c, c, b) occurring during time interval (1, 2, 3, 4, 5, 6, 7). Proper-
ties P1 and P2 both succeed at time 4, even though property P1 uses the non-consecutive repeat
operator and property P2 uses the goto repeat operator. The reason is that the sequence
expressions for both these properties match at time 4, and a property evaluation is terminated
after its first match. Property P3 succeeds at time 7 when b is true, but property P4 fails since
it requires b to occur immediately after the last cycle where a occurred.
Conceptually, sequences created by sequence repeat operators can be represented as a
group of linear sequences combined with the "or" sequence operator. As such, the evaluation
model of sequence repeat operators follows that of the sequence "or" operator.
[Figures: evaluation models of the AND and OR sequence operators; each shows the sequence evaluation result derived from evaluation threads S1 and S2 across sampling cycles.]
S = S1 intersect S2
Match condition: Sequence S matches when both evaluations of S1 and S2 match in the
same cycle. An evaluation of sequence S produces only one match in any cycle, regardless of
how many times its operands match in that cycle. An evaluation of sequence S may, how-
ever, match multiple times across its lifetime. If sequence S1 matches n1 times and sequence
S2 matches n2 times, then an evaluation of sequence S produces at most min(n1, n2) matches,
and possibly no matches if matches from S1 and S2 do not overlap. The exact number of
matches depends on the ordering between matches from sequence S1 and matches from
sequence S2.
Termination condition: Evaluation of sequence S terminates immediately if any of the
evaluation sub-threads completes with either a fail or match condition. The reason is that it is
no longer possible to produce a match if one of the sub-threads terminates.
Figure 15.9 shows the evaluation model of the sequence "intersect" operator. The eval-
uation threads for each sub-sequence are shown inside the main evaluation box for sequence
S. As shown in the figure, sequence S1 matches at time 3. Sequence S2 matches at times 2 and
8. Sequence S fails when the evaluation sub-thread for sequence S2 terminates. In doing so, eval-
uation of sequence S1 is also terminated even though it is not completed yet.
(Figure 15.9: evaluation model of the "intersect" sequence operator, showing the evaluation result, sampling cycles, and evaluation threads for S1 and S2.)
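As a sketch of how "intersect" constrains both operands to match in the same cycle, the following property (signal names clk, req, and ack are illustrative) intersects an acknowledge sequence with a fixed four-cycle window, forcing the acknowledge to land in the final cycle of that window:

```systemverilog
// Sketch: "intersect" requires both operand sequences to start together
// and match in the same cycle. Intersecting with (1)[*4] pins the match
// length to exactly four cycles, so ack must be seen in the last of them.
property p_ack_in_window;
  @(posedge clk) req |-> ((1)[*4] intersect (##[1:$] ack));
endproperty

a_ack: assert property (p_ack_in_window);
```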
This operator is used to specify that a Boolean condition must be true throughout the
evaluation of sequence S1. This operator is a composite operator and can be specified equiv-
alently as:
S: (B)[*0:$] intersect (S1)
The first part of this expression matches for zero or more occurrences of Boolean
expression B. Sequence S matches if sequence S1 matches while Boolean expression B has
been continuously valid since the evaluation started. Sequence S either fails or matches the
same number of times that sequence S1 matches.
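SystemVerilog provides this composite operator directly as the throughout construct. A sketch of its use (signal names clk, frame_start, data_valid, frame_end, and burst_mode are illustrative):

```systemverilog
// Sketch: the property fails if burst_mode drops at any point while the
// framed transfer sequence is being matched, per the "throughout"
// semantics described above.
property p_burst_stable;
  @(posedge clk) frame_start |->
    (burst_mode throughout (frame_start ##1 data_valid[*2] ##1 frame_end));
endproperty

a_burst: assert property (p_burst_stable);
```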
Sequences 389
(Figure: evaluation model of the "throughout" sequence operator; as a result of terminating S1, the associated evaluation threads are also terminated.)
In SystemVerilog, design rules are described using the property construct. Properties are cre-
ated by using property operators to combine sequences and properties. Both sequences and
properties used to create a larger property can be either in-lined (i.e., written explicitly) or
instantiated (instance of a named sequence or property). The result of a property evaluation
is a true or false statement indicating whether or not the given property succeeded starting at
a given point in time.
SystemVerilog property declarations are used for:
• Checking whether or not a sequence produces any match
• Combining the results of smaller properties using property Boolean operators
• Specifying conditions for when a property should be evaluated
A SystemVerilog base property is one that is expressed with a single sequence (note
that this sequence may in fact represent a very complex behavior).2 Property declarations are
then used to combine the results of base properties using Boolean operators and to specify
conditions for when a given property should be evaluated.
2. A base property declaration is one that does not contain any property-specific operators (i.e., not, impli-
cation) and does not specify multiple clocks for clauses of Boolean operators (i.e., and, or).
SystemVerilog Properties 391
Sequence S1 and property P1 have the exact same definition (lines 4, 5). However, even
though property valid_prop (line 7) is a valid property, properties invalid_prop1 (line 11) and
invalid_prop2 (line 15) are not valid. The reason is that the implication operator (line 12)
accepts only sequences as an antecedent clause (section 15.4.3). Also, properties cannot be
combined using sequence operators; therefore sequence operator ##1 cannot be used to com-
bine property P1 with sequence S1 (line 16).
Named properties may include a disable clause, which contains a reset expression. If
the reset expression evaluates to true between the time a property evaluation is started and
when its evaluation is completed, then the property is assumed to be true. The reset expres-
sion is evaluated for every independent evaluation of a property (section 15.4.2). In the
above example, property valid_prop includes a disable clause with reset expression
"reset==1'b0". Disable clauses cannot be nested. In other words, a property, after collapsing
all its named properties, cannot contain more than one disable clause.
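A sketch of a named property with a disable clause, using the reset expression from the text (signal names clk, reset, req, and gnt are illustrative):

```systemverilog
// Sketch: any evaluation in flight when reset==1'b0 becomes true is
// abandoned and assumed true, per the disable clause semantics above.
property valid_prop;
  @(posedge clk) disable iff (reset == 1'b0)
    req |=> gnt;
endproperty

a_valid: assert property (valid_prop);
```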
Named property instantiation refers to using the name of a declared property in writing
other properties. Such usage is equivalent to replacing the instance of that property with the
actual declaration of the named property, while replacing that property's formal arguments
with the actual arguments provided in its instantiation.
Sequence evaluation is used to check whether or not a sequence matches at least one
time. A base property, used to evaluate a sequence, is declared by writing that sequence
in-line or using an instance of a named sequence in the property expression. Consider a prop-
erty (P1 = S1). The evaluation of property P1 consists of evaluating first_match(S1), where prop-
erty evaluation succeeds if sequence S1 matches at least one time, and fails if sequence S1
produces no match. The evaluation of a base property started at a given cycle continues until
its sequence fails or matches once.
Figure 15.11 shows the abstract evaluation model of a base property (evaluating a single
sequence). In this figure, evaluation of sequence S1 is shown to match twice at times 5 and 9.
392 Property Specification and Evaluation Engine
Evaluation of property P1 (defined as sequence S1), however, starts at time 0 and ends at time
5 when the first match of sequence S1 is observed. When evaluating property P1, the evalua-
tion of sequence S1 is terminated after its first match is observed, since the property has
already succeeded. Property P1 would fail if sequence S1 did not match at all. A property
evaluation succeeds or fails only once. This is in contrast with sequence evaluation, where a
sequence may match multiple times. As such, a property evaluation thread is identified by its
start time, its end time, and its final success or fail result.
(Figure 15.11: evaluation model of a base property P1 defined as sequence S1, showing sampling cycles and the sequence and property evaluation timelines.)
P = P1 or P2
(Figure 15.12: evaluation model of the property "or" operator, showing the evaluation result, sampling cycles, and evaluation sub-threads for P1 and P2.)
As described in this section, property evaluation can be described in terms of the fol-
lowing aspects:
• Property evaluation start time
• Property evaluation completion time
• How property results are combined to derive the result for a composite property
Implication and Boolean property operators (section 15.4.3) will be described in terms
of these aspects of property evaluation.
Figure 15.13 shows an example of how multiple evaluation threads of a property started
at different cycles can lead to matches or fails at different times during the simulation run-
time. Rows 5 through 11 show evaluation threads started at times 1 through 7 for each new
sampling event of the property. As can be seen in this figure, at each sampling event, some
evaluations may fail, some may match, and some may be in the "evaluation started" phase.
For example, at time 5, the evaluation started at time 1 succeeds, and the evaluation started at
time 5 fails. At time 11, both evaluations started at times 7 and 11 fail.
(Figure 15.13: waveform example showing clk, signals a and b, the per-cycle property evaluation threads, and the overall evaluation result.)
Success or failure of a property at a sampling event is the result of a combination of all
evaluations that complete in that sampling event. This means that in each cycle, only one
result is reported for all property evaluation threads, even if multiple threads of evaluation
fail or succeed in that cycle. The following rules apply to this evaluation:
• A property succeeds in a cycle, if and only if no property evaluation thread fails in
that cycle and one or more property evaluation threads succeed in that cycle (time 13
in figure 15.13).
• A property fails in a cycle if and only if one or more of its evaluation threads fail in
that cycle (time 5 in figure 15.13).
• In a given cycle, a property is in the "evaluation started" state if none of its evaluation
threads produce a result in that cycle (time 7 in figure 15.13).
The example in figure 15.13 is based on a linear sequence, producing a single success or
fail result for each evaluation thread. It is important to point out that any property, regardless
of its complexity, produces only a single success or fail result, and as such, the model above
can be used to compute the overall result for that property evaluation.
The following program segment, based on the values shown in figure 15.13, produces a
match at time 13, and fails at times 3, 5, and 11.

sequence s_abb;
  a ##1 b ##1 !b; // a linear sequence
endsequence

property p_abb;
An empty match refers to a match for an empty trace.3 An empty match is usually used
to specify cases where a sub-sequence can appear zero or more times in a larger sequence. As
such, the ability to define an empty match is introduced in order to allow for more concise
sequence descriptions. Consider the following example:
S1: (a) or (a ##1 b)
S2: (a ##1 b[*0]) or (a ##1 b)
S3: a ##1 b[*0:1]
Sequence Type            Admits Empty Match   Admits Non-Empty Match   Example
Strictly Degenerate      no                   no                       A intersect B[*2]
Degenerate               yes                  no                       A[*0]
Non-Degenerate           yes                  yes                      A[*0:2]
Strictly Non-Degenerate  no                   yes                      A[*1:2]
3. A simulation trace is a set of signal values in consecutive cycles (e.g., {a,b,c,a,b}). An empty trace
refers to the absence of any cycles. For example, if T1, T2, and T3 are three simulation traces and trace T2
is an empty trace, then trace {T1,T2,T3} (concatenation of these three traces) is the same as trace {T1,T3}.
SystemVerilog enforces the following restrictions for using sequences in writing prop-
erties:
• A sequence used to define a base property should be strictly non-degenerate.
• A sequence used as the antecedent of an overlapping implication (|->) should be
non-degenerate or strictly non-degenerate. In other words, it has to admit a
non-empty match.
• A sequence used as the antecedent of a non-overlapping implication (|=>) should not
be strictly degenerate. In other words, it has to admit at least one match even if it is an
empty match.
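These restrictions can be illustrated with a sketch (signal names a and b are illustrative; legality comments follow the rules above):

```systemverilog
// a[*0] admits only an empty match (degenerate), so it is not a legal
// antecedent for an overlapping implication:
//   property p_bad; @(posedge clk) a[*0] |-> b; endproperty  // illegal

// a[*0:2] admits both empty and non-empty matches (non-degenerate),
// so it is a legal antecedent for either implication form:
property p_ok;
  @(posedge clk) a[*0:2] |=> b;
endproperty
```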
These operators are described in the following sections. The description of these opera-
tors is based on the property abstract evaluation model presented in section 15.4.2. Given a
composite property P created by using a property operator to combine operand properties
(e.g., in (P = P1 and P2), the property operator "and" is used to combine operand properties P1
and P2 to create property P), and assuming that a property evaluation thread is started at the
current cycle for property P, each operator is described in terms of:
• Start time of property evaluation sub-threads for operand properties P1, P2
• End time of the evaluation thread for property P
• Evaluation result for property P as a function of the evaluation results for its operands
when a reset condition is detected. Similarly, a property evaluation for a cache coherency
protocol may have already been started when the cache is flushed. In both cases, the result of
the already started evaluation is no longer relevant. SystemVerilog provides the disable
clause (section 15.4.1) to handle such situations. The evaluation of a property is terminated
when the qualifying condition for the disable clause is satisfied any time after the evaluation
of that property is started.
A more commonly occurring situation, however, is when the success or failure of a
property is meaningful only when a qualifying condition has already occurred. Such circum-
stances can be identified anywhere from properties defined for individual wires of a design
all the way to properties defined for system level behaviors. For example, at the bit level, the
address lines of a bus must hold valid binary values only when the bus is in a read or write
mode. At the system level, the burst size of a memory bus should be of a given size only
when the device was initially configured to operate with that burst size. SystemVerilog pro-
vides the implication operators for specifying such conditional design behaviors.
Implication operators introduce the notion of vacuous success of a property. A property
that succeeds vacuously is a property whose qualifying condition is not met and, therefore,
the evaluation result of the property attached to the qualifying condition is found to be irrele-
vant to the correct operation of the design. A vacuous success of a top level property (e.g., a
property at the top level of an assert or cover statement) is treated as though that property
was never evaluated. For example, if a property used in an assert statement succeeds vacu-
ously, then neither the pass nor fail statements of that assert statement are executed. Vacuous
success of a property that is used as an operand to form a larger property is treated as a real
success. For example, a vacuous success of property P1 in property P defined as (P: not(P1))
results in a fail of property P.
SystemVerilog provides the following implication operators:
• Overlapping implication: |->
• Non-overlapping implication: |=>
• if-else operator
The property overlapping implication operator has the following form:
P: S1 |-> P1
The left-hand side operand of the overlapping implication operator is called the ante-
cedent. Only sequences can be used as the antecedent expression. The right-hand side of the
implication operator is called the consequent. The antecedent is the qualifying condition, while
the consequent is the property to be evaluated. The operation of the overlapping implication
operator is defined as follows:
• Sub-thread start time: The evaluation sub-thread for sequence S1 is started when the
evaluation thread for P is started. An evaluation thread for property P1 is started for every
match of sequence S1 and in the same cycle that the match was produced.
• End time: Evaluation of property P is completed when either sequence S1 fails, or
when evaluation of sequence S1 completes and every evaluation sub-thread that was
started for property P1 completes.
• Result: Evaluation of property P succeeds vacuously if sequence S1 fails. Otherwise,
evaluation of property P fails if any of the evaluation threads started for property P1
fail. Otherwise, evaluation of property P succeeds.
As with the overlapping implication operator, only sequences can be used as the ante-
cedent expression. The operation of the non-overlapping implication operator is defined as fol-
lows:
• Sub-thread start time: The evaluation sub-thread for sequence S1 is started when the
evaluation thread for P is started. An evaluation thread for property P1 is started for every
match of sequence S1 and in the cycle after the match was produced.
• End time: Same as that for overlapping implication
• Result: Same as that for overlapping implication
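A sketch contrasting the two implication forms (signal names clk, req, and gnt are illustrative):

```systemverilog
// Overlapping: gnt must be high in the same cycle req is seen.
property p_overlap;
  @(posedge clk) req |-> gnt;
endproperty

// Non-overlapping: gnt must be high one cycle later;
// S1 |=> S2 behaves like (S1 ##1 1) |-> S2.
property p_nonoverlap;
  @(posedge clk) req |=> gnt;
endproperty
```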
The if-else operator has the following form:
P: if (B) P1 [else P2] // the else clause is optional
Only a Boolean expression is allowed as the if-clause. The operation of the if-else oper-
ator is defined as follows:
• Sub-thread start time: Evaluation sub-thread for property P1 is started when evalua-
tion thread for P is started and only if Boolean expression B evaluates to true. Evalua-
tion sub-thread for property P2 is started when evaluation thread for P is started and
only if Boolean expression B evaluates to false.
• End time: Evaluation of property P is completed when evaluation of either property
P1 or property P2 , whichever was started, is completed.
• Result: Evaluation of property P succeeds if B is true and property P1 succeeds or B is
false and property P2 succeeds. Otherwise, evaluation of property P fails.
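A sketch of the if-else operator selecting the checked property based on a mode bit (signal names clk, start, fast_mode, and done are illustrative):

```systemverilog
// Sketch: when fast_mode is true, done must follow start by one cycle;
// otherwise done must arrive within two to five cycles.
property p_mode;
  @(posedge clk) start |->
    if (fast_mode)
      ##1 done
    else
      ##[2:5] done;
endproperty

a_mode: assert property (p_mode);
```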
Implication operators are different from other property operators in that they include an
implicit requirement that the consequent property should succeed for every match of the
antecedent sequence. This is in contrast with base property evaluations, where only the first
match of a sequence leads to the property succeeding. This behavior of the implication oper-
ator can be leveraged to gain information about all matches produced by a sequence. Con-
sider the following program segment:
In this example, property all_matches is defined to print the time for all matches of
sequence seq for evaluations started at time tt (lines 8-10). The declaration of this property
takes advantage of the overlapping implication operator. The first antecedent is used to pro-
duce a vacuous success for any time other than time tt. This means that property evaluation
continues only for a start time of tt. If the current simulation time is the same as tt, then the
evaluation of sequence seq is started in the same cycle (because of the use of an overlapping
implication operator). Sequence seq is the antecedent of the next nested implication operator.
As such, for every match of sequence seq, the consequent property is evaluated. In this case,
the consequent property always succeeds, since it is simply a constant value 1, and upon suc-
cess, displays the current simulation time. Note that since an overlapping implication opera-
tor is used, the success time of the property is the same as the cycle where the corresponding
match for sequence seq was produced. Property all_matches is used on line 12 to print the
time at which a match occurred for all matches of sequence ss started at time 4.
SystemVerilog provides a rich set of sequence and property operators. These operators are
meant to provide an intuitive set of constructs for specifying complex behaviors. This means
that some operators can be expressed in terms of other operators. This is similar to Boolean
expressions where an XOR operation can be defined in terms of AND, OR, and NOT opera-
tions, but is provided as an operator since an intuitive behavior can be attached to an XOR
operation.
Base sequence and property operators are shown in table 15.5. All other sequence and
property operators can be defined in terms of the operators shown in this table.
Tables 15.6 and 15.7 show derived sequence and property operators and their imple-
mentation in terms of base operators.
When learning SystemVerilog sequences and properties, the focus should be on learn-
ing how to use each property in an intuitive way. However, as behavior complexity grows,
understanding the operation of base operators allows for better insight into how each derived
operator is expected to behave.
Multi-Clock Sequences and Properties 401
Sc1 and Sc2 are singly clocked sequences, Sc12 is a multi-clocked sequence, and Pc12 is a
multi-clocked property (using different edges of signal clk).
The last clocking event in a cascaded set of clocking events overrides all previous
clocking events. Sequence Sc1 in the following example is in fact a singly clocked sequence
whose behavior is equivalent to sequence Sc2:
Sc1: @(posedge clk3) @(posedge clk2) @(posedge clk1) S1
Sc2: @(posedge clk1) S1
SystemVerilog imposes restrictions on how clocked and multi-clocked sequences and
properties can be created and combined using operators. All such restrictions are specified in
order to enforce a set of semantic requirements on multi-clock sequences and properties.
The following section describes the semantic requirements of multi-clock sequences
and properties, and the resulting restrictions on how sequence and property operators can be
used.
The second requirement affects how sequences and properties can be concatenated.
Consider the following examples where S1 and S2 are un-clocked sequences:
Sc1: @(clk1) S1
Sc2: @(clk2) S2
SSc1: Sc1 ##1 Sc2 // legal
SSc2: Sc1 ##0 Sc2 // illegal
PPc1: Sc1 |-> Sc2 // illegal
PPc2: Sc1 |=> Sc2 // legal
PPc3: @(clk1) S1 |-> (@(clk1) S3 ##1 @(clk2) S2) // legal
In the above examples, SSc2 is not a valid multi-clocked sequence. The reason is that in
using the fusion operator (##0), the first evaluation cycle of Sc2 overlaps the last evaluation
cycle of Sc1, requiring that the clocking event for the last cycle of Sc1 be the same as the
clocking event for the first cycle of Sc2. This requirement is violated since Sc1 and Sc2 use
different clocking events. Similarly, property PPc1 is an invalid multi-clocked property. The
reason is that in using the overlapping implication operator (|->), the first evaluation of
sequence Sc2 starts in the last evaluation cycle of sequence Sc1, and this violates the concate-
nation requirement since Sc1 and Sc2 use different clocking events. Property PPc3 is, however,
a legal multi-clocked property since the consequent clause is a valid multi-clocked sequence,
and also the sampling event of the first cycle of the consequent (i.e., clk1) is the same as the
clocking event of the last cycle of the antecedent (clk1).
The third requirement affects how repeat operators can be used in concatenating
sequences. For example, assuming sequences Sc1 and Sc2 are singly-clocked sequences with
different clocking events, sequence Sc defined as (Sc1 ##1 Sc2) is an invalid sequence if
sequence Sc2 permits empty matches (e.g., Sc2 = A[*0:1]). In this case, upon an empty match of
sequence Sc2, it is not clear whether the last cycle of Sc ends on the clocking event for
sequence Sc1 or the clocking event for sequence Sc2.
The semantic requirements discussed above impose restrictions on the clocking proper-
ties of the operands to sequence and property operators. These restrictions are summarized in
table 15.8.
If only the end time of a sub-behavior is relevant in describing a more complex behav-
ior, it is good practice to separate the evaluation of this sub-behavior from the evaluation of
the more complex behavior. This can be accomplished by using either SystemVerilog's pre-
defined sequence methods (i.e., ended(), matched(), triggered()), or by using the action
block of a property that evaluates the sub-behavior to set an appropriate flag. In either case,
the effect of this separation is that the success or failure of a sub-behavior can be treated as a
condition that takes zero time to evaluate, thereby reducing the complexity that must be dealt
with in writing sequences.
Sequence and Property Dictionary 405
Condition B occurs within M cycles from now and then holds for at least N cycles
This behavior states that the first occurrence of condition B is within the next M cycles and it
holds for N consecutive cycles after its first occurrence.
Sequence:
(1) ##[0:M] (B)[*N]
Condition B occurs [M1:M2] cycles from now and then holds for [N1:N2] cycles
This behavior states that even if condition B occurs in the first M1 cycles, it also occurs within
M1 to M2 cycles and, after this occurrence, it holds for N1 to N2 consecutive cycles.
Sequence:
(1) ##[M1:M2] (B)[*N1:N2]
Condition B occurs first within [M1:M2] cycles and then holds for [N1:N2] cycles
This behavior states that condition B does not occur in the first M1 cycles, it does occur
within M1 to M2 cycles, and after this occurrence, it holds for N1 to N2 consecutive cycles.
Sequence:
(!B)[*M1:M2] ##1 (B)[*N1:N2]
Condition B1 occurs within the next N1 cycles, and within N2 cycles before B2 occurs
This behavior states that conditions B1 and B2 both occur but condition B2 does not occur
before condition B1. The implementation shown here provides a closed range where condi-
tion B1 must occur within N1 cycles and condition B2 must occur within N2 cycles after con-
dition B1 occurs.
Sequence:
(!B2)[*0:N1] ##0 (B1) ##[1:N2] (B2)
Condition B1 occurs before and holds until N cycles after condition B2 occurs
This behavior states that both conditions B1 and B2 occur, that B1 occurs before condition
B2, and that once B1 occurs, it holds until N cycles after condition B2 occurs.
Sequence:
(B1)[*1:$] ##0 (B2) ##1 (B1)[*N]
Conditions B1 and B2 occur exactly N times each within the next M cycles
This behavior states that conditions B1 and B2 each occur N times within the next M cycles. It
does not, however, require that the occurrences of each condition be in any specific order or
that the occurrences of each condition be consecutive.
Sequence:
((1)[*M]) intersect (((B1)[->N] ##1 (!B1)[*0:$]) and ((B2)[->N] ##1 (!B2)[*0:$]))
This seemingly complex property is easily implemented by using the appropriate imple-
mentations for the qualifying and qualified behaviors, as shown in section 15.7.1, for ante-
cedent and consequent clauses of the property implication operator.
Examples of common design properties and their corresponding implementation using
the SystemVerilog property construct are shown in the following subsections. Given these
implementations, many design properties can be expressed by the combination of sequences
shown in section 15.7.1 and property implementation shown in this section.
If condition B has occurred, then evaluation of behavior S started in the next cycle after
B occurred must succeed
property prop1;
  @(posedge clk) disable iff (reset) B |=> S;
endproperty
If condition B has occurred, then evaluation of behavior S started in the same cycle that
B occurred must succeed
property prop1;
  @(posedge clk) disable iff (reset) B |-> S;
endproperty
If behavior S succeeds, then condition B must occur in the same cycle that S succeeded
property prop1;
  @(posedge clk) disable iff (reset) S |-> B;
endproperty
If behavior S1 succeeds, then evaluation of behavior S2 started in the next cycle after S1
succeeded must also succeed
property prop1;
  @(posedge clk) disable iff (reset) S1 |=> S2;
endproperty
If behavior S1 succeeds, then evaluation of behavior S2 started in the same cycle that S1
succeeded must also succeed
property prop1;
  @(posedge clk) disable iff (reset) S1 |-> S2;
endproperty
This property can be used as follows in an assertion statement for a previously defined
named sequence ss:
debug_assert: assert property (
  @(posedge clk) all_matches_starting_between(ss, 2, 40)
);
The clocking event specified for the assert statement should be the same as the clocking
event for the first cycle of sequence ss.
Print evaluation start time of all matches for sequence S that are matched in time range
[T1:T2]
This property is used to print the evaluation start time of all matches produced between time
T1 and T2 for a named sequence S.
property all_matches_produced_between(seq, t1, t2);
  time start_time;
  ($time <= t2, start_time = $time) |-> seq
    |-> ($time >= t1 && $time <= t2) |-> (1, $display(start_time, $time));
endproperty
The clocking event specified for the assert statement should be the same as the clocking
event for the first cycle of sequence ss.
CHAPTER 16 Assertion-Based
Verification (ABV)
An important observation about functional verification is that design properties that must be
verified remain the same as the design implementation progresses through different levels of
abstraction, regardless of what verification tools and methodologies are used to carry out the
verification tasks. Regardless of design flow (top-down or bottom-up), properties that must
be verified accumulate. For example, in a top-down design style starting from the architec-
tural specification, a property that must hold at the architectural level must still be main-
tained when the detailed block level design is created. And in a bottom-up design style where
individual blocks are created first, a micro-code design implementation property that had to
be maintained during block level design must still hold when blocks are combined to create
the complete system.
These observations lead to the straightforward conclusion that the use of a robust and
practical mechanism for specifying, modeling, and collecting such properties throughout the
design process leads to immediate gains in verification completeness (i.e., all scenarios are
verified), correctness (i.e., each scenario is verified correctly), and productivity. Complete-
ness is improved because of the accumulating nature of these properties throughout the
design and verification process. Correctness is improved because these properties will be
verified using multiple verification technologies during the full verification cycle (e.g., for-
mal verification, simulation, acceleration, and emulation) and at different levels of design
abstraction. Productivity improves both directly, as a result of using well-defined procedures
for specifying and verifying properties, and indirectly as a result of improvements in verifi-
cation completeness and correctness. Assertion-based verification (ABV) methodology
describes the tools, techniques, and best-in-class practices that allow the benefits of this
approach to be realized.
The enabling technology for assertion-based verification is a powerful mechanism for
specifying design properties. SystemVerilog provides the sequence and property constructs
to describe design properties in a concise and expressive manner. These constructs were
described in chapter 15. This chapter describes the approach used for identifying and orga-
nizing assertions, and deploying assertion-based verification using SystemVerilog.
Assertion Definition Flow 413
A verification engineer's focus is more on features extracted from the design specifica-
tion and less on how these features are implemented. Not all design properties are suitable
for assertion-based verification. Data oriented scenarios are best verified using scoreboard-
ing techniques whereas control oriented properties are better candidates for assertion-based
verification. A verification engineer should decide on the design properties that are best ver-
ified using assertions. Block level verification engineers can identify assumptions and prop-
erties relevant to the local blocks they work on, and system level verification engineers can
focus on end-to-end scenarios and system-wide properties. In addition to local and
end-to-end verification scenarios that can be verified through assertions, verification engi-
neers must also define assertions about the following types of information (section 16.1.2.3
and 16.1.2.4):
• Assumptions made about the verification environment
• Properties defining correct DUV configuration
• Properties defining valid combinations of configuration and control registers, status
registers, and DUV pin values
• Properties for standard DUV interfaces
sidered for evaluation when the block under consideration is being verified as part of a larger
module or chip using a simulation environment.
The natural order of execution for block related verification is as follows:
• Bring-up checks
• Reset stuck
• Clock not active
• Interface verification
• Operational mode verification
• Control registers
• Mode pins
• Internal functionality verification
• Operational modes are considered assumptions during formal verification.
• Assertions are specified for each operational mode.
• End-to-end functionality verification
Bring-up checks cover design conditions that must be satisfied before the block can
operate correctly. Examples include reset lines and clock connections.
Interface verification is considered next. The block can operate only if it can communi-
cate with its enclosing environment. Depending on the interface complexity, it may not be
possible to specify all interface properties as assertions. And not all assertions specified at
block level may be verifiable by the formal verification tool. All such interface properties
must be marked for consideration in the simulation environment.
A block can usually operate in different modes through control register settings or mode
pins. The formal verification tool's ability to verify assertions improves with an increasing
number of assumptions and decreasing assertion complexity. Using block operation modes
as assumptions to the formal verification tool allows more assertions to be covered by the
formal verification tool. In this approach, assertions for block properties are specified sepa-
rately for each operating mode.
Internal functionality is considered next. Internal functionality is specified for each set
of operating and interface modes. End-to-end behavior of a block may not be a good candi-
date for assertion-based verification because such properties are usually data-path oriented
behaviors, which are best verified in the simulation environment. However, end-to-end
properties that can be specified using assertions should be considered in this stage of
verification.
(Figure 16.3: two interacting blocks B1 and B2, with assumptions and assertions on their connecting ports.)
Assumptions and assertions take alternating roles when verifying each one of a set of
interacting blocks. Figure 16.3 shows two interacting blocks B1 and B2. When verifying
block B1, properties defined for the output port of B1 are verified while considering properties
defined on its input port as assumptions. The situation is reversed when verifying block B2,
where properties on block B2's output are now verified with properties on its input consid-
ered as assumptions.
Differentiating between assumptions and assertions is not necessary when using a simu-
lation tool. The reason is that a simulation tool verifies an assertion by computing the values
of all signals used in that assertion, and these computed values already reflect any properties
or assumptions that must be satisfied anywhere else in the design.
Immediate assertions are also useful for checking issues that may arise during program
execution. The following example shows how an immediate assertion is used to check that
the randomize() function produces valid results:
Program 16.2: Checking program behavior using immediate assertions
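The program body is not reproduced above; a minimal sketch of the pattern, with an illustrative packet class, might look like this:

```systemverilog
class packet;
  rand bit [7:0] len;
  constraint c_len { len inside {[1:64]}; }  // hypothetical constraint
endclass

program check_randomize;
  packet pkt = new;
  initial begin
    // Immediate assertion: fires if randomize() cannot satisfy the constraints.
    rand_ok: assert (pkt.randomize()) else
      $error("randomize() failed to produce a valid packet");
  end
endprogram
```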
Assertions defined with the assert directive are checked, and any failed evaluations are
reported. The syntax is:
[label:] assert property (property_expression) [action_block];
The assume directive is used to specify an assumption about the operating environment
of the design. Formal verification tools do not check properties defined using the assume
directive. Rather, such properties are used as assumptions when verifying the properties that
must be solved by the formal verification engine. Simulation tools, however, treat assertions
defined using the assume directive the same as those defined with the assert directive. Note
that no action_block is allowed for this directive. Syntax for this directive is:
[label:] assume property (property_expression);
The cover directive is used to include the results of assertion evaluation in coverage
results. A pass statement can be specified for this directive, which will be evaluated when the
property succeeds.
[label:] cover property (property_expression) [pass_statement];
The following code fragment shows examples of these concurrent assertion types:
1 property assert_prop;
2   @(posedge clk) (a ##1 b);
3 endproperty
4 assert property (assert_prop) else -> error_detected;
5
6 property assume_prop;
7   @(posedge d) (a ##1 b);
8 endproperty
9 p3_label: assume property (assume_prop);
10
11 property cover_prop;
12   @(posedge clk) a |=> b ##2 c;
13 endproperty
14 cover property (cover_prop) $display("Covering C1");
1 property prop1;
2   (q1 != d1);
3 endproperty
4 always @(posedge clk) begin
5   q1 <= d1;
6   prop1_proc_asrt: assert property (prop1);
7 end
8
9 property prop2;
10   @(posedge clk) (q2 != d2);
11 endproperty
12 always @(posedge clk) begin
13   if (a) begin
14     q2 <= d2;
15     prop2_proc_asrt: assert property (prop2);
16   end
17 end
In this example, assertions defined on lines 6 and 15 are procedural assertions. The
assertion on line 6 infers its clocking event from the always block, and the assertion on line
15 has its own clocking event defined, which is the same as the inferred clock for the always
block but is placed inside a conditional block. The following code fragment shows how these
same assertions can be specified using concurrent assertions:
1 property prop1c;
2   @(posedge clk) q1 != d1;
3 endproperty
4 prop1_concur_asrt: assert property (prop1c);
5
6 always @(posedge clk) begin
7   q1 <= d1;
8 end
9
10
11 property prop2c;
12   @(posedge clk) a |-> (q2 != d2);
13 endproperty
14 prop2_concur_asrt: assert property (prop2c);
15
16 always @(posedge clk) begin
17   if (a) begin
18     q2 <= d2;
19   end
20 end
The definition for property prop1 is modified to include an explicit clocking event
before it can be used in a concurrent assertion. The definition for prop2 is modified so that
the condition for reaching the procedural assertion is used as the antecedent of an implication
property whose consequent is the original property definition.
Each assertion control task can be called either with no arguments, with the level argument, or with level and list_of_modules_or_assertions. If no arguments are specified, then the
task affects all assertions in the hierarchy. level specifies the number of hierarchy levels
below each specified module instance that are affected. The number of levels is counted from
the top of each specified module instance and not from where the task is located. level cannot be set to zero
when any assertions are specified as arguments. If level is set to 0, then all assertions in the
specified module instances will be affected.
list_of_modules_or_assertions provides the scope to which the command applies. The
arguments in this list can be either modules or assertions within modules. Arguments in this
list can be instance names, hierarchical instance names, or hierarchical references to properties. These arguments cannot be a reference to a sequence or an out-of-module reference to individual assertions.
Task $assertoff() suspends checking of all specified assertions until $asserton() is
encountered. Assertions that are already executing, including assertion action blocks, will
continue executing. Task $asserton() resumes checking all specified assertions that were disabled by a previous call to $assertoff(). Task $assertkill() halts checking of all specified
assertions that are currently executing, then suspends checking of all specified assertions
until $asserton() is encountered.
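A short sketch of these control tasks in use; the hierarchy names are illustrative only:

```systemverilog
module assert_ctrl;
  initial begin
    #100 $assertoff(0, top.duv_i);  // suspend all assertions in instance top.duv_i
    #50  $asserton(0, top.duv_i);   // resume the suspended assertions
    #20  $assertkill(0, top.duv_i); // abort any currently executing assertions
  end
endmodule
```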
Not all of these behaviors may be necessary for all verification projects. As such, it is
necessary to define the scope of a verification project before considering what assertions
must be specified.
16.4.2.1 Interface
DUV interface refers not only to its boundary pins but also to any interaction between the
DUV and its outside environment. Correct operation of this interface is the initial, and a
required, step in verifying a DUV, since without an operational interface, the DUV cannot be
verified. Interface requirements are also used as assumptions in formal verification tools.
The DUV interface consists of the following:
• Configuration
• DUV input pins
• Protocol
• Register interface
16.4.2.1.1 Configuration
Design and verification IPs are commonly used to improve productivity through reuse. The
widespread use of such IPs and IP developers' desire to make their product as commonly
useful as possible means that each IP is usually designed to be adaptable to multiple use
models. Different approaches for design configuration during different stages of the design
process include:
• Source code generation: creating different files for different configurations
• Source code compilation: using conditional compilation (e.g., using `ifdef in Verilog)
• Design elaboration: using parameters and generate statements
• Design operation: assigning values to configuration inputs and registers
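The last three approaches might appear together in a single IP as sketched below; the FIFO and its parameter names are hypothetical:

```systemverilog
module fifo #(parameter int DEPTH = 16,          // design elaboration: parameter
              parameter bit HAS_PARITY = 0)
             (input logic clk,
              input logic cfg_bypass);           // design operation: config input
`ifdef FAST_VARIANT                              // source code compilation: `ifdef
  localparam int STAGES = 1;
`else
  localparam int STAGES = 2;
`endif
  generate                                       // design elaboration: generate
    if (HAS_PARITY) begin : g_parity
      logic parity_bit;
    end
  endgenerate
endmodule
```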
The following assertions must be specified for checking design configuration:
• Combined setting of all configuration parameters represents a valid configuration
mode
• If allowed, dynamic transitions from one configuration to another take place cor-
rectly
• DUV is configured as planned in its operating environment
The first two types of assertions are added by block designers. The third type is added
when that block is integrated into a larger block.
16.4.2.1.3 Protocol
Any design of reasonable complexity uses a predefined protocol to communicate with its
environment. A protocol may be a bus-type interface, where the design can act as a bus
master, a bus slave, or both, or a point-to-point connection, where communication takes place
using a predefined syntax over multiple clock cycles (e.g., UART, USB).
The signal activity at a block interface depends on the detailed description of its protocol and on the activity originating from inside the block and from the outside environment.
When defining assertions for a block interface protocol, decisions made by the outside environment are not known. This means that interface behaviors that depend on outside activity cannot be verified by using assertions when the operating environment of that block is not
available. For example, in a UART interface, it is straightforward to verify that all stop-bits
appearing on the interface are valid, since the size of UART data word is available as a set-
ting in the internal configuration of the block that uses the UART interface. Some UART
behaviors, however, cannot be verified by simply looking at the interface signals. For exam-
ple, if the environment driving the UART input to a block asserts the break condition (forc-
ing the data value to a constant 0), it is not possible to decide by just looking at the interface
signals and block internal setting whether or not the constant zero value on the input line is
due to an environment error or due to a UART break condition.
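The stop-bit check mentioned above is an example of a protocol internal property; a sketch, with hypothetical signal and parameter names, might be:

```systemverilog
module uart_stop_check #(parameter int DATA_BITS = 8)  // from internal configuration
                        (input logic bit_clk, input logic rx);
  // A start bit (falling edge on rx) followed by DATA_BITS data bits
  // must be followed by a high stop bit.
  property p_stop_bit;
    @(posedge bit_clk) $fell(rx) |-> ##(DATA_BITS + 1) rx;
  endproperty
  a_stop_bit: assert property (p_stop_bit)
    else $error("missing stop bit");
endmodule
```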
Figure 16.4 shows a pictorial view of this distinction between protocol internal and
external properties. Assertions defined for interface protocols of a block should include at
least all interface properties that can be checked by looking at the interface signals and internal state of the design (i.e., protocol internal properties). These assertions must be added by
the block designer. Designers can potentially identify external conditions that are needed to
decide all protocol properties and ask for such conditions to be elaborated when the design is
integrated into the larger design. Alternatively, protocol checkers can assume a worst case
scenario policy where any unexpected observation is assumed to be an error unless overrid-
den when the design is integrated in the target design.
This assertion library component checks that a given data value is within the range pro-
vided by the parameters to this component. Lines 4-6 declare the parameters. Lines 14-21
include the auxiliary code needed to compute variable values_checked that is used to collect
coverage values. Lines 24-26 check that the data value is within the expected range. Lines
28-33 collect coverage on when data value is equal to the maximum or the minimum
allowed value. Lines 35-43 define and instantiate a covergroup to collect detailed coverage
information on the value of the data input.
Assertion library components often include `ifdef statements so that their behavior (e.g.,
whether or not to collect coverage) can be controlled by the user. These conditional parameters are, however, not shown in this example.
Line 5 generates four clk pulses. Line 6 increments the value of variable data by 3 on
each rising edge of clk. Line 7 instantiates the check_range assertion library component,
requiring that the value of data remain between 0 and 10, with data having 16 bits. In this
example, the assertion will fail once the value of data is set to 12.
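The testbench described here might look as follows; the check_range port and parameter names are assumptions, and line numbers follow the book's convention:

```systemverilog
1 module top;
2   bit clk = 0;
3   bit [15:0] data = 0;
4
5   initial repeat (8) #5 clk = ~clk;          // four clk pulses
6   always @(posedge clk) data <= data + 3;    // 3, 6, 9, 12, ...
7   check_range #(.WIDTH(16), .MIN(0), .MAX(10)) chk (.clk(clk), .data(data));
8 endmodule
```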
Developing an assertion-based protocol checker is not a trivial task, but if one is available, it provides significant benefits in carrying out the
verification task. As such, assertion-based protocol checkers are best suited for popular and
commonly used protocols such as PCI-Express, AHB, AXI, etc.
Assertion-based protocol checkers provide the following benefits:
• Complete verification of design adherence to protocol standards
• Complete, pre-verified set of assertions
• Cover checks for capturing interesting protocol scenarios
• Parameterizable data and address bus width
• Ease of use across formal, simulation, and acceleration environments
PART?
Coverage Modeling
and Measurement

CHAPTER 17
Coverage Collection Engine
• User interface
• Information, beyond that specified in the SystemVerilog program, that can be
extracted from a coverage database (e.g., additional crosses, transitions, etc.)
The following steps are followed during coverage collection and analysis:
• Create a coverage plan derived from the verification plan
• Identify information that must be collected in order to enable the coverage analysis
tool to extract the information required by the coverage plan
• Add coverage collection code to the verification environment so that the information
identified in the previous steps is collected during the simulation runtime
• After the completion of simulation, use a coverage analysis and reporting tool to
report coverage results needed by the coverage plan
This chapter presents language constructs provided by SystemVerilog for implementing
coverage collection. Steps for building a coverage plan and identifying the information that
must be collected are discussed in chapter 18. Details of using coverage analysis and reporting engines are beyond the scope of this book and are described extensively in vendor-specific
operation manuals for these tools.
• For some integral data types (e.g., 32-bit integer), it is not feasible to keep track of the
number of times each value was sampled.
• In the majority of cases, treating each possible value of a variable separately doesn't
provide any immediately useful information.
Consider a 32-bit address line in an address-mapped system bus where each peripheral
responds to a specific address range. Ideally, full coverage of the address decoding logic
requires that all possible address values are observed during the simulation runtime. Given
this view, full coverage is defined as each address value having occurred at least once. In
reality, however, achieving this ideal target is not possible because it is not practical to keep
count of all observed address values, and more importantly, it is infeasible to generate all
possible values of the address line within a reasonable time. These limitations make it
impractical to define full coverage as each address value having been observed at least once.
SystemVerilog uses the concepts of coverage bins and coverage hits to facilitate the
practical definition of what full coverage means for coverage group elements. In this
approach, possible values for coverage elements (i.e., point, transition, cross) are grouped
into bins and a required hit target is attached to each bin. Full coverage for a bin is defined as
the number of sampled values falling within the range for that bin exceeding that bin's target
hit value. Full coverage for a coverage point or cross is then defined as all of its bins being
fully covered. This approach is further extended through weights and goals to derive a quan-
titative measure of coverage (see section 18.3).
The concepts of bins and hits can be used to give a practical definition of full coverage
for the system-bus example outlined above. In this approach, a coverage point is defined
as the value of the address line, and coverage bins are defined to correspond to the address range
for each peripheral. The hit requirement for each bin can be set either to 1 (if observing at
least one address for each peripheral is judged to be a good test of correct operation of its
decoding logic) or a number proportional to the size of the address space for that peripheral if
more confidence is required. The actual decision of what weight value should be attached to
each bin is subjective and depends on the judgement of the verification team.
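The peripheral-address example might be captured as follows; the address map and hit targets are illustrative only:

```systemverilog
module bus_cov;
  bit clk;
  bit [31:0] addr;

  covergroup cg_addr @(posedge clk);
    option.at_least = 1;  // hit requirement applied to each bin
    cp_addr: coverpoint addr {
      bins uart_regs  = {[32'h0000_0000 : 32'h0000_0FFF]};  // hypothetical map
      bins timer_regs = {[32'h0000_1000 : 32'h0000_1FFF]};
      bins main_mem   = {[32'h8000_0000 : 32'hFFFF_FFFF]};
    }
  endgroup

  cg_addr cg_addr0 = new;
endmodule
```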
Sampling events for coverage group elements can be defined using a number of differ-
ent approaches. But not all values sampled for coverage elements in a coverage group are
necessarily valid at every sampling event. For example, an address value falling in the range
for a peripheral may not contribute to the overall coverage if that peripheral is disabled when
that sample is observed. As such, SystemVerilog provides mechanisms for specifying when a
sampled value for coverage elements and coverage bins should be included in the overall
coverage calculations.
SystemVerilog provides additional features for controlling the definition, instantiation,
initialization, and activation of coverage groups. Details of these facilities are described in
the following sections.
17.2 Coverage Groups
In SystemVerilog, the covergroup construct is used to define a coverage group containing a
set of interrelated coverage elements. All coverage elements included in a coverage group
must share the same sampling event, even though the actual sampling of individual coverage
elements may be disabled at a given sampling event.
Coverage groups are identified by the following properties:
• Covergroup name
• Coverage sampling event
• Formal arguments for customizing each instance of the covergroup
• Coverage elements
• Includes coverage points and coverage crosses
• Bin definitions for each coverage element
• Transition coverage specified as bins for coverage points
• Covergroup customizations using covergroup options
SystemVerilog treats a coverage group definition as a user defined data type. This
means that a coverage group object does not exist unless it is explicitly instantiated (except
for coverage groups defined in classes, section 17.5). Multiple instances of a coverage group
can be created by using the name of a covergroup definition. The following program shows a
simple example of a coverage group definition and instantiation:
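The program is not reproduced above; a definition consistent with the description that follows (signal widths are assumptions, and line numbers follow the book's convention) is:

```systemverilog
1 module top;
2   bit reset, clk;
3   bit [7:0] sig_a, sig_b;
4
5   covergroup cg_ab @(posedge clk);
6     cp_a: coverpoint sig_a iff (!reset);
7     cp_b: coverpoint sig_b iff (reset);
8   endgroup
9   cg_ab cg_ab0 = new;
10 endmodule
```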
This program shows the definition for a coverage group cg_ab that tracks values
assigned to signals sig_a and sig_b during simulation runtime. The default clocking event for
cg_ab is defined to be the positive edge of clk. Coverage group cg_ab contains two coverage
points cp_a and cp_b, which collect information for signals sig_a and sig_b respectively. Coverage points cp_a and cp_b are sampled only when reset is set to 0 and 1 respectively. In this
configuration, coverage group cg_ab is activated at every positive edge of signal clk, but the
value of sig_a is sampled only if reset is set to 0, and the value of sig_b is sampled only if
reset is set to 1. An instance of cg_ab is declared and created by calling its new() method in
module top (line 9).
Coverage group sampling events and formal arguments are described in the following
subsections. Details of coverage elements are described in the sections 17.3 and 17.4. Cover-
age options are described in section 17.6.
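A program consistent with the description that follows; the stimulus values are assumptions, and line numbers follow the book's convention:

```systemverilog
1 module top;
2   bit clk = 0;
3   bit [7:0] a;
4   always #1 clk = ~clk;
5   covergroup cg_aa @(posedge clk);
6     coverpoint a;
7   endgroup
8   cg_aa cg_aa0 = new;
9
10  initial begin
11    #10 cg_aa0.stop();   // stop automatic activation at time 10
12    a = 8'h0F;
13    cg_aa0.sample();     // explicit activation
14    a = 8'h55;
15    cg_aa0.sample();     // explicit activation
16    cg_aa0.start();      // restart automatic activation
17  end
18 endmodule
```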
In this example, the predefined stop() method of the covergroup construct is used to stop
the automatic activation of coverage group cg_aa0 at time 10 (line 11). After this time, cg_aa0
is not automatically activated when its sampling event (i.e., (posedge clk)) occurs. Coverage
group cg_aa0 is explicitly activated using the predefined sample() method (lines 13, 15).
Automatic activation of cg_aa0 is restarted again by calling its predefined start() method
(line 16).
A coverage point is the fundamental unit of coverage collection, as it defines the connection
between the simulation environment and the coverage model. This means that all values
brought into the coverage collection engine are sampled through a coverage point. These
sampled values are then used to collect point, transition, and cross coverage information.
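A coverage group consistent with the description that follows (line numbers follow the book's convention):

```systemverilog
1 module top;
2   bit reset, clk;
3   bit [10:0] length, length1;
4
5   covergroup cg_length @(posedge clk);
6     cp_len: coverpoint length iff (!reset);
7     coverpoint length1 iff (!reset);
8     coverpoint (length + length1) iff (!reset);
9   endgroup
10  cg_length cg_length0 = new;
11 endmodule
```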
In this example, three coverage points are defined in coverage group cg_length (lines
6-8). A label can optionally be specified for a coverage point. For example, label cp_len is
specified for the coverage point defined on line 6. If no label is specified and the coverage
point source is a single variable, then the name of that variable is used as the label for that
coverage point. For example, the label for coverage point defined on line 7 is length1 and is
taken from the variable that it samples. If no label is assigned to a coverage point and the
coverage point source is an expression, then an automatically generated and tool specific
label is specified implicitly for that coverage point (line 8). The label specified for a cover-
age point is used to call the predefined methods of that coverage point. It is also needed when
specifying a cross coverage element that includes that coverage point.
A coverage point source is defined as the variable or expression whose result is sam-
pled for that coverage point. A coverage point source can be a single integral variable or the
result of an expression that produces an integral value. In the above example, coverage point
cp_len (line 6) samples variable length, coverage point length1 (line 7) samples variable
length1, and the coverage point defined on line 8 samples the result of expression
(length+length1). A coverage point domain is defined as the set of all possible values for its
source. For example, if the coverage point source is an enumerated type, then its domain is
the set of all literals defined explicitly for that enumerated type. If the coverage point source
is an M-bit integer, then its domain is the set of 2^M values that can be represented by that integer.
Each coverage point may include a coverage point guard expression. Upon occurrence
of the sampling event for the containing coverage group, a value for the coverage point is
sampled only if the sampling guard expression evaluates to true. In the above example, the
coverage points defined on lines 6-8 are sampled only if variable reset is set to zero (i.e., reset is
inactive). Different sampling guard expressions may be defined for coverage points in the
same coverage group. As such, the sampling time of each coverage point can independently
be customized to conditions that make it relevant.
Values sampled for a coverage point are organized into coverage bins. Two types of bins
can be defined for a coverage point: a value bin or a transition bin. A coverage point value
bin identifies a range of values for the coverage point while a coverage point transition bin
defines a set of transitions for the value of a coverage point. Value and transition bins are
described in section 17.3.2.
In summary, the number of times and the scheduling region where a coverage group is
activated can be controlled by using the strobe option. When the strobe option is not used,
the coverage group is activated once for each triggering of its event expression even if this
happens multiple times in a time-slot. When the strobe option is used, the coverage group is
activated only once at the end of the time-slot in which its event expression was triggered at
least once.
In SystemVerilog, values sampled for a coverage point are tightly connected with the
activation time of the containing coverage group. Essentially, sampled values are the value of
the coverage point source at the time of activation. More fine-grained control of this behav-
ior may, however, be needed, depending on the specific coverage collection requirements.
Values sampled for coverage collection may need to be taken from any of the following
regions of the current time-slot:
• Stable values before entering the current time-slot
• Values in the observed region of the current time-slot, resulting in stable values of
sampled variables in the current time-slot before any property pass or fail code is exe-
cuted
• Stable values at the end of the current time-slot
Using the strobe option results in a coverage group being activated in the postponed
region of the cycle in which its clocking event is triggered. Given the default behavior of
coverage collection, in which variables are sampled at the time of activation, this means that
sampled values are taken from the postponed region of the current time-slot, yielding the
final stable values of the sampled variables in the current time-slot.
Sampling stable values from the preponed or observed regions can be accomplished by
using a clocking block. An example of this approach is shown in the following program:
Program 17.4: Using clocking blocks to control sampling time of coverage points
1 module top;
2   bit reset, clk = 0;
3   bit [10:0] length, length1;
4
5   clocking cb @(posedge clk);
6     input #1step preponed_length = length;
7     input #0 observed_length = length;
8   endclocking
9
10  covergroup cg_length @(posedge clk);
11    cp_pl: coverpoint cb.preponed_length iff (!reset);
12    cp_ol: coverpoint cb.observed_length iff (!reset);
13  endgroup
14
15  cg_length cg_length0 = new;
16 endmodule
In this program, reading the value of variable cb.preponed_length returns the value of
variable length in the preponed region of the last time-slot in which clocking event (posedge
clk) was triggered. Similarly, reading the value of variable cb.observed_length returns the
value of variable length in the observed region of the last time-slot in which clocking event
(posedge clk) was triggered. Collecting coverage on these variables (lines 11, 12) leads the
collected coverage to correspond to the values of variable length in the preponed and
observed regions respectively.
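A definition consistent with the description that follows; the bin names on lines 15-17 and the module scaffolding are assumptions, and line numbers follow the book's convention:

```systemverilog
1 module top;
2   bit clk;
3   bit [10:0] length;
4
5   covergroup cg_length @(posedge clk);
6     cp1: coverpoint length {
7       bins bb1 = {34, [1:10], [8:12], 34, 38};
8       bins bb2[2] = {[111:120]};
9       bins bb3[3] = {[111:120]};
10      bins bb4[] = {[121:220]};
11      bins bb5 = default;
12      bins bb6[] = default;
13    }
14    cp2: coverpoint length {
15      bins low_range = {[$:10]};
16      bins high_range = {[1000:$]};
17      bins full_range = {[$:$]};
18    }
19  endgroup
20 endmodule
```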
This program shows the definition of coverage group cg_length, which contains coverage point definitions cp1 and cp2. The following aspects of coverage point bin definition are
highlighted in this example:
• Multiple value bins may be defined for each coverage point (lines 7-12).
• Bin domain specification (i.e., the syntactical representation of bin values) can be
given using a mix of single values (e.g., 34, 38 on line 7) and/or ranges of values (e.g.,
[1:10],[8:12] on line 7).
• A bin domain specification may include duplicated values or overlapping ranges of
values, leading to the same value being specified multiple times. All such multiply
included values are assumed to have been specified only once. For example, the overlapping ranges [1:10],[8:12] on line 7 can be written equivalently as [1:12]. Also, note
that for bin bb1 (line 7), value 34 is assumed to have been specified only once even
though it is included in its bin member specification twice.
• A sized bin array notation can be used to create a fixed number of bins for a range of
values (lines 8, 9). If the number of bins (NB) divides the number of values in the bin
domain specification (NV) evenly (i.e., NV%NB==0), then each bin will have (NV/NB)
values in its domain. For example, the bin definition on line 8 creates two bins (bb2[0]
and bb2[1]), whose members are defined by ranges [111:115] and [116:120] (having sizes
5 and 5) respectively. If the number of bins does not divide the number of values
evenly (i.e., NV%NB>0), then the remaining values are included in the last bin, which
will then contain (NV/NB + NV%NB) domain values. For example, the bin definition on
line 9 creates three bins bb3[0], bb3[1], and bb3[2], whose members are defined by
ranges [111:113], [114:116], and [117:120] (having sizes 3, 3, and 4), respectively. If NB is
larger than NV, then one bin is created for each value.
• An unsized bin array notation can be used to create a bin array whose number of bins
is derived from the number of values in the bin domain specification. In this case, one
bin is created for each value included in the bin domain specification, with each bin
having a domain size of 1. The square bracket notation (i.e., "[ ]") is used for creating
unsized bin arrays. For example, the bin definition on line 10 creates 100 bins
(bb4[0], ..., bb4[99]), each having a single member taken consecutively from the range
[121:220].
• A bin definition may optionally include a bin guard expression. The count attribute of
a bin (or any of the bins in its bin array) is incremented only if the condition in its bin
guard expression is satisfied.
• Bin domains for bins of the same coverage point may overlap. In this case, the count
attributes of all bins that include the overlapping value are incremented if that value
is sampled during coverage collection.
• The default keyword refers to all values in the domain of a coverage point that are
not included in the domain of any of the bins specified for that coverage point. For
example, the definition on line 11 defines a single bin bb5 having as its members all
values not already included in any of the other bin definitions for coverage point cp1.
The definition on line 12 defines a bin array bb6 having one bin for every value not
already included in any of the other bin definitions for coverage point cp1.
• Keyword "$" can be used to refer to the beginning or end value of a coverage point
domain. As such, the full domain of a coverage point can be represented by range
[$:$]. This notation is used in the above example to define bins describing a beginning
range, an end range, and the full domain of variable length (lines 15-17).
SystemVerilog allows bins to be marked as illegal or irrelevant to coverage collection.
Illegal bins correspond to values in the coverage point domain that should not occur during
the simulation process. Ignored bins correspond to values that do not contribute to coverage
collection and should be ignored for coverage collection purposes. The following program
shows examples of illegal and ignore bin definitions:
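A sketch consistent with the description that follows, assuming the clk and 10-bit length signals from the earlier examples; the value ranges are illustrative:

```systemverilog
covergroup cg_marked @(posedge clk);
  cp3: coverpoint length {
    bins low_vals = {[0:99]};
    bins high_vals = {[100:1023]};
    illegal_bins nogood_bins = {[1020:1023]};  // runtime error if sampled
    ignore_bins nocare_bins = {[0:3]};         // excluded from all bins
  }
endgroup
```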
In this example, bins nogood_bins and nocare_bins are defined as illegal and ignored
bins, respectively. A runtime error message is generated if a member of an illegal bin is sampled during the simulation runtime, even if that value is a member of another bin for the
same coverage point. A value specified as a member of an ignored bin is ignored in every
bin having that value as a member. Also, keyword default cannot be used to define the contents of an ignored bin.
Bins are automatically created for any coverage point that does not have any explicitly
defined bins. In this case, an implicit sized bin array is created for the coverage point. The
number of bins in this bin array is taken from the coverage group option auto_bin_max. Bin
domain specification for this implicit bin definition is assumed to be the full domain of the
coverage point (i.e., all possible values for the coverage point source). Given the bin array
size and number of possible values for the coverage point, domain for each bin is decided
according to the creation rule for sized bin arrays described earlier in this section. The auto-
matic creation of bins is shown in the following example:
16 endgroup
17 endmodule
In this example, coverage point cp4 has only illegal and ignore bins and no explicitly
defined bins. In this case, an implicit bin creation statement (line 8) is used for creating the
bins for cp4. The number of bins is taken from option auto_bin_max, and the full domain of
the coverage point, excluding the values indicated in the ignore and illegal bins, is divided
between these bins, forming the domain for each bin. The full domain of coverage point cp4 is
[0:1023] given that length is a 10-bit value. For an enumerated data type, the full domain of a
coverage point consists of the set of all literals for that enumerated type.
It is possible to modify the value for option auto_bin_max for each coverage point.
This syntax is shown for coverage point cp5, where the number of bins is set to 3.
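Assuming the covergroup context from the surrounding example, the per-coverage-point override might read:

```systemverilog
cp5: coverpoint length {
  option.auto_bin_max = 3;  // three automatically created bins for cp5 only
}
```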
The values of a transition set or a transition sequence set at each cycle can be given
using the same notation used for bin domain specification for value bins (i.e., using a single
value, a range of values, formal arguments, etc.).
SystemVerilog provides repeat operators for specifying transition sequence sets. These
constructs include:
• Consecutive repeat operator
• Goto repeat operator
• Non-consecutive repeat operator
The behavior described by each operator is similar to those defined for sequence repeat
operators (section 15.3.3). The following examples show the use of these repeat operators.
Abbreviation ATS (Any Transition Sequence) represents a transition sequence of any num-
ber of cycles and any values for each cycle.
Consecutive Repeat: (12 [*3]) represents transition sequence:
(12 => 12 => 12)
Goto Repeat: (12 [-> 3]) represents transition sequence:
(ATS => 12 => ATS => 12 => ATS => 12)
Non-Consecutive Repeat: (12 [= 3]) represents transition sequence:
(ATS => 12 => ATS => 12 => ATS => 12 => ATS)
A range of repeat values can be given for a repeat operator. Examples of these operators
are shown below:
Consecutive Range Repeat: (12 [*2:3]) represents transition sequences:
(12 => 12), (12 => 12 => 12)
Goto Range Repeat: (12 [-> 2:3]) represents transition sequences:
(ATS => 12 => ATS => 12),
(ATS => 12 => ATS => 12 => ATS => 12)
Non-Consecutive Range Repeat: (12 [= 2:3]) represents transition sequences:
(ATS => 12 => ATS => 12 => ATS),
(ATS => 12 => ATS => 12 => ATS => 12 => ATS)
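These repeat operators appear inside transition bin definitions. The following is a hedged sketch, with variable and bin names chosen for illustration:

```systemverilog
bit [7:0] mode;

covergroup cg_mode_trans @(posedge clk);
  coverpoint mode {
    bins triple      = (12 [*3]);          // 12 => 12 => 12
    bins third_visit = (12 [-> 3]);        // goto repeat: third occurrence of 12
    bins non_consec  = (12 [= 3]);         // non-consecutive repeat
    bins ramp        = (12 [*2:3] => 30);  // range repeat followed by value 30
  }
endgroup
```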
This program shows the definition of coverage group cg_length containing coverage
point cp_length. The following aspects of transition bin definition are highlighted in this
example:
• Multiple transition bins may be defined for a coverage point (lines 7-14).
• Transition bin domain specification (i.e., the syntactical representation of bin transi-
tion sequences) can be given using a mix of single transitions (e.g., 10=>20 on line 7)
and/or sets of transitions (e.g., 11,12=>21,22 on line 7).
• An unsized bin array notation can be used to create a bin array whose number of bins
is derived from the number of transition sequences in the transition bin domain spec-
ification. In this case, one bin is created for each transition sequence included in the
bin domain specification, with each bin having a size of 1. The square bracket nota-
tion (i.e., "[ ]") is used for creating unsized bin arrays. For example, the bin definition
on line 9 creates four bins (tb3[0], ..., tb3[3]), each having a single transition domain
taken respectively from transition sequences (12=>12=>30), (12=>12=>40), (12=>12=>
12=>30), and (12=>12=>12=>40).
• A transition bin definition may optionally include a bin guard expression. The count
attribute of a bin (or any of the bins in its bin array) is incremented only if the con-
dition in its guard expression is satisfied.
• Domains for transition bins specified for the same coverage point may overlap. In
this case, the count attribute of all bins that include the overlapped transition
sequence are incremented if the overlapped transition sequence is observed during
coverage collection.
• The default sequence keyword refers to all transition sequences in the domain of a
coverage point that are not included in a domain of any of the transition bins speci-
fied for that coverage point. The bin definition on line 14 defines a single bin tb_a hav-
ing as its domain all transitions not already included in any of the other bin domains
for coverage point CP1. It is illegal to use this keyword with an unsized bin array defi-
nition (line 15).
• Unsized bin arrays cannot be used for unbounded length transition sequences (lines
16, 17).
• Illegal and ignore bins can also be specified for transition bins (lines 12, 13).
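A condensed sketch of the features listed above can be written as follows; bin names, values, and the sampling event are illustrative rather than the book's exact listing:

```systemverilog
bit [9:0] length;

covergroup cg_length @(posedge clk);
  cp_length: coverpoint length {
    bins tb1   = (10 => 20), (11,12 => 21,22);  // mix of single and set transitions
    bins tb3[] = (12 [*2:3] => 30,40);          // unsized array: 4 bins, one per sequence
    bins tb_a  = default sequence;              // all transitions not covered above
  }
endgroup
```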
Transition and value bins described in this section provide a mechanism for collecting
coverage on the value of a coverage point in one or across multiple sampling cycles. Cross
coverage is used to collect information about the simultaneous values observed for two or
more coverage points. Cross coverage is described in the next section.
A cross coverage element collects information on simultaneous values (i.e., values at the
same sampling event) of two or more coverage points. As such, cross coverage is used to col-
lect information about the correlation between two or more variables or expression results.
In this example, coverage group cg_length contains two coverage points cp_A and cp_B.
Cross coverage element xc_AB is defined to collect coverage information on simultaneous
values sampled for coverage points cp_A and cp_B. A cross coverage element must be
defined in terms of coverage points. However, if a cross coverage is defined in terms of an
integral variable, then an implicit coverage point is created for that variable and the cross
coverage element is defined based on this implicitly defined coverage point. Cross coverage
element xC_AC (line 10) is defined in terms of coverage point cp_A and variable length_C. In
this case, an implicit coverage point is created for variable length_C which in tum is used in
creating the cross coverage element xc_AC.
A cross coverage element may optionally include a cross coverage guard expression
(lines 9, 10). A cross coverage element is evaluated if the expression for its guard expression
evaluates to true. Otherwise, the cross coverage element is ignored.
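The arrangement described above can be sketched as shown below; variable names follow the surrounding text, while widths, the clock, and the guard signal are assumed for illustration:

```systemverilog
bit [9:0] length_A, length_B, length_C;
bit reset;

covergroup cg_length @(posedge clk);
  cp_A: coverpoint length_A;
  cp_B: coverpoint length_B;
  xc_AB: cross cp_A, cp_B;                   // cross of two coverage points
  xc_AC: cross cp_A, length_C iff (!reset);  // implicit coverpoint created for length_C
endgroup
```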
A cross coverage source is a coverage point used in defining that cross coverage ele-
ment. Sources of a cross coverage element must be from the same coverage group. It is ille-
gal to define a cross coverage element whose sources are taken from different coverage
groups.
Data collected for cross coverage elements is organized in terms of the value bins of its
source coverage points. This means that a cross coverage element does not keep track of its
coverage point values, but rather which value bins in these coverage points had a hit. The cross
domain is the set of all bins in the cross product of bins for each cross element source. For example, if
coverage point cp_a has bins ba1, ba2, and coverage point cp_b has bins bb1, bb2, then the
cross domain for xc_ab (cross of cp_a and cp_b) is given by the set {(ba1,bb1), (ba1,bb2),
(ba2,bb1), (ba2,bb2)}. Coverage point value bins marked either as illegal or ignore are not
included in defining the cross domain. A cross bin has a hit if all bins in its definition have a hit.
For example, bin (ba1,bb1) is hit at a sampling event if both bins ba1 and bb1 have a hit in that
sampling event. SystemVerilog provides special constructs for defining bins for a cross ele-
ment as a subset of bins in its domain. Cross coverage bins are described in the following
subsection.
Coverage group cg_length_speed defines coverage points cp_len and cp_sp. These cov-
erage points represent coverage information collected for the length of a packet and the
speed with which this packet was received (exact method of calculating this speed is not rel-
evant to this context). Each coverage point includes a number of bin definitions that repre-
sent length and speed qualities that may be of interest for coverage collection purposes.
Cross coverage element xc_length_speed (line 20) is defined in terms of coverage points
cp_len and cp_sp. Given that no bins are explicitly defined for xc_length_speed, the full set
of bins in domain of cross xc_length_speed is implicitly created. The domain for cross ele-
ment xc_length_speed is shown in figure 17.1, where all squares in this diagram correspond to
one cross bin for xc_length_speed. Note that value bins way_too_slow and way_too_short are
not included in this diagram, since they are marked as illegal and ignore, respectively. The
highlighted areas in this diagram show the grouping of individual bins into larger bins by
using special syntax introduced later in this section.
In this figure, every row corresponds to one value bin of coverage point cp_len, and
every column corresponds to one value bin of coverage point cp_sp. Every square in this grid
corresponds to a cross coverage bin for cross coverage element xc_speed_length.
[Figure 17.1: Cross coverage bin space for xc_length_speed. Rows correspond to the value bins of cp_len (too_short, short, regular, long, too_long); columns correspond to the value bins of cp_sp.]
The grid in figure 17.1 shows the full domain of cross coverage element
xc_speed_length. For a cross element composed of three coverage points, the figure would be
drawn in three dimensions. SystemVerilog allows coverage bins for a cross coverage element
to be defined as a subset of its domain. To that end, SystemVerilog provides special syntax
for defining the desired subset. Specifying a subset for a two dimensional cross coverage ele-
ment is described using the diagram in figure 17.1. Such specification for higher dimension
cross coverage elements follows naturally from this discussion.
The following forms can be used to define any subset of bins in figure 17.1:
• All bins in a row (or a set of rows)
• All bins in a column (or a set of columns)
• All bins at the intersection of a row and a column (or a set of rows and columns)
• Any combination of the above forms
SystemVerilog provides the binsof, conjunction, and disjunction operators to define any
of the subsets outlined above.
The binsof operator is used to select a subset of value bins for the bins of a coverage
point (i.e., a number of rows or columns in figure 17.1). The syntax for this construct is:
binsof(bin_expression) intersect (range_expression)
Operator "!" can be used to negate the sense of selected bins so that bins specified using
this notation are excluded from the set of selected bins. This operator can be applied only to
a binsof construct. The bin conjunction operator "&&" is used to define the intersection of
bins selected using the binsof construct (i.e., intersection of rows and columns in figure
17.1). The bin disjunction operator "||" is used to combine bins selected using the binsof and
conjunction operators. Examples of bin selection using this syntax are shown in table 17.1.
Table 17.1: Coverage Point Bin Selection for Cross Coverage Bin Definition
The program below shows the definitions for cross coverage bins shown in figure 17.1.
Any of the bins defined in this program can be marked as ignore or illegal. In case of an
overlap, the ignore and illegal settings take precedence in handling a sampled cross value.
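A sketch of such cross bin definitions is shown below. Bin names and value ranges are illustrative, not the book's exact listing:

```systemverilog
bit [9:0] length;
bit [7:0] speed;

covergroup cg_length_speed @(posedge clk);
  cp_len: coverpoint length { bins short_p = {[0:2]};  bins regular = {[3:10]};  }
  cp_sp : coverpoint speed  { bins slow    = {[1:9]};  bins fast    = {[10:99]}; }
  xc: cross cp_len, cp_sp {
    bins slow_regular     = binsof(cp_sp.slow) && binsof(cp_len.regular); // one square
    bins fast_col         = binsof(cp_sp.fast);                           // a whole column
    ignore_bins short_row = binsof(cp_len.short_p);                       // drop a row
  }
endgroup
```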
A coverage collection group can be embedded in a class declaration. This embedding is use-
ful in that the coverage group can freely collect coverage on private and protected class
members. In addition, each class instance can include a new instance of the coverage group,
allowing coverage information to be collected for each instance of a class object.
A coverage group included in a class is treated the same as other composite data objects
that must be explicitly created (e.g., a class object). This behavior results in the following
restrictions in dealing with coverage groups that are embedded inside class declarations:
• The name of a coverage group declared inside a class cannot be used for any other
class member (data object or coverage group).
• A coverage group declaration inside a class is one and the same as declaring a pointer
to that coverage group. As such, the name of that coverage group cannot be used to
declare a new pointer to that coverage group. This is in contrast to using coverage
groups in other blocks where the name of a coverage group declaration must be used
as a data type to instantiate coverage groups of that type.
• A coverage group declared inside a class has to be explicitly allocated. Otherwise, no
coverage group is created and no coverage is collected for that class.
The following program shows an example of coverage groups embedded in a class dec-
laration:
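The listing can be sketched along the following lines; member widths, the sampling events, and line numbering are illustrative reconstructions consistent with the surrounding discussion:

```systemverilog
class packet;
  bit       clk;
  bit [3:0] data;
  bit [1:0] subp;

  covergroup cg_subp @(posedge clk);
    coverpoint subp;
  endgroup

  covergroup cg_data @(posedge clk);
    coverpoint data;
  endgroup

  function new();
    cg_subp = new();  // explicit allocation; otherwise no coverage is collected
    cg_data = new();
  endfunction
endclass
```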
This program highlights the following aspects of embedding coverage groups inside a
class declaration:
• Coverage groups cg_subp and cg_data are embedded inside class packet.
• Classes may contain multiple coverage group declarations.
• Each coverage group declaration is assumed to be an implicit creation of a pointer to
a coverage group object. The actual coverage group must be allocated explicitly. In
this example, coverage groups cg_subp and cg_data are allocated inside the class con-
structor (lines 20, 21).
• Names cg_subp and cg_data cannot be used for any other class member.
• Any variable referenced inside a coverage group declaration must be declared before
the declaration of that coverage group. In the above example, class properties clk, data
and subp are declared before coverage groups that reference these variables.
A class derived from a base class that includes a coverage group inherits that coverage
group as well. The definition of that coverage group can be overridden by using its
name to define a new coverage group in the derived class. In this case, the coverage group in
the parent class can still be accessed using the super keyword. This access is, however,
allowed only for the immediate parent class of a derived class.
17.6 Coverage Options
A number of coverage options can be specified along with coverage constructs. These
options can be specified at the following syntactical levels:
• Coverage groups
• Coverage points
• Cross coverage elements
In addition, options can be specified for a coverage group type or for each instance of
that coverage group. Also, some options specified at the coverage group level imply default
values for the same options at the coverage point or cross element levels, unless overridden
at that level by providing a new value for that option.
Table 17.2 lists the options allowed for coverage constructs, along with the range of
valid values for each option. Options differ in whether they are available at the coverage
group level as type or instance options, whether a value given at the coverage group level
serves as a default for the coverage points and cross elements in that group, and whether
they are available as instance or type options for coverage points and cross elements.
17.7 Coverage Methods
SystemVerilog provides predefined methods that can be called for coverage constructs, as
well as system tasks and functions used for managing the coverage database.
The following system tasks and functions are provided in SystemVerilog:
• $set_coverage_db_name(db_name)
• $load_coverage_db(db_name)
• $get_coverage()
Task $set_coverage_db_name sets the name for the file that stores the collected cover-
age results. Task $load_coverage_db loads coverage information from the name passed as an
Table 17.2: Coverage Options

name (string): Covergroup name.
weight (int): Weight of each construct for grade calculation. Weights for type and instance are used for type and instance grading, respectively.
goal (0-100): Target coverage defined as a percentage.
comment (string): Text to include in the coverage report.
at_least (int): A bin must have at least this many hits before it is considered covered.
auto_bin_max (int): Maximum number of automatically generated bins for coverage points.
cross_auto_bin_max (int): Maximum number of automatically generated bins for cross elements.
cross_num_print_missing (int): Number of missing (not covered) cross product bins that must be saved to the coverage database and printed in the coverage report.
detect_overlap (bool): Issue a warning if bins defined for a coverage point include overlapped members.
per_instance (bool): Collect information for each instance of the coverage group.
strobe (0/1): Activate the coverage group only once, at the end of the time-slot.
argument. Function $get_coverage returns a real number between 0 and 100, giving a mea-
sure of overall coverage collected so far.
Predefined tasks and functions for coverage constructs are shown in table 17.3, which
lists each method along with a description of its behavior. Methods are defined at the syntac-
tical levels of coverage groups, coverage points, and cross elements.
Table 17.3: Coverage Methods

void sample(): Activates (samples) a coverage group.
real get_coverage(): Returns the type coverage grade.
real get_inst_coverage(): Returns the instance coverage grade.
void set_inst_name(string): Sets the name for a coverage group instance.
void start(): Starts collecting coverage information.
void stop(): Stops collecting coverage information.
real query(): Returns the cumulative coverage information (for the coverage group type as a whole).
real inst_query(): Returns the per-instance coverage information for this instance.
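A sketch of how these methods might be used; the coverage group type cg_pkt and signals reset and pkt_valid are assumed for illustration:

```systemverilog
cg_pkt cov = new();  // assumed coverage group type

initial begin
  cov.set_inst_name("port0_cov");  // name this instance in coverage reports
  cov.stop();                      // suspend collection during reset
  @(negedge reset);
  cov.start();                     // resume collection
end

always @(posedge pkt_valid)
  cov.sample();                    // explicit sampling

final
  $display("instance grade: %0.2f", cov.get_inst_coverage());
```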
The cover directive is used to include the results of evaluation of a sequence or property
in coverage results. A pass statement can be specified for this statement, which is executed
anytime the sequence succeeds or the property evaluates to true.
If the cover variant of the assertion statement is used with a property expression, then cov-
erage is collected on the following conditions:
• Number of times property evaluation is attempted
• Number of times the property succeeds
• Number of times the property succeeds vacuously
• Number of times property fails
The pass statement is called for each success of the property expression.
If the cover variant of the assertion statement is used with a sequence expression, then cov-
erage is collected on the following conditions:
• Number of times sequence evaluation is attempted
• Number of times the sequence matches
The pass statement is called each time the sequence matches, but at most once in each
time-slot. Note that this behavior is different from that of the assert statement, where the
pass statement is called only on the first match of the sequence expression, and at most once
in that time-slot. As an example, consider a sequence that, when started at time 10, produces
matches at times 11 and 12. With an assert statement on such a sequence, the pass statement
is called only once, at time 11, since sequence evaluation stops because of the implicit use of the
first_match operator in evaluating a property. With a cover statement on this sequence, the
pass statement is called once at time 11 and once at time 12.
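The cover directive described above can be sketched as follows; signal names and the timing window are illustrative:

```systemverilog
sequence s_req_ack;
  req ##[1:3] ack;   // ack follows req within 1 to 3 cycles
endsequence

// the pass statement is executed whenever the covered property succeeds
c_req_ack: cover property (@(posedge clk) s_req_ack)
  $display("req/ack observed at time %0t", $time);
```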
CHAPTER 18 Coverage Planning, Implementation, and Analysis
Verification project productivity is not measured by the fact that verification progress is
made, but rather by how fast such progress is made. Coverage collection provides the sense
of direction necessary to guide a coverage-driven verification flow so that coverage progress
is made at the fastest possible rate. Given the direct impact of coverage collection results on
verification productivity, it is important to take the steps necessary for creating a well
designed and executed coverage collection flow.
A coverage collection flow consists of the following iterative steps:
• Coverage design and implementation
• Coverage collection
• Coverage grading
• Coverage analysis
In the coverage design phase, a coverage plan is produced. This coverage plan outlines
the types and quantity of information that must be collected in order to gain confidence in
full execution of the verification plan. SystemVerilog provides a rich set of coverage collec-
tion constructs leading to choices in how coverage collection can be implemented. As such,
an important part of coverage design phase is the decision of how to map coverage collection
targets into an actual implementation. Coverage grading refers to deriving a quantitative
measure from the set of data sampled during coverage collection. Coverage analysis pro-
vides a road map for using the coverage information collected so far to guide the following
simulation and coverage execution steps.
Chapter 17 provides a detailed description of SystemVerilog constructs for coverage
implementation. This chapter discusses coverage planning and execution phases and the
implementation of this strategy using constructs described in chapter 17. Section 18.1 gives
an example verification plan that is used as the source of examples discussed in this chapter.
Section 18.2 describes how coverage collection targets identified from a verification plan are
used to create a coverage implementation. Coverage grading and analysis are discussed in
sections 18.3 and 18.4, respectively.
[Table: XBar verification plan scenarios (excerpt). Port agents support Proto_A and/or Proto_B; idle-time ranges such as 0-555 and 556-5555 group the packet kinds. Verify that the idle time before each packet does not exceed the maximum time allowed for that kind: Data Request/Reply protocol A > 1000, Data Request/Reply protocol B > 3333, Beacon Request/Reply > 5555.]
The first section describes scenarios involving frames flowing between source and des-
tination ports. Note that this listing does not include reply type transfers, since it is implicitly
assumed that such transfers are generated in response to request type transfers, and the auto-
matic checking and scoreboarding mechanisms in the verification environment verify that
the expected behavior for these transfer kinds is followed. Also note that the list of data
request frames reflects the assumption that not all port agents support both protocols proto_A
and proto_B (See figure 18.1 for protocols supported by each port agent).
The second section lists scenarios that are required for checking idle time behavior at
each receive port of the XBar design. This section highlights the requirement that each group
of packet kinds has a different maximum allowed idle time. Note also that the ranges of
allowed delay values for each group is defined differently.
The third section lists the set of legal packet kind sequences that can be observed at any
of the XBar receive ports. The list of legal transitions is derived from the set of guidelines
described in the XBar communication protocol (section 14.1). All transitions starting with an
SOF packet (e.g., an SOF packet followed by another SOF packet) and not listed in the table are
considered illegal and should not occur during the simulation process.
The fourth section lists "interesting" packet kind transitions at the transmit port of the
XBar design. Note that at XBar transmit ports, the number of possible packet kind sequences
is more than can efficiently be enumerated (unlike those at the XBar receive port). The rea-
son is that packets may arrive from any of the other ports and hence the number of packet
kinds and their possible orderings is a large number (e.g., an SOF packet followed by another
SOF packet is possible at the transmit port of the XBar design, since these SOF packets may
have arrived from different source ports).
The scenario categories shown in this table are used in the following sections to moti-
vate the need for, and illustrate, different coverage implementation approaches.
• Point coverage: How many times an integral variable (e.g., integer, logic, bit, etc.)
held a specific value (e.g., 4), or a value in a specific range (e.g., a number between 2
and 12).
• Transition coverage: What was the sequence of values assigned to a single integral
variable (e.g., how many times the value for an integer variable changed from 3 to 12
to 14, or how many times the value for an integer variable changed from a value in the
range 12-99 to a value in the range 33-123 to a value in the range 144-188).
• Cross coverage: What simultaneous values were assigned to two or more integral
variables (e.g., how many times two integer variables contained values 3 and 5 at the
same time, or values in ranges 2-13 and 33-132 at the same time).
A coverage engine does not really understand or have any knowledge of composite data
objects (e.g., packets) or multi-cycle behaviors (e.g., bus read cycle) that are usually the sub-
ject of coverage collection. This means that before coverage can be collected on any simula-
tion-related behavior, that behavior must be made identifiable through one of the information
types outlined above. For example, collecting coverage on how many times a system reset
has occurred is straightforward, since it constitutes point coverage, where the number of
positive edges of the reset signal is counted throughout the simulation process. Coverage
collection for abstract behaviors is, however, more involved. Consider a memory read opera-
tion that takes place across multiple simulation cycles. Collecting coverage on the number of
times that a memory read operation is performed for an address in the range Ox1112-0x1122,
requires one or more integral values to be created for storing information about the latest bus
operation type and its memory address. Only then can the required coverage information be
collected by sampling these integral variables.
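The memory read example above can be sketched as glue variables updated by a bus monitor; all names and widths below are illustrative assumptions:

```systemverilog
typedef enum {OP_READ, OP_WRITE} bus_op_e;

// glue variables written by the bus monitor at the end of each operation
bus_op_e   last_op;
bit [15:0] last_addr;
event      bus_op_done;   // triggered when a bus operation completes

covergroup cg_bus_op @(bus_op_done);
  coverpoint last_op;
  addr: coverpoint last_addr {
    bins target_range = {['h1112:'h1122]};  // the address range of interest
  }
  rd_in_range: cross last_op, addr;
endgroup
```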
The following cases must be handled in making simulation behaviors identifiable to
coverage collection:
• Collecting coverage on abstract data types, where each data item travels over multi-
ple simulation cycles (e.g., a data packet).
• Synchronizing the sampling of two asynchronous (i.e., timing independent), yet veri-
fication-dependent behaviors (e.g., injection of a packet of type A at port 1 followed
within 10 ns by injection of packet of type B at port 2).
Abstract data items, referred to in the first case, fall into two categories: 1) data items
that are usually a part of the implementation of the verification environment (e.g., an XBar
packet), and 2) data values that are specific only to coverage collection (e.g., time lapse
between two events). In the first case, the verification environment has already been imple-
mented using these abstract data items, so building coverage-specific glue logic for these
data items is not required. In the second case, special code must be added to the environment
to make this information available to coverage collection. For example, in the XBar verifica-
tion plan, it is not necessary to build special coverage related glue logic since the monitor
attached to each port already extracts these packets in a form that can readily be used in cov-
erage collection. In collecting coverage on scenarios related to idle times, however, a special
coverage related variable must be added to store the idle time before each packet so that
this idle time value can be sampled at the same time as coverage is collected on a packet.
The synchronization between two time-independent behaviors is a common require-
ment in implementing coverage collection. Consider a verification scenario for the XBar
design whose activation requires that two beacon request transfers are initiated to the same
port within 10 ns of each other. Collecting coverage on such conditions requires coverage
glue logic to be added to the environment.
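Such glue logic might record the time of the last beacon request per destination port so that a coverage point can detect back-to-back requests; all names and the event source below are illustrative assumptions, not part of the XBar environment:

```systemverilog
// hypothetical glue logic: detect two beacon requests to the same
// destination port within 10 ns of each other
time last_beacon_time [4];   // last beacon request time, per port
bit  within_window;          // sampled by the coverage group

always @(beacon_req) begin   // beacon_req: event from the monitor (assumed)
  within_window = (($time - last_beacon_time[dest_port]) <= 10);
  last_beacon_time[dest_port] = $time;
end

covergroup cg_beacon_sync @(beacon_req);
  coverpoint within_window { bins back_to_back = {1}; }
endgroup
```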
6 class xbar_packet;
7 rand bit [3:0] data;
8 rand bit [3:0] src_addr;
9 rand bit [3:0] dest_addr;
10 function packet_kind get_packet_kind(); // implementation not shown
11 endfunction
12 endclass
13
14 class xmt_monitor;
15 event col_coverage;
16 rand xbar_packet pkt;
17 int port_num;
18
19 covergroup cg_xbar_packet @(col_coverage);
20 option.per_instance = 1;
21 pkind: coverpoint pkt.get_packet_kind() iff (!reset) {
22 bins pkind_bins[] = {[0:$]};
23 ignore_bins nocare_bins = {SOF, BEACON_REPLY,
24 DATA_REPLY_A, DATA_REPLY_B};
25 illegal_bins nogood_bins = {INVALID};
26 }
27 endgroup
28
29 function new(int pnum);
30 cg_xbar_packet = new();
31 pkt = new();
32 port_num = pnum;
33 endfunction
34 endclass
35
36 xmt_monitor mon0 = new(0); // monitor instance for port 0
37 xmt_monitor mon1 = new(1); // monitor instance for port 1
38 xmt_monitor mon2 = new(2); // monitor instance for port 2
39 xmt_monitor mon3 = new(3); // monitor instance for port 3
40 endmodule
The above implementation shows a generic description of the XBar packet (lines 6-12).
This implementation includes a function declaration that extracts the packet kind from the
content of the packet (lines 10-11). A simplified class-based implementation of an XBar
receive port monitor is also shown (lines 14-34). For illustration purposes, all monitor
instances are created in this top level example (lines 36-39). In the real environment, each
monitor instance would exist in its appropriate verification component (see section 13.6).
The above implementation defines a coverage point based on the result of the packet
kind returned by function get_packet_kind(). One bin is created for each packet kind (line 22). An
ignore bin is defined, reflecting the fact that only packets relevant to scenario creation should
be tracked. Note that SOF packet kind is also placed in the ignore bin, since checkers in the
environment guarantee that any packet of type DATA_REQ_A or DATA_REQ_B is preceded by
an SOF packet. An illegal bin is also defined to include the INVALID packet kind. Note that
instance based coverage collection is activated by using the appropriate option (line 20).
SystemVerilog coverage query functions allow coverage information to be collected for
each monitor instance as well as for the coverage type as a whole. Instance coverage infor-
mation for this example gives information on how many of each packet kind was observed at
the port corresponding to that instance. Type coverage information for this example provides
information on how many packets of each kind were observed at all receive ports.
In the next section, this example is extended to include a cross coverage element
between packet type and its destination port. The result provided for this cross element is
more interesting in that instance coverage information for this cross element provides infor-
mation on how many packets of each kind were sent from the port associated to that instance
to each destination port, and type coverage information on this cross element provides infor-
mation of how many of each packet kind was received at each destination port from all
source ports.
Multi-instance coverage implementation can be emulated through a cross coverage def-
inition by introducing a variable corresponding to each location where a coverage group
instance is placed. This approach, however, defeats the main advantage of an instance based
implementation where coverage collection is viewed as tightly connected to structural blocks
in the design and the verification environment.
22
23 endgroup
The implementation shown above is intended to replace the one shown in program 18.1.
In this implementation, a coverage point corresponding to the destination address of each
observed packet is added to the coverage group (lines 9-11). Appropriate definitions are
used to define illegal bins (line 11), and to also define a bin array for this coverage point (line
10). Cross coverage element kind_cross_dest, defined using destination address and packet
kind is also added to this coverage group (lines 13-21). The following observations hold
about this implementation:
• The instance based coverage results for kind_cross_dest gives information on how
many packets of each kind have been sent from the port corresponding to the given
instance to each destination port. The type coverage result for kind_cross_dest gives
information on how many packets of a given kind are sent from all ports to a given
destination address.
• Bin array breq (line 14) contains four bins, one for each destination port. Each bin
provides a count of how many BEACON_REQ packets have been sent to the destina-
tion address corresponding to that bin.
• Bin arrays dreq_A and dreq_B (lines 15, 16) each contain four bins, one for each desti-
nation port with each bin providing a count of how many DATA_REQ_A and
DATA_REQ_B packets, respectively, have been sent to the destination address corre-
sponding to that bin.
• Bins of pkind and dest_addr that are already marked as ignore or illegal in their defini-
tion (i.e., pkind.nogood_bins, pkind.nocare_bins, dest_addr.nogood_bins) are automati-
cally excluded from consideration when forming the bins for their cross product. As
such, it is not necessary to explicitly mark kind_cross_dest bins that include these bins
as illegal or ignore.
• Illegal bin kind_cross_dest.nogood_bins is defined to include cases where the combi-
nation of valid values for pkind and dest_addr produce invalid combinations. This
includes cases where a DATA_REQ_A packet is sent to port 1 and a DATA_REQ_B
packet is sent to port 0. In this notation, binsof(dest_addr.dbins[0]) selects the dest_addr
bin corresponding to port 0, and binsof(pkind.pkind_bins) intersect {DATA_REQ_B}
selects the bin of pkind that corresponds to DATA_REQ_B packets.
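The bin and cross definitions described above can be sketched as follows. This fragment follows the names used in the text (pkind_bins, dbins, nogood_bins, kind_cross_dest), but it is a reconstruction under assumed packet field and event names, not the book's actual listing:

```systemverilog
// Sketch only: pkt, pkt_received, and the enum literals are assumed names.
covergroup xbar_pkt_cg @(pkt_received);
  pkind : coverpoint pkt.kind {
    bins pkind_bins[] = {BEACON_REQ, DATA_REQ_A, DATA_REQ_B};
  }
  dest_addr : coverpoint pkt.dest {
    bins dbins[4] = {[0:3]};   // one bin per destination port
  }
  kind_cross_dest : cross pkind, dest_addr {
    // DATA_REQ_B to port 0 and DATA_REQ_A to port 1 are invalid combinations
    illegal_bins nogood_bins =
      (binsof(dest_addr.dbins) intersect {0} &&
       binsof(pkind.pkind_bins) intersect {DATA_REQ_B}) ||
      (binsof(dest_addr.dbins) intersect {1} &&
       binsof(pkind.pkind_bins) intersect {DATA_REQ_A});
  }
endgroup
```

Because pkind and dest_addr are defined once and reused, the cross inherits their ignore and illegal exclusions automatically, as noted above.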
Table 18.2 shows a visual representation of the bin space for cross product element
kind_cross_dest. Cross product bins excluded from the set of valid bins are marked in this
table with the line number causing its exclusion. Bins not excluded on rows marked as
BEACON_REQ, DATA_REQ_A, and DATA_REQ_B form bin arrays breq, dreq_A, and dreq_B
respectively. Also note that the cross product space shown in this table is larger than the
domain for cross product kind_cross_dest since it also shows rows corresponding to illegal
and ignored bins of its coverage points, which are not included in the domain of a cross cov-
erage element.
466 Coverage Planning, Implementation, and Analysis
Table 18.2: XBar Packet Kind vs. Destination Address Bin Space
Table 18.3 shows a visual representation of the bin space for cross coverage element
kind_cross_idle. Illegal and ignored bins are marked in this view. Note that each of the bin
arrays beacon, proto_A, and proto_B contains four bins, corresponding to those required by
the verification plan.
Table 18.3: XBar Post-Idle Packet Kind vs. Idle Time Bin Space
This implementation is similar to the multi-definitional model with the exception that
only one coverage point src_addr is defined for the source address. Coverage point src_addr
is then combined with coverage point pkind to form cross kind_cross_src (lines 14-20). Bin
definition constructs are then used to define illegal combinations of packet and source
address that should be excluded from the set of cross coverage bins.
An alternative approach for building a hierarchical coverage model is to use a cross
coverage element and guard expressions for each bin of the cross coverage element.
Special transition bins trans_bin1 and trans_bin2 are defined to count the occurrence of
transitions defined in the verification plan. In addition, nogood_trans_bin is defined to iden-
tify illegal transitions (line 11) and the remaining transitions are included in bin
remaining_trans_bin (line 12). In this implementation, trans_bin1 bin array contains four bins
and trans_bin2 bin array contains eight bins, corresponding to all possible transitions as
allowed by their definition.
Defining complex transitions using coverage point bin definitions is not straightfor-
ward. In such cases, assertion-based coverage is a good alternative that allows the full power
of sequence definition constructs to be used in collecting coverage on transitions of interest.
The following program segment shows the implementation of coverage collection on transi-
tions observed on the transmit port of the Xbar design (as required by the Xbar verification
plan in table 18.1).
cover property (@(m0.col_coverage) (m0.pkt.kind==SOF ##1 m0.pkt.kind==DATA_REQ_A)[*2]);
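For more involved transitions, the same idea can be sketched with a named sequence, which keeps the cover directive readable as the pattern grows. The module, port, and signal names below are illustrative assumptions, not taken from the book's environment:

```systemverilog
// Sketch only: clk, kind, and pkt_kind_e are assumed names for illustration.
module xmit_trans_cov(input logic clk, input pkt_kind_e kind);

  // Two back-to-back occurrences of an SOF followed by a DATA_REQ_A packet
  sequence sof_then_data_req_a;
    (kind == SOF ##1 kind == DATA_REQ_A) [*2];
  endsequence

  sof_data_cov : cover property (@(posedge clk) sof_then_data_req_a);

endmodule
```

The full set of sequence operators (repetition, throughout, intersect, and so on) is then available for describing the transitions of interest.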
Coverage grading provides a quantitative measure of overall coverage progress. This quanti-
tative measure is given as a percentage where full coverage corresponds to a grade of 100%.
The overall coverage grade is computed recursively. First a grade is computed for each bin of
cross elements and coverage points. The grade for each element is computed using the grade
for its bins, the grade for each coverage group is computed using the grade for its elements,
and the overall grade is computed using the grade for all coverage groups in the environ-
ment.
The coverage grade for each coverage construct is computed as a weighted sum of the
coverage grade for its sub-elements. As such, it is possible to selectively emphasize specific
coverage elements in the overall coverage grade by changing their weight option. The fol-
lowing equations are used for computing the grade for different coverage constructs. In the
following equations, a coverage element is either a coverage point or a cross coverage ele-
ment.
Grade(bin) = min( 1.0, Hits(bin) / at_least(bin) )

Grade(element) = ( Σ over all bins of Grade(bin) ) / numberOfBins

Grade(group) = ( Σ over all elements of Weight(element) × Grade(element) ) / ( Σ over all elements of Weight(element) )
Coverage Analysis 471
Grade(global) = ( Σ over all groups of Weight(group) × Grade(group) ) / ( Σ over all groups of Weight(group) )
In the above equations, only Hits(bin), the number of hits for a bin, is extracted from the envi-
ronment. All other values have default values which can be changed using specific options
provided in SystemVerilog (see section 17.6).
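The per-element equation can be sketched directly in SystemVerilog. The function below is illustrative only (it is not part of the language's coverage API); it averages saturated per-bin grades, assuming one common at_least value for all bins:

```systemverilog
// Illustrative only: recompute an element grade from raw bin hit counts.
function automatic real element_grade(int hits[], int at_least);
  real sum = 0.0;
  foreach (hits[i]) begin
    real g = real'(hits[i]) / real'(at_least);  // Hits(bin) / at_least(bin)
    sum += (g > 1.0) ? 1.0 : g;                 // min(1.0, ...) saturation
  end
  return sum / hits.size();                     // average over all bins
endfunction
```

A bin hit 2 times with at_least set to 4 therefore contributes a grade of 0.5, while any bin hit 4 or more times contributes exactly 1.0.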
The following steps are used to customize coverage grading:
• Define the minimum number of hits required for each bin using the "at_least" cover-
age option. A default value for this option can be set at the coverage group level. This
default value will be used for all bins defined for elements of the coverage group. If
necessary, this default value can be overridden for each coverage element (i.e., cov-
erage points and cross coverage elements).
• Specify a weight for each coverage point, cross coverage element, and coverage
group. Note that different weights can be specified for type and instance coverage.
• Specify a goal for each coverage point, cross coverage element, and coverage group.
Note that different goals can be specified for type and instance coverage.
• During simulation runtime, use the predefined method get_coverage() to get the type
coverage grade for a coverage point, a cross coverage element, or a coverage group.
• During simulation runtime, use the predefined method get_inst_coverage() to get the
instance coverage grade for a coverage point, a cross coverage element, or a coverage
group.
• Alternatively, use a post-simulation coverage analysis tool to load and view the cov-
erage results.
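The steps above can be sketched as follows. The covergroup, signal, and instance names are illustrative assumptions, but option.at_least, option.weight, option.goal, get_coverage(), and get_inst_coverage() are the standard SystemVerilog hooks described above:

```systemverilog
// Sketch only: clk, pkt_kind, and kind_cg are assumed names.
covergroup kind_cg @(posedge clk);
  option.at_least = 4;      // each bin must be hit at least 4 times
  option.goal     = 90;     // instance coverage goal, in percent
  cp_kind : coverpoint pkt_kind {
    option.weight = 2;      // emphasize this point in the group grade
  }
endgroup

kind_cg cov = new();

// Query grades during simulation runtime:
initial begin
  #1000;
  $display("type grade: %0.2f", kind_cg::get_coverage());
  $display("inst grade: %0.2f", cov.get_inst_coverage());
end
```

Note that get_coverage() is a static method reporting type coverage across all instances, while get_inst_coverage() is called on a specific covergroup instance.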
Coverage grades computed for each coverage construct can be used during the simula-
tion runtime to guide the generation process towards the missing scenarios. Additionally,
post-simulation coverage analysis can help improve the environment and/or change coverage
goals to adapt to conditions observed during simulation. These considerations are discussed
in the next section.
• Planning
• Execution
• Analysis
• Reaction
In the planning phase, the coverage model is developed and implemented. In the execution
phase, coverage information is collected. The execution phase consists of running the com-
plete regression suite so that the contribution of all existing testcases to overall coverage
progress is taken into account. The coverage engine handles the task of combining results
from individual runs into a common coverage database. The result of coverage collection is
studied in the analysis phase to decide what action should be taken in the reaction phase. The
flowchart shown in figure 18.2 summarizes the steps taken in each phase.
The first pass through the execution phase is started after the first version of the cover-
age plan is designed and implemented. The results of the execution phase indicate whether
any illegal bins were hit and/or any coverage holes still exist. Illegal cases should ideally be
detected by checkers and monitors in the environment. As such, if an illegal case is detected
only as part of coverage collection, then either the verification plan should be enhanced to
include this check or the monitors and checkers in the environment should be fixed to detect
this condition as well.
A coverage hole is a bin that was not covered during the simulation runtime. It is possi-
ble that a coverage hole may not have been relevant to the overall verification progress and
may have been marked as such because of a mistake in coverage design. If this is the case,
then the coverage plan should be updated to either remove the bin that was not covered from
coverage implementation or mark it as an ignored bin.
If a coverage hole is in fact relevant to achieving full coverage, then the random genera-
tion environment and its current constraints must be checked to decide whether continued
simulation using the same setting is likely to create the condition that covers the coverage
hole. No immediate action needs to be taken for such coverage holes.
If a coverage hole is not likely to be covered with continued simulation, then the current
generation constraints must be examined to see whether modifying these random generation
constraints can lead to the generation of the missing scenario. A new testcase should then be
added to the verification environment with the new generation constraints.
If the missing scenario cannot be generated by modifying random generation con-
straints, then the verification environment must be enhanced to facilitate the creation of the
missing scenario. For scenarios that are difficult to generate, it may be necessary to create a
unit testcase.
Verification is declared complete when no illegal bins are hit during coverage collection
and the global coverage grade exceeds the global coverage goal.
The flow shown in figure 18.2 handles one exception (i.e., illegal case, coverage hole)
at a time. In practice, however, a number of simulation runs should be completed before this
analysis is performed. In addition, during one pass through this flow, all or most illegal cases
and coverage holes should be studied, and steps for dealing with them taken before new sim-
ulation runs are started.
PART 8
Appendices
APPENDIX A Predefined Classes of
the OVM Library
ovm_printer_knobs
#(type T=int)
ovm_put_imp #(type T=int, type IMP=int)
#(type T=int)
ovm_random_sequence   ovm_sequence
ovm_random_stimulus #(type trans_type=ovm_transaction)   ovm_component
ovm_recorder
ovm_report_global_server
ovm_report_handler
virtual ovm_report_object
ovm_report_server
ovm_req_rsp_driver #(type REQ=ovm_sequence_item, type RSP=ovm_sequence_item)   ovm_driver
#(type REQ=ovm_sequence_item, type RSP=ovm_sequence_item)   ovm_sequence
virtual ovm_scenario #(type REQ=ovm_sequence_item, type RSP=ovm_sequence_item)   ovm_scenario_base
ovm_scenario_controller #(type REQ=ovm_sequence_item, type RSP=ovm_sequence_item)
ovm_scenario_controller_base   ovm_threaded_component
virtual ovm_scenario_driver #(type REQ=ovm_sequence_item, type RSP=ovm_sequence_item)
ovm_threaded_component
virtual ovm_scenario_driver_noparam   ovm_scenario_driver_base
virtual ovm_scenario_noparam   ovm_scenario_base
ovm_scope_stack
ovm_seq_cons_if   ovm_component
ovm_component
ovm_component
#(type T=int)
ovm_table_printer
Underlined keywords are new to SystemVerilog and not a part of Verilog
parameter ref strong0 trireg xnor
Underlined keywords are new to SystemVerilog and not a part of Verilog
Index
* 83 ~| 83
** 83 $ 380, 384
| 83 $assertkill() 422
& 83 $assertoff() 422
&& 83 $asserton() 422
## 121 $bits() 157, 158
##0 382 $dimensions() 77
##1 385 $dumpvars() 421
#0 115 $error 418
#1step 121 $error() 421
% 83 $fatal() 421
+ 83 $fell() 377
++ 83 $finish() 421
< 83 $get_coverage() 454
<< 83 $high() 77
<<< 83 $increment() 77
<= 83 $info() 421
<$noname 100 $left() 77
= 83 $load_coverage_db() 454
== 83 $low() 77
==? 83 $past() 377
Techniques 17 Agent 48
Function Architecture 47
inout Argument Type 85 Bus Monitor 48
input Argument Type 85 Coverage Collector 54
output Argument Type 85 Driver 48
Prototype 107 Logical View 48
ref Argument Type 86 Monitor 48
Return Value 87 Reuse during System Level
Signature 107 Verification 61
Function Declaration Sequencer 48
ANSI Format 87
Verilog Format 87 J
Function Lifetime 86 Jump Statements 90
automatic 86
static 86 L
Functional Verification 8 local 110,278
Sources of Functional Errors 8 localparam 99
Functions 85 Loop Statements
break 90
G continue 90
Garbage Collection 104 disable 90
generate Statement 296 do-while 90
Gray-Box Verification 10 for 90
Shades of Gray 10 forever 90
jump 90
H repeat 90
Hardware Acceleration 20 return 90
Considering Requirements during Early while 90
Verification Phases 21
Idea behind Acceleration and M
Emulation 20 Mailbox 125, 130
Hardware Emulation 20 Features 130
Hardware-Software Co-Verification 55 get() 130
Hierarchy new() 130
Data Object 69 num() 130
Module 69 Parameterized 130
Procedural 69 peek() 130
put() 130
try_get() 130
if-else Statement 88 try_peek() 130
priority 88 try_put() 130
priority vs. unique 88 Method
unique 88 Prototype 107
Increment/Decrement Statements 84 Signature 107
initial Block 101 Metric-Driven Verification 30
Integral Variables 244 Modeling Language
interface Block 92, 95 Limitations of Verilog 66
contents 97 Required Elements 66
port list 98 modport 99
Virtual Interface 312 module Block
Interface Drivers 25 Default Clocking 121
Interface Verification Component 46 Implicit Port Connection 94
or 397 Randomization
Overlapping Implication 398 Constrained Randomization Problem
Property Specification Statement 238
Checking 18 Early Randomization 196
Hierarchy 372 Effect of Variable Ordering 240
Language 19,371 Late Randomization 196
Required Facilities 372 Non-Ordered 241
Property, Named Variable Ordering 239
Disable Clause 391 Randomization Constraint Operators
Instantiation 391 Distribution Operator 250
public 278 Function Call 255
if-else Constraints 251
Q Implication Operator 250
queue 80 inside Operator 249
delete 81 Iterative Operators 252
insert() 80 Variable Ordering Constraints 254
pop_back() 81 Randomization Constraints 247
pop_front() 81 Bidirectional 240
push_back() 81 Constraint Guards 255
push_front() 81 Equivalence Constraints 272
size() 80 foreach 252
Global Constraints 253
R Hierarchical Constraint Block 274
rand 244,245,254,257,258,278 State Variables 242
randc 244,245,254,257,258,278 Unidirectional 240
implicit ordering 245 randomizeO 244,247,254
Random Generation 26 ref 255
Coverage Collection 28
Finite-State Machine 27 S
Important Considerations 238 Scheduling Regions 114
Measuring Verification Progress 28 Active 115
Need for Automatic Result Checking 28 Inactive 115
Need for Directed Tests 29 NBA 115, 116
Practical Limits 27 Observed 115, 116
Requirements 27 Postponed 115, 117
Uniform Distribution 238 Preponed 115
Verification Completeness 14 Reactive 115, 116
Verification Progress Chart 26 Scope 70
Random Generation Engine Scoreboarding
Modes of Operation 241 Checks 60
Operation 241 Features 60
Quality Metrics 237 Reference Model 60
Random Stability 261 Semaphores 125, 127, 294
Object Stability 263 Active Region 127
Thread Stability 261 get() 127
Random Variable new() 127
Disabling 258 Process Synchronization 127
Properties 246 put() 127
Reachable Set 237 try_get() 127
Random Variable Ordering Sequence 372
Assignment-Based Order 241 (For Scenario Generating Sequences, See
Constraint-Based Order 241 Transaction Sequences)