Verification Horizons
A PUBLICATION OF MENTOR GRAPHICS JUNE 2008—VOLUME 4, ISSUE 2

Featured in this issue: Design Verification Using Questa and the OVM; A Practical Guide to OVM – Part 1; Dynamic Construction and Configuration of Testbenches; and OVM Productivity using EZVerify.
So, getting back to baseball for a moment, I'm now faced with the challenge of helping the boys continue their winning ways. Granted, this is a much better challenge to have than to convince a losing team of 8–10 year-olds that they'll win eventually, but it's still a challenge. I'm hoping that the sweet taste of victory will motivate them to work even harder, but the emphasis will always be on doing their best and having fun.

If you're going to be at DAC, I'd gently suggest the same advice. Stop by the Mentor booth, and feel free to ask for me if you'd like to say "hi." We'd love the opportunity to talk to you about how the Mentor team can help you experience "the thrill of victory" on your next project.

Verification Horizons is a publication of Mentor Graphics Corporation, all rights reserved.
Editor: Tom Fitzpatrick
Program Manager: Rebecca Granquist
Senior Writer: Todd Burkholder
Wilsonville Worldwide Headquarters
8005 SW Boeckman Rd.
Wilsonville, OR 97070-7777
Table of Contents
Page 7...
Achieving DO-254 Design Assurance using
Advanced Verification Methods by Dr. Paul Marriott,
Director of Verification, XtremeEDA Corporation; David Landoll, Mentor Graphics;
Dustin Johnson, Electrical Engineer, Rockwell Collins; Darron May, Product Manager,
Mentor Graphics.
Page 13...
Ending Endless Verification with 0-In Formal
by Harry Foster, Chief Verification Scientist, Mentor Graphics
Page 16...
Intelligent Testbench Automation
Turbo-Charges Simulation
by Mark Olen, Product Manager, Verification, Mentor Graphics
Page 19...
Using Questa Multi-View Verification
Components and OVM for AXI Verification
by Ashish Kumar, Verification Division, Mentor Graphics
Page 23...
What is a Standard?
by Mark Glasser, Mentor Graphics Corporation & Mark Burton, GreenSoCs
Page 26...
Firmware Driven OVM Testbench
by Jim Kenney – SoC Verification Product Manager, Mentor Graphics
Partners’ Corner:
Page 29...
Design Verification Using Questa
and the Open Verification Methodology:
A PSI Engineer’s Point of View
by Jean-Sébastien Leroy, Design Verification Engineer &
Eric Louveau, Design Verification Manager, PSI Electronics
Page 35...
A Practical Guide to OVM – Part 1
by John Aynsley, CTO, Doulos
Page 40...
Dynamic Construction and
Configuration of Testbenches
by Mike Baird, Willamette HDL, Inc.
Page 44...
OVM Productivity using EZVerify
by Sashi Obilisetty, VeriEZ
Page 49...
Tribal Knowledge: Requirements-Centric
Verification
by Peet James, Sr. Verification Consultant, Mentor Graphics
Mentor and our friends at XtremeEDA teamed up to help Rockwell Collins transition their verification processes from "traditional" directed testing to today's constrained-random, coverage-driven verification provided by Questa® and the Advanced Verification Methodology (AVM).
Achieving DO-254 Design Assurance using Advanced Verification Methods
by Dr. Paul Marriott, Director of Verification, XtremeEDA Corporation; David Landoll, Mentor Graphics; Dustin Johnson, Electrical Engineer, Rockwell Collins; Darron May, Product Manager, Mentor Graphics
DO-254 is a standard enforced by the FAA that requires certification of avionics suppliers' designs and design processes to ensure reliability of airborne systems. Rockwell Collins had been using a traditional directed-testing approach to achieve DO-254 compliance [1]. This is time consuming and, as their designs were becoming more complex, they wanted to take advantage of the productivity gains a modern constrained-random, coverage-driven environment provides, but still ensure the needs of the DO-254 process could be met.

A methodology built using SystemVerilog with Mentor's Advanced Verification Methodology (AVM) [4] was assembled and a flow developed that linked the formal requirements documents through the verification management tools in Questa. This allowed coverage data to be used to satisfy the DO-254 process, and greatly reduced the verification effort required. The target design, an FPGA DMA engine, was recently certified using the flow developed to meet DO-254 Level A compliance, the highest level, which is used in mission-critical systems.

The process flow developed for this project will be of interest to anyone who needs high levels of verification confidence with the productivity gains that methodologies such as the AVM or OVM provide.

INTRODUCTION
DO-254 is a process which, simply stated, ensures that all specified design requirements have been verified in a repeatable and demonstrable way. The key aspect of this is that all the requirements of the system are well specified, and each of those requirements can be demonstrated to have been verified. Traditionally, a directed-testing approach is used, as a test can be written for each requirement. Conceptually, a directed test essentially encapsulates one, or possibly several, functional coverage points [2]. Hence it is the linking of the functional coverage points to the design requirements which is really being used to verify those requirements.

XtremeEDA was contracted by Mentor Graphics to train the Rockwell Collins team in the use of the AVM, and to design an environment and process whereby the linking of the functional coverage data to the formal requirements could be automated.

ORIGINAL METHODOLOGY
Historically, Rockwell Collins (RC) utilized a thorough DO-254-compliant methodology of creating and running a large suite of self-checking directed tests, as well as performing manual and automatic analysis of the code (and any other methods as required by a particular program). However, although this methodology is thorough and produces a high quality design, some bugs may not be found until late in the design cycle, or during lab bring-up, which makes identifying, fixing, and re-verifying them more costly.

PROJECT OVERVIEW
This project, an FPGA DMA Engine, had increased complexity due to high configurability, potentially exacerbating this situation. RC undertook the use of newer advanced verification methodologies in hope of achieving increased robustness, finding bugs earlier in the project cycle, and verifying the overall device more quickly. As a result, RC hoped to shorten the overall verification cycle and achieve better quality results.

Figure 1 shows the DMA Engine's initial configuration set-up process, where the uP writes a list of DMA records to the Xfer Memory. These records describe repeatable transfers to be executed by the DMA Engine once started. The exact set-up of these records is target-device specific, thus the DMA Engine is highly configurable via these records. Once this set-up is complete, the DMA Engine is started and the data transfers begin (also shown in Figure 1, right-hand side). During regular data transfers, data moves between the Ethernet chip and the uP's local RAM as well as between the local RAM and the uP.

There are several complexities with this design that increase the verification challenge. First, there are two types of data, namely Samples and Messages. The Messages queue up and must all be delivered. The Samples are from measurement devices that are infrequently updated compared to the speed of the uP. As such, it makes no sense to read from the sample ports faster than external measurements can change, as this would re-read the same data value over and over. Therefore, there are sampling clocks that indicate the correct time to sample these ports.

PRODUCTIVITY ENABLERS

Constrained Random Generation
The term "random" can create confusion in the world of DO-254 compliance, as 100% certainty is not usually associated with a random process. However, random in the context of verification means selecting values from a valid range, which is set by constraints. These constraints are determined by the design requirements, and the randomization within these constraints allows a more rapid exploration of the set of valid stimulus that is necessary to assure design compliance. The constraints can be visualized as a tree-like structure which, if every branch is explored, encapsulates all possible stimuli that can be applied to the design.
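As a concrete illustration of constraints carving out the valid stimulus space, here is a minimal SystemVerilog sketch; the record fields and legal ranges are hypothetical, not taken from the Rockwell Collins DMA design:

    // Hypothetical DMA record: fields and ranges are illustrative only.
    class dma_record;
      rand bit [31:0]  src_addr;
      rand bit [31:0]  dst_addr;
      rand int unsigned burst_len;

      // Constraints encode the valid ranges implied by the requirements;
      // randomization then explores only the legal stimulus space.
      constraint c_legal {
        burst_len inside {[1:16]};
        src_addr[1:0] == 2'b00;  // word-aligned addresses only
        dst_addr[1:0] == 2'b00;
      }
    endclass

    module constrained_random_demo;
      initial begin
        dma_record rec = new();
        repeat (10) begin
          if (!rec.randomize()) $error("randomization failed");
          $display("len=%0d src=%h dst=%h",
                   rec.burst_len, rec.src_addr, rec.dst_addr);
        end
      end
    endmodule

Each call to randomize() picks one path through the "tree" of legal values, so repeated runs sample the valid stimulus space rather than enumerating it by hand.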
Design Intent Specification
Design Intent Specification refers to the ability to document the expected design protocols in a tool-readable format. For SystemVerilog, that format is called SystemVerilog Assertions (SVA). This feature of the SystemVerilog language allows the user to describe design signal activity over time so that it is constantly being checked during simulation [3]. These time-based signal behaviors are also known as temporal assertions.

While functional coverage captures and checks the design state at a moment in time, temporal assertions are used to check and cover design activity over time. Requirements often describe such design intent, or in other words, how the design should react to stimulus. Requirements of this type translate well into temporal assertions. Examples of these types of assertions include target protocol sequences, error recovery, or sampling of design state.

TRACEABILITY
Simply put, DO-254 compliance means that the verification team can demonstrate that they did, indeed, thoroughly verify all the design requirements and can repeat that effort if required to do so. The key to achieving this is traceability – i.e. demonstrating how each requirement was verified. Since coverage is being used to provide the feedback that the requirements have been verified, an automated process is essential to ensure productivity and repeatability, and hence to provide an automated traceability path from each documented requirement to the coverage item or assertion that verified it.

There are essentially three steps to achieve requirements linking:

1. Exporting the testplan from DOORS
2. Linking the UCDB testplan to the coverage items
3. Merging the simulation results with the testplan

With this methodology there is not necessarily a one-to-one relationship between the requirements and testcases. There is, however, a strict auditable relationship between requirements and their assertions or coverage points. This provides the important traceability link that shows a requirement was covered (i.e. verified).

EXAMPLE PROCESS
The following are DOORS screen shots demonstrating the steps required to meet supported requirements traceability. They illustrate the process of connecting requirements to tests, coverage items, and assertions.

The basic process is that requirements and test cases are entered and managed within DOORS. DOORS can produce a testplan report that is the basis for the Questa testplan. The verification engineers fill in the testplan with the items that verify requirements have been covered. Verification is run, and the coverage data is stored and then merged with the testplan. The final result is that the testplan contains coverage data that links to the requirements being tested.
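To make the link between a requirement, a temporal assertion, and a coverage item concrete, here is a minimal SVA sketch; the request/grant handshake, signal names, and timing window are hypothetical, not taken from the project's requirements:

    // Hypothetical requirement: "every request shall be granted
    // within 4 clock cycles."
    module handshake_checks(input logic clk, rst_n, req, gnt);
      property p_req_gets_grant;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:4] gnt;
      endproperty

      // The assertion checks the requirement on every simulation cycle...
      a_req_gets_grant: assert property (p_req_gets_grant);

      // ...and the cover directive produces the coverage item that the
      // testplan links back to the requirement for traceability.
      c_req_gets_grant: cover property (p_req_gets_grant);
    endmodule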
Figure 4 (on the following page) shows the testplan entered into DOORS (note: for readability, the testcase description/detail text is omitted from these screenshots). It highlights a test case, the covergroup names that it is linked to, and the type of coverage the link refers to. In this figure, the in-links column shows the text from the object that is linked into that cell. In the case of 3.1.1.2.1, various objects that are defined elsewhere in the document are linked into that particular cell.

RESULTS
Preliminary analysis shows that the project team is happy with the decision to use these advanced verification methods [5]. They were able to adopt these new methods, as well as the SystemVerilog language, with no real schedule penalties. The overall environment was up and running quickly. The verification team was ready to write tests before RTL design code was available, so the verification team wrote transaction-level models for the device and used these for early testbench development and testing. Later, after the design was available, these transaction-level models became the reference assuring design correctness.

Through this process, the project team found bugs that, in their words, "would have been tough to catch without randoms." For example, a "FIFO full" problem was detected early in the project that required a complex scenario of reads and writes while the rest of the design was in a specific state.
In addition, the team reported that if they had attempted the traditional approach of directed tests, they would have faced writing and maintaining hundreds of individual tests. This would have become a difficult revision-control management problem, as each test needs to be tracked back to individual requirements. It would also have become a challenging regression environment, with long run-times, difficult job-spawning coordination, and cumbersome coverage-results merging.
Ending Endless Verification with 0-In Formal
by Harry Foster, Chief Verification Scientist, Mentor Graphics
Dr. Wally Rhines noted during his DVCon 2008 keynote speech that today's approach to verification is a frustrating, open-loop process that often does not end—even after the integrated circuit ships. To keep pace with Moore's law, which has enabled escalating product feature demands, verification efficiencies must increase by at least 100x. Obviously, throwing more engineers and computers at the problem has not provided a scalable solution. The industry must move away from the model that adds more cycles of verification, to a model that adds more verification per cycle (that is, maximizing the meaningful cycles per second). Functional formal verification (such as Mentor Graphics' 0-In™ Formal Verification tool), when effectively used, offers significant improvements in verification productivity. The confusion most engineers face when considering functional formal verification is in understanding how to effectively get started.

An example of such sequential behavior is an MPEG encoder block that encodes a stream of video data. Functional formal verification usually faces state explosion for sequential designs because, in general, most interesting properties involve a majority of the design inputs and state elements within the design block.

Concurrent design blocks (see Figure 2) deal with multiple input data streams that potentially collide with each other.
So what is the key take-away from Figure 4? If you are an organization at maturity level 1 or 2 with no prior experience in formal verification, you will have success with formal, with minimal advanced skills required, if you select blocks labeled as level 3 in Figure 4. This choice will allow your organization to develop advanced skills so that in the future you can prove more complex blocks (for example, those labeled as level 4 in Figure 4).

With this said, it is important to point out that an organization's goal might be to eliminate as many bugs as possible prior to integrating various blocks into the system- or chip-level simulation environment. For these situations, formal is used not to prove a block, but to flush out bugs. This use of formal only requires level 3 skills, and can improve productivity by eliminating bugs sooner. Hence, to successfully apply formal, you must consider the existing skill set of your organization. Obviously, it should be a goal to continue to mature an organization's process capabilities in order to improve productivity.

HOW TO APPLY FUNCTIONAL FORMAL
There are different degrees of (or strategies for) applying formal verification successfully to your designs—ranging from improving coverage and formalizing interfaces to bug-hunting and full proofs. However, prior to choosing a strategy, I recommend that you first classify the key properties identified in your verification plan.

To help order your list of properties, answer the following questions:

1. Did a respin occur on a previous project for a similar property? (high ROI)
2. Is the verification team concerned about achieving high coverage in simulation for a particular property? (high ROI)
3. Is the property control-intensive? (high likelihood of success)
4. Is there sufficient access to the design team to help define constraints for a particular property? (high likelihood of success)

After ordering your list, assign an appropriate strategy to each property in the list based on your project's schedule and resource constraints. Your verification goals, project schedule, and resource constraints influence the strategy you select. I recommend you choose from one of the following four strategies.

Full proof. Projects often have many properties in the list that are of critical importance and concern. For example, to ensure that the design is not dead in the lab, there are certain properties that absolutely must be error free. These properties warrant applying the appropriate resources to achieve a full proof.

Bug-hunting. Using formal verification is not limited to full proofs. In fact, you can effectively use formal verification as a bug-hunting technique, often uncovering complex corner-cases missed by simulation. The two main bug-hunting techniques are bounded model checking, where we prove that a set of assertions is safe out to some bounded sequential depth; and dynamic formal, which combines simulation and formal verification to reach deep complex states.

Interface formalization. The goal of interface formalization is to harden your design's interface implementation using formal verification prior to integrating blocks into the system simulation environment. In other words, your focus is purely on the design's interface (versus a focus on internal assertions or block-level, end-to-end properties). The benefit of interface formalization is that you can reuse your interface assertions and assumptions during system-level simulation, dramatically reducing integration debugging time.

Improved coverage. Creating a high-fidelity coverage model can be a challenge in a traditional simulation environment. If a corner-case or complex behavior is missing from the coverage model, then it is likely that behaviors of the design will go untested. However, dynamic formal is an excellent way to leverage an existing coverage model to explore complex behaviors around interesting coverage points. The overall benefits are improved coverage and the ability to find bugs that are more complex.
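To illustrate the interface-formalization strategy, here is a hedged SVA sketch of a single property reused both as an assumption for formal analysis and as an assertion in simulation; the valid/ready handshake, signal names, and the guard macro are hypothetical:

    // Hypothetical interface rule: once valid is raised, it must stay
    // high until the receiver asserts ready.
    module intf_rules(input logic clk, rst_n, valid, ready);
      property p_valid_stable;
        @(posedge clk) disable iff (!rst_n)
          valid && !ready |=> valid;
      endproperty

    `ifdef FORMAL_CONSTRAIN_ENV
      // For block-level formal, the environment side of the interface
      // is constrained with an assumption.
      m_valid_stable: assume property (p_valid_stable);
    `else
      // The same property is reused as a checker during system-level
      // simulation, which is where the integration-debug savings come from.
      a_valid_stable: assert property (p_valid_stable);
    `endif
    endmodule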
SUMMARY
To achieve a 100x improvement in verification productivity, the industry must move away from a model that adds more cycles of verification, to a model that adds more verification per cycle (that is, maximizing the meaningful cycles per second). Functional formal verification (such as Mentor Graphics' 0-In™ Formal Verification tool) is one technology that, when used effectively, can offer significant improvements in verification productivity. The key to success requires understanding what, when, and how to effectively apply formal verification.
Intelligent Testbench Automation Turbo-Charges Simulation
by Mark Olen, Product Manager, Verification, Mentor Graphics
Several years ago advances in testbench automation enabled verification engineers to test more functionality by dramatically increasing the quantity of testbench sequences for simulation. Through clever randomization techniques, constrained by algebraic expressions, verification teams were able to create testbench programs that generated many times more sequences than directed testbench programs could. While actual testbench programming time savings and simulation efficiency were hotly debated topics, few argued that constrained random test generation did not generate orders of magnitude more testbench sequences.

However, addressing the testbench sequence generation challenge through quantitative means (i.e. "more" sequences) caused corresponding challenges during the simulation and debug phases of functional verification. Even when constrained by algebraic expressions, random techniques tend to increasingly generate redundant testbench sequences during the course of simulation, thus more simulators are required to run for longer periods of time to achieve verification goals. In addition, even when using "directed" constrained random techniques, it is difficult to pre-condition testbenches to "target" interesting functionality early in the simulation. The mathematical characteristics of constrained random testing that enable the generation of sequences the verification engineer "hadn't thought of" are the very same characteristics that make it difficult to control and direct the sequence generation process.

As a result, it is not uncommon to see "production" simulation farms of over 100, or even 1000, simulators running for several days, or even weeks. For a design of even moderate complexity, the theoretical number of sequences can be staggering. Most verification engineers are well aware of the infamous testbench that could consume every computer on Earth, run for millions of years, and still not finish.

But while the focus today on testbench automation is all about languages, the impacts of testbench automation on simulation farms and the debugging process are conveniently overlooked. While it is wasteful for a testbench toolset to generate redundant tests for a single simulator on a single CPU, imagine how much waste occurs in a simulation farm with hundreds of CPUs simulating in parallel, with each CPU having no knowledge of what the others are doing. Today's "state-of-the-art" technique is to assign a different seed to each CPU, to begin random sequence generation from different starting points. But once started, randomized sequence generation runs open-loop, even when algebraically constrained. There's actually no reason for inter-CPU communication during simulation, because randomized sequence generation has no knowledge of what has been previously simulated on its own CPU. Therefore, inter-CPU communication is moot. Hundreds (or thousands) of simulations are left to run overnight, over weekends, or even over weeks, generating redundant sequences at a rate that increases every minute.

The impact on debugging is equally devastating. Imagine returning to the office on Monday morning to view the results of the preceding weekend's simulation run. Suppose that one CPU in a simulation farm of 100 or more CPUs failed after 36 hours of simulation. What is the next step? How does one debug a 36-hour long simulation of random tests? The current "state-of-the-art" technique is to write a script file that terminates each simulation on each CPU, and logs the results, every hour or so. Then the script assigns yet another seed to each CPU, and restarts the simulation for another hour. This reduces the amount of simulation data faced by the verification engineer to an hour or so.

Previous articles in Verification Horizons introduced a recent breakthrough in functional verification called intelligent testbench automation. These articles describe how one of Mentor Graphics' newest product lines, inFact™ iTBA, achieves superior functional coverage by algorithmically traversing multiple rule-graphs and synthesizing testbench sequences on-the-fly during simulation. The rule-graphs are derived from interface descriptions, bus protocols, and functional specifications. And while rule-graphs are considerably smaller than conventional testbenches, they allow large quantities of sequences to be generated. However, unlike traditional constrained random test techniques, rule-graphs enable non-redundant sequence generation, eliminating significant waste of simulation time and resources.

The latest advances in intelligent testbench automation now enable large simulations to be automatically distributed across up to 1000 CPUs, extending non-redundant sequence generation to entire simulation server farms. This massive gain in efficiency is attributable to the inherent architecture of rule-graphs and new advanced traversal algorithms tuned for production applications. Rule-graphs contain a highly compressed description of a simulation "grammar" and "syntax". Traversal algorithms stitch these together into "sentences", complete with stimulus and expects, in real-time for the simulator. Multiple rule-graphs may be instantiated during simulation, enabling generation of sentences that create interesting system-level functional verification scenarios including cross-product scenarios, corner-case scenarios, and more.

… precisely ten thousand sequences. The spatial distribution algorithm prevents repetition of sequences on any given simulation CPU, and the modulo-N distribution algorithm prevents repetition of sequences across the entire simulation farm. In addition, by performing the entire simulation without unfolding the rule-graphs, the efficiency loss due to overhead is less than 1%. The same simulation of one million sequences that requires one thousand hours of run-time on a single CPU can be completed in just minutes longer than ten hours on a simulation farm of one hundred similar simulation CPUs.
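The modulo-N idea can be pictured with a toy sketch; this is only an illustration of the distribution concept, not inFact's actual algorithm:

    // Toy illustration of modulo-N distribution: each of the N simulators
    // owns exactly the sequence indices congruent to its own ID, so no
    // index is ever generated twice anywhere in the farm.
    function automatic bit owns_sequence(
        int unsigned seq_index,  // global index of a candidate sequence
        int unsigned cpu_id,     // this simulator's ID, 0 .. num_cpus-1
        int unsigned num_cpus);  // size of the simulation farm
      return (seq_index % num_cpus) == cpu_id;
    endfunction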
Using Questa Multi-View Verification Components and OVM for AXI Verification
by Ashish Kumar, Verification Division, Mentor Graphics
On February 18, 2008, Mentor Graphics introduced a new generation of Verification IP called Multi-View Verification Components (MVC). The MVC was created using Mentor's unique Multi-View technology. Each MVC component is a single model that supports the complete verification effort at the system, transaction, and register transfer levels. The MVC supports automatic stimulus generation, reference checking, and coverage measurements for popular protocols, such as AMBA™ with AHB, APB/APB3, and AXI.

This article highlights the ways to create a reusable SystemVerilog and OVM-based, constrained-random verification environment for AMBA3 AXI using the AXI MVC. More detailed information can be found in the MVC Databook. MVC enables fast test development for all aspects of the AXI protocol and provides all the necessary SystemVerilog classes, interfaces, and tasks required for both directed and constrained-random testing, as required for AXI master and slave unit verification at the RTL and TLM abstraction levels.

Transaction-based verification methodologies, such as the OVM, use TLM interfaces as the communication mechanism between verification components. By using the OVM's layered approach, the testbench code developed can be reused throughout the design cycle, lowering the overall cost typically required by RTL-oriented methodologies.

Figure 1 shows a typical configuration of the AXI MVC, along with some of the verification components delivered with the MVC (including the SystemVerilog AXI interface, driver, responder, monitor, and tasks for directed and constrained-random stimulus generation, as well as coverage groups and configuration).
    module env;
      bit clk;
      bit reset_n;
      // User DUT
      verilog_master_wrapper #(ADDR_WIDTH, RDATA_WIDTH, WDATA_WIDTH, ID_WIDTH)
        AXI_master (.iAXI_imaster_mp(axi_if.iAXI_if.master_mp));

To create random write transactions, the following AXI MVC task can be used:

    task rand_write_transaction(
      input  this_axi_request_t  gen = null,
      output this_axi_response_t resp
    );

A controlled read transaction is used for directed tests:

    task put_read_request(
      input addr_t      address,
      input id_t        id,
      input axi_len_e   len,
      input axi_size_e  size,
      input axi_prot_e  prot,
      input axi_cache_e cache,
      input axi_lock_e  lock,
      input axi_burst_e burst
    );

The user typically attaches design-specific coverage objects to the monitor using the standardized features of OVM.

The OVM build() method creates the various verification components:

    function void build();
      super.build();
      responder       = new ("responder", this);
      write_addr_wait = new ("write_addr_wait", this);
      read_addr_wait  = new ("read_addr_wait", this);
      write_data_wait = new ("write_data_wait", this);
      …
    endfunction
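A hypothetical usage sketch follows: the task names, argument order, and enum literals echo the declarations and the transcript shown in this article, but the surrounding test and the exact values are assumptions rather than documented MVC API usage:

    // Sketch of a test body mixing directed and random stimulus
    // (illustrative only; consult the MVC Databook for the real API).
    task run_mixed_test();
      this_axi_response_t resp;

      // Directed read: pin every field of the AXI request.
      put_read_request('h170, 'h1, AXI_LENGTH_16, AXI_BYTES_4,
                       AXI_NORM_SEC_DATA, AXI_NONCACHE_NONBUF,
                       AXI_NORMAL, AXI_INCR);

      // Constrained-random writes: passing null lets the MVC randomize
      // the request; the response comes back for checking.
      repeat (20) rand_write_transaction(null, resp);
    endtask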
OVM connect() connects the components. Note that the AXI virtual interface is already in the configuration class and can be retrieved through the OVM configuration mechanism:

    function void connect();
      axi_vip_config_t axi_vip_config_local;
      super.connect();
      axi_vip_config_local = get_axi_config();
      axi_if = axi_vip_config_local.this_axi_if;
      monitor.axi_if         = axi_if;
      responder.axi_if       = axi_if;
      wait_coverage.axi_if   = axi_if;
      write_addr_wait.axi_if = axi_if;
      …

LOGGING, VISUALIZATION, AND DEBUG WITH AXI MVC
After compilation, the testbench initializes and the verification starts. If the AXI interface was configured successfully, it will be flagged by "bit 'okay'." As verification progresses, the MVC logs AXI transaction activity to the Questa transcript with all relevant details, such as AXI burst mode, address, and atomic mode. The OVM reporting functionality makes it easy to control and integrate the logged information into, for example, the overall verification management environment.
    # OVM_INFO @ 4565: env.monitor [AXI_TRANSACTION_MONITOR] (axi_transaction_mon) {
    # --------------------------------------------------------------
    # Name                 Type         Size  Value
    # --------------------------------------------------------------
    # <unnamed>            axi_request  -     @{} 18446744073709+
    # Transaction Type     string       8     AXI_READ
    # Address              integral     32    'h170
    # Transaction ID       integral     2     'h1
    # Transaction Length   string       13    AXI_LENGTH_16
    # Transaction Size     string       11    AXI_BYTES_4
    # Protection Mode      string       17    AXI_NORM_SEC_DATA
    # Cache Mode           string       19    AXI_NONCACHE_NONBUF
    # Burst Mode           string       8     AXI_INCR
    # Atomic Access Mode   string       10    AXI_NORMAL
    # --------------------------------------------------------------
    # }

The MVC also extends Questa's transaction displaying capabilities with AXI protocol-specific debugging (see figure 4). The MVC keeps track of any AXI protocol activity (for example, transactions started and expected responses) between any layers of the AXI protocol, enabling the user to always work at the highest level of abstraction. However, when unexpected errors occur, determining the cause and effect between high-level transactions and low-level pin activity is greatly improved.

The AXI MVC also includes a verification plan and an open source SystemVerilog coverage object, which the user can tailor to his or her particular application to get protocol-specific coverage. Figure 5 shows the verification plan and coverage model loaded into Questa's verification management environment.

SUMMARY
With the AXI MVC, the OVM, and less than 100 lines of SystemVerilog code, users can now create a complete AMBA3, constrained-random AXI verification environment that can be reused to verify RTL and TLM AXI master and slave units. More details on the individual capabilities of the MVC and the Questa Verification Platform can be obtained by contacting your local Mentor Graphics sales office. In an upcoming article we will show how easily the environment can be extended to include verification of AHB and APB AMBA sub-systems, creating a complete, reusable, constrained-random AMBA SoC verification environment using OVM.
What is a Standard?
by Mark Glasser, Mentor Graphics Corporation & Mark Burton, GreenSoCs
The engineering and computing worlds are filled with standards—from COBOL to SystemVerilog, from RS232 to AMBA. As engineers, not a day goes by when we don't apply a standard of some sort in our work.

What makes a standard a standard? The simple but maybe not so obvious answer is that something is a standard if everyone agrees it is. Is that enough? Who is everyone? To answer these questions we'll take a brief look at a few standards and see how they came to be considered standards.

In the functional verification world, the Open Verification Methodology (OVM) was recently released as a joint production of Cadence Design Systems and Mentor Graphics Corporation. As a verification methodology for SystemVerilog users1, OVM generated a lot of buzz at the recent DVCon conference in San Jose, CA. Although just released the first week of January 2008, as of the end of May, over 3000 people have downloaded copies. In this article we show parallels between OVM and other well known standards and argue that OVM is on the same trajectory toward standardization.

TYPES OF STANDARDS
There are four types of standards2, which break into two main categories — de facto and de jure. De facto standards are either sponsored by a company or organization, or un-sponsored. When a company makes a donation of previously proprietary technology as a standard, this would be a sponsored standard. Verilog is an example of a sponsored standard in the EDA domain. It was initially developed by Gateway Design Automation and then Cadence Design Systems. It was a de facto standard prior to Cadence's sponsorship of it as a standard to OVI (which later became Accellera). Linux, as we will see later, is an un-sponsored de facto standard.

De jure standards can either be legally enforced or simply agreed upon. For instance, it is illegal to sell an electrical appliance in the UK that does not have a "standard" compliant plug on it, in accordance with BS 1363-4:1995. On the other hand, ASCII, the character set used by most computers, is an ANSI de jure standard; yet it would be perfectly legal to sell a computer that did not use it (though Lyndon B. Johnson would not have allowed the US government to buy such a machine). However, very few, if any, actually do.

De facto standards typically have a loyal group of users who rely on them in their work and want to see them remain viable and stable. They must be developed quickly and be responsive to change. In contrast, a de jure standard is typically developed by a committee that deliberates over the details and produces a document or other deliverable (such as a reference implementation), which represents the results of their deliberations. In both cases, the discussion and development may follow more or less rigid rules. For instance, development in the Debian branch of the Linux community is quite tightly managed even though the Linux operating system is a de facto standard.

A de facto standard can become a de jure standard. For example, Cadence's Verilog language, which started as a sponsored de facto standard, is now an agreed-upon de jure standard, ratified by the IEEE. In fact, this progression through the types of standards is quite common.

EXAMPLES OF STANDARDS
VHDL is an example of a de jure standard. In the early 1980s, the US government was looking for a vendor- and tool-independent HDL to enable second sourcing of IC designs. The development of the VHSIC (Very High Speed Integrated Circuit) Hardware Description Language was commissioned, and VHDL IEEE-1076-1987 was mandated by the US Department of Defense in document DOD 454 as the medium for all IC designs to be delivered as part of military contracts3.

Linux, a well known de facto standard, started its life as MINIX, a simple, UNIX-like operating system written by Andrew Tannenbaum for the purposes of educating his students about how operating systems work4. Tannenbaum made the source code (in C) freely available. Linus Torvalds was one of the people who downloaded MINIX and began to tinker with it. By 1991, he wrote his own MINIX-like OS and released it as open source code under the GNU GPL license. Linux has since grown and become an industrial-strength body of code upon which countless applications (and fortunes) have been built.

Like Verilog, the C programming language has lived its life both as a de facto standard and a de jure standard. The C language appeared in 1973 as a derivative of BCPL5. In 1978 the book The C Programming Language6 was published, after C had already been in use for 5 years. It wasn't until 1983 that ANSI formed the X3J11 committee to build a ratified standard for C. The committee finished its work in 1989, producing the
X3.159-1989 standard for the C language. In the approximately 16 years of its existence before becoming a ratified standard, C was already one of the most important programming languages ever invented. By the time the standard was completed, many millions of lines of C code were in use in production systems all across the world, many books on C were published, and the C language was already a staple of college and university computer science and engineering programs.

KEYS TO A SUCCESSFUL STANDARD
How did C and Linux become pervasive as standards without the benefit of a recognized standards body behind them? Conversely, would VHDL have even existed, much less enjoyed any popularity as a design medium, had its use not been mandated by the US Government?

Within the story of C is a lesson about why some so-called standards fail to wear that mantle. In 1975, the US Department of Defense set up a "standards" organization called the High Order Language Working Group. The intent was to devise a language for use within the US government for all embedded systems. The language ADA was the result. But while ADA was a modestly popular programming language, touted as self-documenting and highly error resistant, the "standard" was short-lived as the more popular C became the de facto standard. As we have said at the top of this article, something is only a standard if everybody agrees it is. Sowa describes this phenomenon in his short article "The Law of Standards."7

Sponsored standards often face an uphill battle for community acceptance due to a perception of openness, or lack thereof. Consider, for example, Microsoft's proposal of OOXML as an open, de facto standard. A quote from Sarah Bond, platform strategy manager for Microsoft, rather understates the case. She said, "Perhaps Microsoft hasn't communicated as best as it could have about the openness of OOXML"8.

Acceptance of a standard requires the perception, as much as the reality, of accessibility. The fear of "lock in"—whether real or imagined—can be as damaging to a fledgling standard as poor licensing. Of course this is an old story. In 1975, Sony tried to introduce a sponsored de facto standard. It offered the standard to its rivals, and while there were reports of licensing issues, what caused the demise of Betamax over the inferior VHS format seems to have more to do with perception than anything else.

As to why C and Linux became pervasive as standards without the benefit of a recognized standards body, we can observe from history that there are at least three key features of a successful standard:

1. They provide value to their users
2. They are easily accessible and applicable (well documented)
3. And…they are fun!

C and Linux, along with standards such as HTML, SMTP, XML, DNS, and a long list of many others, became de facto standards without any organization behind them, or long before any standards body became interested in them, because they captivated the imaginations of their users, making them fun to use!

People in search of solutions for various kinds of engineering problems, upon learning about these incipient standards, had an "aha! moment." They quickly realized, standard or no, that these facilities provided value. They didn't require a standards body or a government mandate to tell them this was something they needed.

For example, The C Programming Language is highly readable, a departure from the typical compiler reference manual of the time. The book presents a straightforward model of the language which is easily grasped. Readers could become reasonably proficient in C through self study. It's not clear whether the popularity of the book caused C compilers to proliferate or whether the availability of the compiler motivated people to seek out the book. In either case, the openness of the language definition via the book and the freely available compilers contributed to the widespread proliferation of C.

The latter is another characteristic de facto standards typically have in common: they are freely available, often in open source form. It is likely that we would never have heard about Linus Torvalds or his operating system had potential users not been able to download a copy and use it. Not only could they download it freely, but because they had the source code, they could port it to various machines, augment it with new features, and fix bugs. They could not only get excited about the idea of a free UNIX-like operating system, they could control it and apply it to suit their needs.

BIRTH OF A NEW STANDARD?
Clearly, OVM has captured the imagination of the verification community. How can we account for this excitement? After all, it's not the first entry in the SystemVerilog verification methodology arena. VMM and AVM have been available for several years and each has enjoyed success within the verification community.

It is precisely because the OVM shares many characteristics of well known de facto standards, such as C and Linux. It is available in
open source form under the Apache-2.0 license. This license provides protection for the copyright holders but imposes very few restrictions on its licensees. As with Linux, users can control it and modify it to suit their needs.

The OVM is supported through a unique collaboration between Cadence and Mentor Graphics, two of the largest producers of verification tools. Thus, OVM is not proprietary to any one company, which is an attractive proposition to many users. It is a sponsored de facto standard in the making. As such, we can expect it to be dynamic and its development swift. Companies that are reluctant to invest in writing code using a proprietary language or library can now avoid the problem of feeling "locked in" to a particular vendor when they use OVM.

The recent formation in Accellera of the Verification Intellectual Property Technical Subcommittee (VIP-TSC) does nothing to alter the trajectory of OVM's rise as a de facto standard. Because OVM is open source, it will follow the trajectory of similar open source tools. Like the C language, OVM will ultimately be strengthened by a standardization effort, by providing short-term interoperability and a longer-term migration path from other, perhaps proprietary, methodologies to the OVM.

The inevitability of OVM becoming a de facto standard for building testbenches is almost assured based on a review of the history of computing standards. History shows us that the standards that have thrived are those that effectively solve a common computing problem, are not proprietary, and are open source. Clearly OVM has achieved the market presence and momentum reflective of this pedigree, and the formation of the VIP-TSC is further evidence of this. Community participation in the formulation of a standard will protect users' legacy investments while facilitating the growth of OVM as the de facto standard verification methodology.

NOTES
1. OVM World, http://www.ovmworld.org/
2. Takanori Ida, Evolutionary stability of de jure and de facto standard, Konan University, Faculty of Economics, http://www.econ.kyoto-u.ac.jp/~ida/3Kenkyuu/3Workingpaper/WPfile/2000-2001/standards.pdf
3. Coelho, David R., The VHDL Handbook, Springer, 1989
4. Hasan, Ragib, The History of Linux, 2002, Department of Computer Science, University of Illinois at Urbana-Champaign, https://netfiles.uiuc.edu/rhasan/linux/
5. Ritchie, Dennis M., The Development of the C Language, 1993, http://cm.bell-labs.com/cm/cs/who/dmr/chist.html
6. Kernighan, Brian W., Ritchie, Dennis M., The C Programming Language, 1978, Prentice Hall, Englewood Cliffs, NJ
7. Sowa, John F., The Law of Standards, http://www.jfsowa.com/computer/standard.htm
8. ZDNet, Proprietary past looms over Microsoft OOXML hopes, February 28, 2008, http://news.zdnet.co.uk/itmanagement/0,1000000308,39359136,00.htm
Firmware Driven OVM Testbench
by Jim Kenney – SoC Verification Product Manager, Mentor Graphics
The Open Verification Methodology promotes a well defined SystemVerilog transaction-level interface, inviting integration of a host of verification technologies. Firmware has proven effective for functional verification of embedded hardware, so it follows that OVM integration of a firmware execution environment will advance the verification of embedded systems. To this end, Mentor Graphics has added OVM-compliant interfaces to the Seamless® HW/SW co-simulation tool. This article covers the firmware execution modes, supported processors, and interface points of the Seamless/OVM integration.

There are two primary ways to execute firmware in this environment: host-code execution and target instruction-set simulation (ISS). With host-code execution, the firmware is compiled to run native on the workstation, which most likely would be a Pentium-based machine. Seamless includes an exception handler in the form of a shared library that is linked with your firmware. Each time your code initiates a read/write access to the hardware, an exception will occur, which Seamless intercepts and routes to the design modeled in the logic simulator. This implicit-access mode eliminates the need to call a function that explicitly sends read/write transactions to the hardware. Implicit access enables the same code to be run in simulation and on a live target without modification.

Running firmware native on the host is the fastest mode, executing at Pentium speeds of ~2 GHz. However, it's worth noting that the logic simulator, even at the transaction level, is dramatically slower than most firmware execution modes and will dominate overall runtime.

In general, RTL processor models don't support source debug, but Questa Codelink includes patented technology that delivers source-level debug for MIPS and ARM RTL processor models. Codelink adds software debug to the ModelSim/Questa user interface and connects directly to existing VMC and RTL models as delivered by ARM and MIPS. Figure 1 features a screen shot of Questa Codelink connected to an ARM 926 Design Simulation Model (DSM).
ACCELERATED MEMORY ACCESS
At the heart of Seamless is a Coherent Memory Server. This device acts as a unique storage array for memories modeled in the logic simulator. Memories that are frequently accessed by the processor can have their storage arrays mapped to this server in place of the logic simulator's native storage array. Firmware executing on the host or ISS can access this storage thousands of times faster than a comparable simulation memory cycle. This not only speeds firmware execution, it enables rapid initialization, loading, and examination of memories modeled in the logic simulator. For example, a dual-port video frame buffer that is loaded by the CPU and read by a HW image processor can be loaded in zero simulation time.
SUPPORTED PROCESSORS
Mentor has developed a comprehensive offering of Seamless processor models. Because the OVM transaction interface is implemented in the Seamless kernel and not in each processor model, all models can be used with OVM. Supported cores include the following:
SUMMARY
Seamless enables straightforward integration of firmware stimulus into an OVM testbench. Multiple firmware execution modes provide a continuum of performance/accuracy choices. Firmware-generated stimulus can be applied to the design as transactions or pin-level bus cycles. An OVM TLM testbench can be used to drive available Bus Interface Models that convert transactions to pin-level bus cycles. Seamless supports mixed-mode stimulus, where an address map applies processor cycles to the design as OVM transactions or pin-level bus cycles. Source-level firmware debug is available for all processor representations, including the Questa Codelink debugger for RTL processor models.
Design Verification Using Questa and the Open Verification
Methodology: A PSI Engineer’s Point of View
by Jean-Sébastien Leroy, Design Verification Engineer & Eric Louveau, Design Verification Manager, PSI Electronics
ABSTRACT
This article discusses, from a design verification engineer's point of view, the benefits of using Questa and the Open Verification Methodology in the verification process. This article shows how Questa and its advanced features, such as integrated verification management tools and integrated transaction viewing, can help to achieve verification plan targets.

Questa also enables custom flow integration, such as the PSI-E verification management flow. Coupled with a methodology like OVM, it is possible to deliver a complete, reusable verification environment in an efficient way, whatever the project. OVM is a standardized verification methodology, enabling the verification environment to be reused, even in a different EDA vendor flow.

OVM provides many verification components and gives the verification engineer a way to think about how to verify and what to verify, instead of thinking about how to write verification components. The only things to code are drivers, which depend on the device under test. In this article, we are using a simple example: the test of a UART IP connected to the APB bus of a LEON-2 system-on-chip. We will explain how verification was done before using OVM and a Hardware Verification Language. We will then explain how OVM and Questa help us to achieve better results while speeding up the verification process, by using the SystemVerilog DPI feature to replace the fully-functional LEON processor model and by re-using our HDL verification IPs.

OVERVIEW OF NEW QUESTA FEATURES
Following is a very short overview of the most interesting features for System-on-Chip verification. Questa enables transaction viewing and recording (see figure 1), achieving easier debugging of bus protocols.

Questa comes with new verification management tools, enabling strong coupling between code coverage, functional coverage, simulation runs, and the verification plan. These tools support many plan file formats, such as Excel sheets or GamePlan files.

The latest version of Questa integrates support for the Universal Power Format for power-aware simulation.
Figure 1 – Transaction viewing in the wave window
PRESENTATION OF THE OPEN VERIFICATION METHODOLOGY
The Open Verification Methodology is the result of joint development between Cadence and Mentor Graphics. OVM is a completely open library and proven methodology based on the Universal Re-use Methodology from Cadence and on the Advanced Verification Methodology from Mentor Graphics. OVM has been developed in order to provide true SystemVerilog interoperability and to facilitate the development and usage of plug-and-play verification IP written in SystemVerilog, allowing communication with foreign languages such as SystemC and e. OVM addresses some issues encountered when using proprietary languages, such as:
• Different language subsets
• Incompatible VIP interfaces
• Vendor-dependent libraries/methodologies
• Prohibitive licensing

OVM provides a true open-source (Apache-2.0) library and methodology written in SystemVerilog that can run on any IEEE-1800 compliant simulator. OVM ensures interoperability across simulators and with other high-level languages, and enables verification IP "plug-and-play" functionality. OVM is based on proven methodologies, URM and AVM, and therefore incorporates best practices from more than 10 years of experience.

OVM provides base classes for verification environment elements (see figure 2).

Figure 2 – OVM library (source www.ovmworld.org)

AUTOMATED INTEGRATION OF THE VERIFICATION PLAN IN THE VERIFICATION FLOW
PSI Electronics has developed a custom verification management flow for easier and quicker achievement of verification goals. This flow is based mostly on open-source alternatives, except for the Jasper GamePlan software, which is not open source but is freely available for download. The flow enables original and pleasant viewing of the verification plan using mind-mapping views and icon classification, and provides back-annotation of metrics results.

The PSI-E flow is composed of four steps:
• Moving the specification and features to verify into a verification plan in FreeMind
• Converting the FreeMind file format into the GamePlan file format using a PSI-E XSLT style-sheet
• Running simulation and retrieving coverage results in GamePlan using the Questa UCDB
• Back-annotating results in FreeMind using an XSLT style-sheet
The third step uses the Questa verification management feature to define relations between verification plan items and functional/code coverage results. This is easily done thanks to the verification management tool of Questa, which is able to read a GamePlan verification plan.

Notes can also be added to each feature using the post-it icon.
The processor-driven methodology has many advantages. There time consumed to verify the design. Targeting 100% of code coverage
is no need to know or learn a verification language. You just need to gives a feedback on when we are done so that not all tests need to
have a processor and some C knowledge. By the way, the tests can be exercised to be sure that the IP has been fully exercised. Moreover
be reused in other platforms and can even be used in the real chip automatic results reporting reduces the amount of time spent to annotate
later. But there is a real tedious and disappointing thing with processor- the verification plan. In fact, all coverage and pass/failed results are
driven methodology. Because the processor is running, compiled C automatically reported in the right place in the verification plan and the
code feeds into a memory model instantiated in the top level testbench, end of the tests. Then, any format of document can be generated: a
there is no way to add breakpoints in the source code, no way to view a back-annotated FreeMind map, a word report and even an automatic
variable content, and by the way no source-level debug. This can be a wiki report.
real problem when verifying complex IP such as Ethernet PHY or MP3
But VHDL isn’t a verification language and setting up a complex
decoder, especially when using pointers or complex calculation in the
environment for a complete SoC is a pain. For example, replacing
source code. So, before verifying the IP, we need to first debug our tests
the processor by a bus functional model requires a lot of coding and
and this process can take long time.
debugging time. VHDL doesn’t provide transaction-level modeling nor
As a feedback on our job, we are using code-coverage metrics that a simple way to do constraint random verification. But all of these can
tell us if we are done testing the device or not. Code-coverage is a first be done using VHDL if desired, but at the price of consuming a lot of
available metric providing a measure of quality test and remaining job. time, not to mention forgetting about re-use. What you will finally have is
a sort of custom and proprietary verification language based on VHDL
Referring to the verification map, it is evident that some conditions and command files that only your BFM can work with.
cannot be tested or cannot be easily tested such as low baud-rates.
Moreover, a huge number of tests need to be exercised in order to
exhaustively test the IP. The total number N of tests can be calculated VERIFICATION USING HVL AND OVM
as follow:
On the other hand, OVM greatly helps verification efficiency and
productivity by enabling verification IP re-use while providing a very
powerful environment. A good way to start with OVM is to read the
N = (12 bits divider) x (8 data bits) x (None, Odd, Advanced Verification Methodology book from Mentor Graphics as well
Even parity bit) x (loopback) as the Universal Re-use Methodology guide from Cadence.
The first step is to move the VHDL top level testbench to SystemVerilog.
This can be quickly achieved by re-using all VHDL models (memory,
Replacing values, we have a number N of tests:
bfm …) and re-instantiating them into a SystemVerilog top. Re-using
components is really a great deal because it speeds up the process
to have a working environment while enabling one to re-use working
N = (1212) x (28) x (3) x (2) = 4096 x 2 x 256 x 3 = 6,291,456 complex models that are already coded. Here, a sanity check can
Then setting up RTL code coverage and the integrated PSI-E flow helps to reduce the number of tests and to fill in the verification plan. In fact, all coverage and pass/fail results are automatically reported in the right place in the verification plan at the end of the tests. Then, any format of document can be generated: a back-annotated FreeMind map, a Word report and even an automatic wiki report.

But VHDL isn't a verification language, and setting up a complex environment for a complete SoC is a pain. For example, replacing the processor with a bus functional model requires a lot of coding and debugging time. VHDL provides neither transaction-level modeling nor a simple way to do constrained-random verification. All of this can be done using VHDL if desired, but at the price of consuming a lot of time, not to mention giving up on re-use. What you will finally have is a sort of custom, proprietary verification language based on VHDL and command files that only your BFM can work with.

VERIFICATION USING HVL AND OVM

On the other hand, OVM greatly helps verification efficiency and productivity by enabling verification IP re-use while providing a very powerful environment. A good way to start with OVM is to read the Advanced Verification Methodology book from Mentor Graphics as well as the Universal Reuse Methodology guide from Cadence.

The first step is to move the VHDL top-level testbench to SystemVerilog. This can be quickly achieved by re-using all the VHDL models (memory, BFM …) and re-instantiating them in a SystemVerilog top. Re-using components is really a great deal because it speeds up getting a working environment while enabling one to re-use working, complex models that are already coded. Here, a sanity check can be launched to ensure that everything works properly, just like in the old VHDL testbench.

The next step is to set up OVM. An OVM environment is basically made by inheriting from the provided classes. The only class to overload is ovm_driver, to adapt it to the device(s) under test. Thanks to the object-oriented structure of SystemVerilog, only the printing method and the low-level signal-driving methods need to be coded, since the other useful functions are already provided by the classes' member methods.
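To give an idea of how little code this overloading involves, here is a minimal sketch of a driver derived from ovm_driver. The transaction class, the interface and the signal names are invented for this example and are not PSI's actual code:

// Sketch of a custom driver derived from ovm_driver.
// uart_transaction, uart_if and the signal names are illustrative only.
class uart_driver extends ovm_driver;

  ovm_get_port #(uart_transaction) get_port;
  virtual uart_if vif;  // virtual interface to the DUT pins

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build;
    super.build();
    get_port = new("get_port", this);  // transaction-level port
  endfunction

  // Only the device-specific signal wiggling has to be written by hand.
  virtual task run;
    uart_transaction tx;
    forever begin
      get_port.get(tx);        // fetch the next transaction
      vif.tx_data <= tx.data;  // drive it onto the pins
    end
  endtask

endclass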
So we now have a first SystemVerilog/OVM environment. But in this environment we are still using VHDL models, such as memory models, to get the system-on-chip running. While OVM enables re-using VHDL verification components, it would be better to have them all coded in SystemVerilog, especially due to limitations inherent in VHDL, such as the lack of hierarchical paths. One way to get a SystemVerilog-only environment would be to rewrite all the memory models, all the clock generators, all the verification IP and all the bus functional models in SystemVerilog, but this is a time-consuming process of coding and debugging new models.

Because we are looking for ways to speed up the process while maintaining high verification quality and re-use, at PSI Electronics we have sought to reduce the time spent writing code. We found a solution: keep our UART BFM, to avoid the time we would spend writing a new one, and use the SystemVerilog Direct Programming Interface (DPI). Using DPI, we just need to write a master AHB bus functional model, which is driven by the already-coded C tests (almost the same ones that were used in the VHDL testbench). In that case, verification process and environment setup time is greatly reduced because we no longer need processor program memory models, for example.

In this way, we do not need to write and debug all the models; only the minimum necessary models are translated into SystemVerilog, and only the AHB BFM needs to be programmed. So we quickly have a verification environment that is bug-free, enabling faster and better verification results. We can then concentrate on translating all our models into SystemVerilog while still being able to achieve good verification in the meantime.
CONCLUSION

Mixing OVM, SystemVerilog and Questa together is a powerful and efficient way of achieving reliable design verification. Questa provides the verification engineer with many features that help in debugging and managing simulation, such as transaction viewing, memory content viewing and modification, custom radix definition and verification management tools. With these features, it is possible to integrate a custom flow, such as the one used by PSI Electronics, and in that way to automate tasks.

As shown, using FreeMind and XML standards permits you to build friendly verification management flows. As for methodology and mechanism: an HDL quickly shows its limits when you start thinking about re-use or about complex devices under test. It is always possible to develop complex environments and complex scenarios involving constrained-random verification or transaction-level modeling, but this is costly. For this reason, OVM and SystemVerilog are a good alternative. OVM provides a ready-made solution for setting up a complex but reliable and re-usable verification environment. OVM ships with a huge number of methods and components that enable the verification engineer to quickly develop complex and reliable verification environments and to set up complex scenarios on a complex design.
A Practical Guide to OVM – Part 1
by John Aynsley, CTO, Doulos
OVM was created by Mentor Graphics and Cadence based on existing verification methodologies originating within those two companies, including Mentor's AVM, and consists of SystemVerilog code and documentation supplied under the Apache open-source license. The official release can be obtained from the website, www.ovmworld.org. The overall architecture of OVM is well described in the Datasheet and White Paper available from that website.

In order to compile OVM applications using Questa, the approach we recommend is:

• to add ./src to the include path on the command line, that is, +incdir+/.../src (see the sample command line below)
• to add ./src/ovm_pkg.sv to the front of the list of files being compiled
• to add the following lines to your various SystemVerilog files:

import ovm_pkg::*;
`include "ovm_macros.svh"
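For example, assuming the OVM release has been unpacked under $OVM_HOME (a placeholder of ours), a Questa compilation command might look like this:

vlog +incdir+$OVM_HOME/src $OVM_HOME/src/ovm_pkg.sv my_testbench.sv

where my_testbench.sv stands in for your own list of source files.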
The complete example consists of the following pieces:

- Verification environment (or testbench)
  - Transaction
  - Driver
  - Top-level of verification environment
    - Instantiation of stimulus generator
    - Instantiation of driver
- Top-level module
  - Instantiation of interface
  - Instantiation of design-under-test
  - Test, which instantiates the verification environment
  - Process to run the test

Since this example is intended to get you started, some pieces of the jigsaw are missing, most notably a verification component to perform checking and collect functional coverage information. It should be emphasized that the purpose of this article is not to demonstrate the full power of OVM, but just to get you up-and-running.

The verification environment is connected to the DUT through a SystemVerilog interface:

interface dut_if();

  int addr;
  int data;
  bit r0w1;

  modport test (output addr, data, r0w1);
  modport dut (input addr, data, r0w1);

endinterface: dut_if

Of course, a real design would have several far more complex interfaces, but the same principle holds. Having written out all the connections to the DUT within the interface, the actual code for the outer layer of the DUT module becomes trivial:

module dut(dut_if.dut i_f);
  ...
endmodule: dut
Within the top-level module, the interface and the DUT are instantiated:

dut_if dut_if1 ();
dut dut1 ( .i_f(dut_if1) );

The transaction is modeled as a class derived from ovm_transaction, as sketched below.
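Only the declaration of this class survives here; a representative sketch follows, with the rand fields inferred from the dut_if signals above and the constructor following ovm_transaction's usual pattern. Treat the details as an assumption on our part rather than the article's original listing:

class my_transaction extends ovm_transaction;

  rand int addr;
  rand int data;
  rand bit r0w1;

  `ovm_object_utils(my_transaction)  // registers the class with the factory

  function new(string name = "");
    super.new(name);
  endfunction

endclass: my_transaction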
The verification environment, and possibly other components in the hierarchy, are derived from the class ovm_env. Objects of type ovm_env may themselves be instantiated as verification components within other ovm_envs. You can instantiate ovm_envs and ovm_components from other ovm_envs and ovm_components, but the top-level component in the hierarchy should always be an ovm_env.

A verification component may be provided with the means to communicate with the rest of the verification environment, and may contain a set of standard methods that implement the various phases of elaboration and simulation. One such verification component is the driver, which is described here line-by-line:

ovm_get_port #(my_transaction) get_port;

The get_port is the means by which the driver communicates with the stimulus generator. The class ovm_get_port represents a transaction-level port that implements the get(), try_get() and can_get() methods. These methods actually originated as part of the SystemC TLM-1.0 transaction-level modeling standard. The driver calls these methods through this port to fetch transactions of type my_transaction from the stimulus generator.

The ovm_component_utils macro provides factory automation for the driver. The factory will be described in a later article, but this macro plays a similar role to the ovm_object_utils macro we saw above for the transaction class. The important point to remember is to invoke this macro from every single verification component; otherwise, bad things happen.

function new(string name, ovm_component parent);
  super.new(name, parent);
endfunction: new

The build method is the first of the standard hooks called back in each of the phases of elaboration and simulation. The build phase is when all of the verification components get instantiated. This build method starts by calling the build method of its superclass, as build methods always should, then instantiates the get port by calling its constructor new. This particular component has no other child components, but if it did, they would be instantiated here.
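From that description, the driver's build method amounts to the following sketch (the original listing is not reproduced here, so take this as a reconstruction of what the prose describes):

virtual function void build;
  super.build();                     // always call the superclass build first
  get_port = new("get_port", this);  // instantiate the get port
endfunction: build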
virtual task run;
  forever
    begin
      my_transaction tx;
      #10
      get_port.get(tx);
      // ... pin wiggling via the virtual interface (not reproduced here)
    end
endtask: run
The run method is another standard callback, and contains the main
behavior of the component to be executed during simulation. Actually,
run does not belong to ovm_component but to ovm_threaded_component. Only threaded components have a run method that is
executed during simulation. This run method contains an infinite loop to
get the next transaction from the get port, wait for some time, then wiggle
the pins of the DUT through the virtual interface mentioned above.
NEXT STEPS
In this article we have examined the OVM verification environment,
transactions and verification components. We will pick up the story again
in subsequent articles, where we will explore OVM simulation phases,
tests, configuration, sequences, and the factory. In the meantime, you
can download the source code for this and other examples from www.doulos.com/knowhow.
Dynamic Construction and Configuration of Testbenches
by Mike Baird, Willamette HDL, Inc.
There are a number of requirements in developing an advanced testbench. Such requirements include making the testbench flexible and reusable, because with complex designs we generally spend as much or more time developing the verification environment and testing as we do developing the DUT. It has been said that any testbench is reusable; it just depends upon how much effort we are willing to put into adapting it for reuse! Given that, however, there are concepts that go into making a testbench that is reusable with reasonable effort:

1. Abstract, standardized communication between testbench components
2. Testbench components with standardized APIs
3. Standardized transactions
4. Encapsulation
5. Dynamic (run-time) construction of testbench topology
6. Dynamic (run-time) configuration of testbench topology and parameters
7. Test as top level class
8. Stimulus generation separate from testbench structure
9. Analysis components

We will primarily address dynamic (run-time) construction of testbench topology and dynamic (run-time) configuration of testbench topology and parameters in this article. In doing so we will lightly touch on test as top level class and stimulus generation separate from testbench structure, as these are related topics.

TRADITIONAL APPROACH LIMITATIONS

A standard testbench structure has a top level module (top). Inside top are the DUT (alu), the DUT interface (alu_if) and a top level class (test_env), which is the test environment that contains the testbench components. In this structure the top level object is the environment class, which embeds the stimulus generator, which contains the algorithm (test) for generating stimulus objects. This limits flexibility when changing tests: to change a test, a new or different stimulus generator must be compiled into the testbench structure. The use of polymorphism, or of stimulus generators that gather information from files, etc., may help but is not a complete answer.

Traditionally a hierarchical class-based environment is built using an object's constructor, a special function which creates the object. Higher level components create lower level components by calling the lower level component's constructor. In this approach the initial block of the top level module (top) calls the constructor of the test environment class (test_env), which in turn calls the constructors of its child objects, and so on down the hierarchy, as sketched below. Once construction is complete, the simulation phases begin, where connection of components, running and post-run processing are done.
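In code, the traditional approach looks something like the following sketch; the alu and alu_if names come from the article's example, but the class bodies are ours, abbreviated for illustration:

// Traditional construction: each parent news its children directly.
class stim_gen;
  function new(); endfunction
endclass

class test_env;
  stim_gen gen;
  function new();
    gen = new();  // child created inside the parent's constructor
  endfunction
endclass

module top;
  alu_if alu_if1();  // DUT interface
  alu dut(alu_if1);  // DUT
  test_env env;
  initial begin
    env = new();     // the entire hierarchy is fixed here, before run time
    // ... connect phase, run phase, post-run processing ...
  end
endmodule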
This approach limits the flexibility of creating testbench components, as their types are determined before the simulation phases begin, essentially at compile time. To change a component type, a change to the structural code and a recompile are needed. Polymorphism helps but again is not a complete answer.

Configuration information is passed from higher level components to lower level components through constructor arguments. Structural topology, parameter settings (array sizes, constants, etc.) and operational modes (error injection, debug, etc.) are examples of what may be configured in this manner.

With multiple levels of hierarchy and potentially many parameters, using constructor arguments becomes difficult. It becomes hard, or even messy, to pass parameters down several levels, and if we have more than 2 or 3 levels of hierarchy the problem becomes progressively worse. This approach does not scale well when adding configuration parameters.
Figure 2: Test as Top Level Class

Configuration data may be in the form of an integer value, a string or an object. An object may be used to encapsulate data that is not an integer value or string. The higher level component stores the data with a unique lookup string and further specifies an OVM hierarchical path, relative to itself, specifying which of the lower level components in its sub-hierarchy is allowed to access this data. The lower level component retrieves data using the lookup string. Its search for the data begins at the top, in the global space, and proceeds down the hierarchical path to the component. It retrieves the data at the first successful match it finds. Wildcarding is allowed in both the lookup string and the path, to provide flexibility in setting and getting data.

During the run phase of simulation, the factory may be used to create stimulus objects, as opposed to calling the constructor directly to create the stimulus object.

With the top level class being a test, using test environment configuration data together with test factory overrides provides each test with the flexibility to configure the test environment to its requirements. Once all the testbench component classes, stimulus classes and tests have been compiled, different tests can be run without recompiling.
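The lookup-string mechanism described above is exposed in OVM through the set_config_* and get_config_* calls. A small sketch, with the component classes, path and field name invented for illustration:

// Higher-level component: publish a value for components below it.
// Both the instance path and the field name may use wildcards.
class test_env extends ovm_env;
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
  virtual function void build;
    super.build();
    set_config_int("*.driver*", "error_inject", 1);
  endfunction
endclass

// Lower-level component: look the value up by its field name.
class my_driver extends ovm_driver;
  int unsigned error_inject;
  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction
  virtual function void build;
    super.build();
    void'(get_config_int("error_inject", error_inject));
  endfunction
endclass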
Mike Baird has 26 years of industry experience in hardware design
and verification. He is the author and an instructor of Willamette HDL’s
OVM training course. He has 15 years experience teaching design and
verification languages including Verilog, SystemVerilog, OVM, C++ and
SystemC. He is the founder of Willamette HDL (1993 – present).
OVM Productivity using EZVerify
by Sashi Obilisetty, VeriEZ
INTRODUCTION

Verification of a chip is easily the most time-consuming task confronting the product team. Increasingly, verification engineers are using innovative technologies and newer methodologies to achieve satisfactory functional verification. SystemVerilog is fast becoming the language of choice for implementing verification projects. Its rich set of verification-friendly constructs, IEEE standard status, and universal support across multiple vendor platforms warrants its overwhelming acceptance.

VeriEZ's EZVerify is a unique tool suite that offers OVM users a static analysis tool (EZCheck) to perform over 30 OVM rule-checks and a knowledge extraction tool (EZReport) to create persistent documents that outline hierarchy and connectivity.

EZVERIFY – BACKGROUNDER

EZVerify is the industry's first SystemVerilog productivity solution to provide static analysis capability for design, assertion and testbench code as described in the IEEE 1800 standard. Its main capabilities are:

• Identifying coding errors early in the flow, giving beginner and experienced users alike the opportunity to fix such errors with EZCheck, a programmable static linter for HDL-based modules
• Providing comprehensive documentation of design and verification information by analyzing the input modules (new, legacy or external IP).

SystemVerilog projects are complex, and the modules in a typical HDL-based project can easily amount to thousands of lines of code. In addition, projects invariably include multi-person, geographically dispersed teams. It is not enough to have unwritten rules and policies for such projects. It is imperative that companies have corporate-wide enforceable policies to meet aggressive time-to-market schedules while continuing to develop consistent and reusable code.

EZVerify aims to enable efficient design and verification by providing innovative tools that fit into a typical design flow, while specifically targeting methods for detecting errors in design and verification code, and for promoting reuse.

Figure 1: EZCheck Use Model

EZCheck is a static analysis utility for SystemVerilog modules. On the design front, it can detect problems such as latch inference, race-prone implementation and areas leading to synthesis/simulation discrepancies. On the verification front, it can detect problems such as erroneous usage of concurrency constructs, missing function output assignments, and several other serious errors that are not normally uncovered without several hours of debugging.

EZCHECK: RULESETS

A collection of rules, or a ruleset, is the primary input to EZCheck. A rule can be thought of as a well-defined check, over and beyond the syntax and semantics of the language. For example, a missing semicolon
is a "syntax check" that will be identified by all parsers. Imposing a restriction on the usage of certain constructs, however, is a "rule check". EZCheck comes standard with several rulesets, each of which addresses best practices in various domains of SystemVerilog, such as Functional Coverage, Assertions, Design and Verification.

Figure 2: Available Rulesets in EZCheck

Rulesets are ASCII files that can be created using any standard text editor. Rules are selected and customized in a ruleset. Rules can be added and customized by setting appropriate values for various rule attributes:

• Name: The name of the rule to be added; a different user-supplied rule name may be bound to the predefined rule name. If a user-supplied name exists, all violations and messages will use the user-supplied name.

For example, the following entry binds the predefined rule name 'ValidPrintAndScanFunction' to the user-supplied name 'SafeFunctionCall'. The prefix 'N' is used to identify the entry as a rule name attribute.

------------------------------------------------------------------------
N:ValidPrintAndScanFunction:SafeFunctionCall
------------------------------------------------------------------------

• Value(s): Values are specified using the prefix 'V'. Values may be user-selectable or user-specified. An example of a rule with user-selectable values is "SyncControl", with user-selectable strings "ALL", "ANY", "ORDER" or "CHECK". For this rule, the user may select one or more of the user-selectable strings as appropriate controls for the "sync" call. An example of a rule with user-specified values is "ProgramName", which accepts regular expression syntax for valid program names.

------------------------------------------------------------------------
#SyncControl rule with user-selectable string 'ALL'
N:SyncControl
V:ALL

#ProgramName rule, should start with uppercase, and end with _prg
N:ProgramName
V:^[A-Z][A-Za-z]*_prg
------------------------------------------------------------------------

• Hint: The optional hint attribute is specified using the 'H' prefix. All rules may have hints. The following entry shows a rule with the optional hint attribute:

------------------------------------------------------------------------
#SyncControl rule with user-selectable string 'ALL'
N:SyncControl
V:ALL
H: Using 'ALL' options prevents deadlocks
------------------------------------------------------------------------

• Severity: The optional severity attribute uses the 'S' prefix and takes any string as its value. Typically, users will define a few severity levels and assign these levels to the rules. The following entry shows rules with 2 severity levels, ERROR and WARNING:

------------------------------------------------------------------------
#SyncControl rule with user-selectable string 'ALL'
N:SyncControl
V:ALL
S:ERROR

#ProgramName rule, should start with uppercase, and end with _prg
N:ProgramName
V:^[A-Z][A-Za-z]*_prg
S:WARNING
------------------------------------------------------------------------
• Category: The optional category attribute 'C' is used to categorize rules. Category names may be any user-specified strings. Typically, users will predetermine the categories to be used and place rules in the appropriate categories.
------------------------------------------------------------------------
#SyncControl rule with user-selectable string ‘ALL’
N:SyncControl
V:ALL
C:Coding
#ProgramName rule, should start with uppercase, and end with _prg
N:ProgramName
V:^[A-Z][A-Za-z]*_prg
C:Naming
------------------------------------------------------------------------
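Putting the attributes together, a single rule entry may carry all of the optional attributes at once. The following entry is a sketch of ours, composed from the examples above:

------------------------------------------------------------------------
#SyncControl rule with value, hint, severity and category attributes
N:SyncControl
V:ALL
H: Using 'ALL' options prevents deadlocks
S:ERROR
C:Coding
------------------------------------------------------------------------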
Figure 3: EZReport Use Model
Some of the errors highlighted by the EZCheck OVM rule-checker for verification code are shown below.

Verification Environment Rules: These rules are designed to provide compelling value by checking for classic lint-style errors. Rules include checks for the following occurrences in SystemVerilog code:

• Missing calls to the super.() method, when applicable
• Creation of related sub-classes using the recommended 'create_component' method from the ovm_factory
• Global timeouts specified for individual tests using the recommended OVM routines

The annotated build function below illustrates two violations flagged by EZCheck: 'xbus_env' is missing a constructor, and the string "xbus_slave" does not match the name of the corresponding component, 'xbus_slave_agent':

if(has_bus_monitor == 1) begin
  bus_monitor = new xbus_bus_monitor("xbus_monitor");
end
slaves = new[num_slaves];
for(int i = 0; i < num_slaves; i++) begin
  $sformat(inst_name, "slaves[%0d]", i);
  $cast(slaves[i], create_component("xbus_slave", inst_name));
end
endfunction : build

endclass
CONCLUSION
Productivity is one of the most important criteria used by end-users when evaluating new methodologies. OVM promises a high-productivity environment for verification engineers. Adoption of a new methodology comes with its challenges. EZVerify provides a solid infrastructure to OVM users, and can be used consistently throughout the company to accelerate adoption of OVM.
Requirements-Centric Verification
by Peet James, Sr. Verification Consultant, Mentor Graphics
What is a design requirement?
What is a verification requirement?
How do I define a generation stimulus sequence or scenario?
How do I decide what to check in a scoreboard, and what to check in an assertion?
How do I define coverage points?

In an ideal world, a design specification would detail all the necessary requirements of the design under test (DUT). Yet, even if this were the case, it would most probably still not clarify everything that is needed for a design team to implement the requirements, and for the verification team to verify adherence to them. For instance, the design specification might give general information about an interface, but a second protocol specification might be needed to solidify the necessary details of that requirement. Even with multiple documents to reference, it is quite often still unclear and ambiguous. Typically, the necessary information is in someone's head, and a series of discussions is needed to refine the requirement for actual use. Often one big ambiguous requirement is broken up into several refined requirements. On top of this, some requirements are not evident upfront at all, and they tend to develop only as you progress along with the project. No one had thought of them until the actual implementation was underway. Clearly, it would be beneficial to have some structured way to gather and maintain design requirements.

After a decade of using high-level verification languages (SystemVerilog, Vera, e, etc.) which enable coverage-driven constrained-random verification, it remains difficult for most verification teams to define their generation stimulus, correctness checks (scoreboarding & assertions) and especially coverage groups, points, etc. Blank stares and sheer boredom typically result when most verification teams call a meeting to brainstorm and identify all these verification requirements. A novel way of extracting design requirements, and subsequently translating them into corresponding verification requirements, is being used successfully by some groups.

A design requirement is information about the functional operation of a design. It is something about the design that needs to be implemented by the designers before they will be confident in saying that the design capture is complete. It is a subset, a microcosm, of the main specification. A verification requirement is the systematic set of generation stimulus, correctness checks, and coverage points corresponding to a specific design requirement. It is something about the design that both designers and verification engineers want to stimulate, check and cover before they will be confident in saying that the design has been verified.

The proposed path from design requirements to tangible verification requirements, and ultimately actual verification code, typically takes 3 steps. Each of these steps takes a certain amount of discipline and expertise:

1) EXTRACTING DESIGN REQUIREMENTS.
This is the hard part; it has historically proven problematic to get engineers to sit down and go over all the documents, and all the …
requirements need to be refined so that they are clear, and prioritized so that it is known which ones are critical and which ones are just 'nice to have'. The resulting detailed design requirements are then taken into actual RTL by the design team, and they are also used to strategically drive the verification effort.

Ideally, higher-ups would write a totally awesome design spec, and then would create this new detailed design requirements spec/database. Then designers would go off and implement RTL, and verification engineers would translate the design requirements into verification requirements and then implement them. But the reality is that it takes everyone to create and maintain a requirements database. The give and take, the discussions, the arguing, the struggle is often a very beneficial part of the database development. It brings out problems early, which is what verification is all about. Taking the upfront time to create and maintain a requirements document/database is proving to produce big payoffs for both the design and verification efforts. Design requirements …
Subscribe: http://www.mentor.com/horizons