
A PUBLICATION OF MENTOR GRAPHICS JUNE 2008—VOLUME 4, ISSUE 2

Partners' Corner (pages 29-48):
Design Verification Using Questa and the OVM
A Practical Guide to OVM - Part 1
Dynamic Construction and Configuration of Testbenches
OVM Productivity using EZVerify

Welcome to the DAC '08 Verification Horizons!
By Tom Fitzpatrick, Editor and Verification Technologist

In this issue:

Achieving DO-254 Design Assurance using Advanced Verification Methods...page 7
Transitioning verification processes from directed testing to constrained-random, coverage-driven verification.

Ending Endless Verification with 0-In Formal...page 13
How to apply 0-In® formal tools to improve productivity.

Intelligent Testbench Automation Turbo-Charges Simulation...page 16
How the new inFact™ iTBA tool can turbo-charge your regression farm.

Using Questa Multi-View Verification Components and OVM for AXI Verification...page 19
Single model support for complete verification at all levels: system, transaction, & RTL.

What is a Standard?...page 23
Is the OVM a de facto standard?

Firmware Driven OVM Testbench...page 26
Seamless® OVM-compliant interfaces add firmware execution (and debug).

Tribal Knowledge: Requirements-Centric Verification...page 49
Towards a structured way to gather and maintain design requirements.

If last year's DAC issue was "super-sized," then I am pleased to welcome you to this year's super-duper-sized DAC issue of Verification Horizons.

As I write this note, I've just gotten home from coaching my son's baseball team this evening. It was a great game, and there are a number of things I could write about it. But given that we achieved our first win of the season (after three losses), I find myself thinking about "the thrill of victory." My coaching philosophy is simply for the boys to do their best and have fun. My secondary philosophy is that it's always more fun when you win, so I always encourage the boys to work together, stay focused on the game, and think about what they need to do on every play.

In gathering my thoughts to write to you tonight, I couldn't help but reflect back a few months to Dr. Wally Rhines' keynote address at DVCon, entitled "Ending Endless Verification." The keynote has served as a "game plan" for us in Design Verification Technology here at Mentor, and it should come as no surprise that this issue of Verification Horizons highlights each of the featured technologies discussed by Dr. Rhines, including several articles that highlight the Open Verification Methodology (OVM).

Our feature article, "Achieving DO-254 Design Assurance Using Advanced Verification Methods," tells how Mentor and our friends at XtremeEDA teamed up to help Rockwell Collins transition their verification processes from "traditional" directed testing to today's constrained-random, coverage-driven verification provided by Questa® and the Advanced Verification Methodology (AVM). The article illustrates the productivity advantages of using the AVM to develop the environment, with DO-254 compliance achieved through the superior ability of Questa Verification Management to track the coverage results of the random environment and, most importantly, correlate those results back to the original design requirements. While not every project requires DO-254 compliance, the ability to track multiple types of coverage and correlate them back to the requirements is a productivity gain that every project can realize. Everything in this article is directly applicable to the OVM as well.

The next three articles explain how we're delivering on some of the issues raised in Dr. Rhines' keynote. Harry Foster leads off with a discussion of how to apply our 0-In® formal tools to improve productivity. As you'll see, the real advantage comes from achieving more verification per cycle, rather than just adding more cycles of verification. This productivity discussion is continued in our next article, which explains how our new inFact™ iTBA tool can turbo-charge your regression farm by automatically distributing non-redundant, rule-based, targeted simulations across multiple CPUs — another way of getting more verification per cycle. The third article in this arc introduces our new Multi-View Verification Components (MVCs). An MVC is a single model that supports complete verification at the system, transaction, and register-transfer levels. The article explains how our AXI MVC can be used as part of a constrained-random OVM environment to verify an AMBA AXI system. Taken together, the combination of formal methods, intelligent testbench automation, and multi-abstraction models provides the keys to ending endless verification. The OVM and Questa provide the foundation to allow them to work together.

The next set of articles provides some wonderful insights into the extraordinary growth of the OVM since its initial release back in January. As a member of the OVM development team from its inception, I must confess to a certain level of pride at how well the OVM has been received — a feeling not unlike being the coach of a winning baseball team.

With the recent formation of the Accellera Verification Intellectual Property Technical Subcommittee (VIP-TSC), there has been quite a bit of talk about the industry finally getting to the point of standardizing on a verification methodology. As Mentor's representative to the VIP-TSC, I'd like to take this opportunity to point out that the TSC is currently chartered only to define interoperability between methodologies; it is not to develop a new methodology (nor is its purpose to pick an existing one on which to standardize). Since the OVM is open-source, the TSC has full access to all of the OVM, and we look forward to working with the TSC to solve the interoperability problem for the industry. Our first OVM article addresses the question of a standard methodology by making the case that the OVM, because it is open-source, is ideally suited, and well on its way, to becoming a de facto standard. The success of the OVM just reinforces that feeling of "the thrill of victory."

The next few articles reinforce this point by showing how much the OVM ecosystem is growing, both within Mentor and throughout the industry. We begin with a discussion of how Mentor's Seamless® HW/SW co-simulation tool has added OVM-compliant interfaces to add firmware execution (and debug) as one of the stimulus generation options for an OVM-based verification environment. To further illustrate the growth of the OVM ecosystem, we next have an expanded "Partners' Corner" to highlight the activities of some of our Questa Vanguard Partners.

We begin with an engineer's point-of-view on using the OVM with Questa. Our friends at PSI Electronics were quickly able to adopt the OVM to replace and improve on their previous methodology. Our next two articles are by two of our valued training partners. Doulos begins a series of articles that will span the next several issues of Verification Horizons to provide a Practical Guide to the OVM. The first installment of the series is a survey of what's in the OVM release and a look at some of the basic concepts. Be sure to check back in the next issue for more valuable insight. After that, Willamette HDL shows how the OVM's dynamic construction and configuration mechanisms can be used to foster reuse of verification components in different contexts. Then our friends at VeriEZ explain the static analysis capabilities of their EZVerify tool, which can automatically examine your OVM code to make sure it follows the proper guidelines.

Last, but not least, we close this issue with our Consultant's Corner "Tribal Knowledge" article. Many of you may be familiar with Peet James from his years at Qualis, during which he published a number of papers and articles on verification. We're pleased to welcome him to the Mentor family and the growing list of Verification Horizons contributors with his article on Requirements-Centric Verification.

So, getting back to baseball for a moment, I'm now faced with the challenge of helping the boys continue their winning ways. Granted, this is a much better challenge to have than to convince a losing team of 8-10 year-olds that they'll win eventually, but it's still a challenge. I'm hoping that the sweet taste of victory will motivate them to work even harder, but the emphasis will always be on doing their best and having fun.

If you're going to be at DAC, I'd gently suggest the same advice. Stop by the Mentor booth, and feel free to ask for me if you'd like to say "hi." We'd love the opportunity to talk to you about how the Mentor team can help you experience "the thrill of victory" on your next project.

Respectfully submitted,
Tom Fitzpatrick
Verification Technologist

Verification Horizons is a publication of Mentor Graphics Corporation, all rights reserved.
Editor: Tom Fitzpatrick
Program Manager: Rebecca Granquist
Senior Writer: Todd Burkholder
Wilsonville Worldwide Headquarters, 8005 SW Boeckman Rd., Wilsonville, OR 97070-7777. Phone: 503-685-7000
To subscribe visit: http://www.mentor.com/products/fv/verification_news.cfm

Table of Contents

Page 7...
Achieving DO-254 Design Assurance using
Advanced Verification Methods by Dr. Paul Marriott,
Director of Verification, XtremeEDA Corporation; David Landoll, Mentor Graphics;
Dustin Johnson, Electrical Engineer, Rockwell Collins; Darron May, Product Manager,
Mentor Graphics.

Page 13...
Ending Endless Verification with 0-In Formal
by Harry Foster, Chief Verification Scientist, Mentor Graphics

Page 16...
Intelligent Testbench Automation
Turbo-Charges Simulation
by Mark Olen, Product Manager, Verification, Mentor Graphics

Page 19...
Using Questa Multi-View Verification
Components and OVM for AXI Verification
by Ashish Kumar, Verification Division, Mentor Graphics

Page 23...
What is a Standard?
by Mark Glasser, Mentor Graphics Corporation & Mark Burton, GreenSoCs

Page 26...
Firmware Driven OVM Testbench
by Jim Kenney – SoC Verification Product Manager, Mentor Graphics

Partners’ Corner:

Page 29...
Design Verification Using Questa
and the Open Verification Methodology:
A PSI Engineer’s Point of View
by Jean-Sébastien Leroy, Design Verification Engineer &
Eric Louveau, Design Verification Manager, PSI Electronics

Page 35...
A Practical Guide to OVM – Part 1
by John Aynsley, CTO, Doulos

Page 40...
Dynamic Construction and
Configuration of Testbenches
by Mike Baird, Willamette HDL, Inc.

Page 44...
OVM Productivity using EZVerify
by Sashi Obilisetty, VeriEZ

Page 49...
Tribal Knowledge: Requirements-Centric
Verification
by Peet James, Sr. Verification Consultant, Mentor Graphics

Mentor and our friends at XtremeEDA teamed up to help Rockwell Collins transition their verification processes from "traditional" directed testing to today's constrained-random, coverage-driven verification provided by Questa® and the Advanced Verification Methodology (AVM).

Achieving DO-254 Design Assurance using Advanced Verification
Methods by Dr. Paul Marriott, Director of Verification, XtremeEDA Corporation; David Landoll, Mentor Graphics;
Dustin Johnson, Electrical Engineer, Rockwell Collins; Darron May, Product Manager, Mentor Graphics

DO-254 is a standard enforced by the FAA that requires certification of avionics suppliers' designs and design processes to ensure reliability of airborne systems. Rockwell Collins had been using a traditional directed-testing approach to achieve DO-254 compliance [1]. This is time consuming and, as their designs were becoming more complex, they wanted to take advantage of the productivity gains a modern constrained-random coverage-driven environment provides, but still ensure the needs of the DO-254 process could be met.

A methodology built using SystemVerilog with Mentor's Advanced Verification Methodology (AVM) [4] was assembled and a flow developed that linked the formal requirements documents through the verification management tools in Questa. This allowed coverage data to be used to satisfy the DO-254 process, and greatly reduced the verification effort required. The target design, an FPGA DMA engine, was recently certified using the flow developed to meet DO-254 Level A compliance, the highest level, which is used in mission-critical systems.

The process flow developed for this project will be of interest to anyone who needs high levels of verification confidence with the productivity gains that methodologies such as the AVM or OVM provide.

INTRODUCTION

DO-254 is a process which, simply stated, ensures that all specified design requirements have been verified in a repeatable and demonstrable way. The key aspect of this is that all the requirements of the system are well specified, and each of those requirements can be demonstrated to have been verified. Traditionally, a directed-testing approach is used, as a test can be written for each requirement. Conceptually, a directed test essentially encapsulates one, or possibly several, functional coverage points [2]. Hence it is the linking of the functional coverage points to the design requirements which is really being used to verify those requirements.

XtremeEDA was contracted by Mentor Graphics to train the Rockwell Collins team in the use of the AVM, and to design an environment and process whereby the linking of the functional coverage data to the formal requirements could be automated.

ORIGINAL METHODOLOGY

Historically, Rockwell Collins (RC) utilized a thorough DO-254 compliant methodology of creating and running a large suite of self-checking directed tests, as well as performing manual and automatic analysis of the code (and any other methods as required by a particular program). However, although this methodology is thorough and produces a high quality design, some bugs may not be found until late in the design cycle, or during lab bring-up, which makes identifying, fixing, and re-verifying them more costly.

PROJECT OVERVIEW

This project, an FPGA DMA Engine, had increased complexity due to high configurability, potentially exacerbating this situation. RC undertook the use of newer advanced verification methodologies in hope of achieving increased robustness, finding bugs earlier in the project cycle, and verifying the overall device more quickly. As a result, RC hoped to shorten the overall verification cycle and achieve better quality results.

Figure 1 shows the DMA Engine's initial configuration set-up process, where the uP writes a list of DMA records to the Xfer Memory. These records describe repeatable transfers to be executed by the DMA Engine once started.

The exact set-up of these records is target device specific, thus the DMA Engine is highly configurable via these records. Once this set-up is complete, the DMA Engine is started and the data transfers begin (also shown in Figure 1, right-hand side). During regular data transfers, data moves between the Ethernet chip and the uP's local RAM as well as between the local RAM and the uP.

There are several complexities with this design that increase the verification challenge. First, there are two types of data, namely Samples and Messages. The Messages queue up and must all be delivered. The Samples are from measurement devices that are infrequently updated compared to the speed of the uP. As such, it makes no sense to read from the sample ports faster than external measurements can change, as this would re-read the same data value over and over. As such, there are sampling clocks that indicate the correct time to sample these ports.

Many data transfers require a three-step process of first locking the port associated with the data transfer, then transferring the data, and finally closing the port. Hence a single transfer can require numerous reads/writes. In addition, this behavior is highly configurable via the DMA records previously discussed. All these possible record types and combinations must be tested.

The combinations and permutations of these transfers and configurations make the verification of this device challenging, and would require a large number of directed tests using the traditional methodology.

METHODOLOGY CRITERIA

The following criteria were used to select a better methodology:

1. It must pass the DO-254 compliance review
2. It must use industry standard tools and languages
3. It must interface with current and future designs
4. It should support new hardware verification language (HVL) constructs that enable productivity gains

It was soon determined that SystemVerilog together with Mentor's AVM could potentially satisfy all the above criteria if some effort was put into developing a process flow with DO-254 compliance as the foremost goal.

PRODUCTIVITY ENABLERS

Constrained Random Generation

The term "random" can create confusion in the world of DO-254 compliance as 100% certainty is not usually associated with a random process. However, random in the context of verification means selecting values from a valid range, which is set by constraints. These constraints are determined by the design requirements, and the randomization within these constraints is to allow a more rapid exploration of the set of valid stimulus that is necessary to assure design compliance. The constraints can be visualized as a tree-like structure which, if every branch is explored, encapsulates all possible stimuli that can be applied to the design.

The branches are selected at random, but the specification of the branches themselves comes from the formal requirements. For a given seed, the process is entirely repeatable, and this is an essential aspect of achieving compliance.

Coverage

Completeness is a necessary criterion for compliance. In directed testing, completeness is assured by running all tests. In a constrained-random environment, an automated means of measuring completeness is required. This is provided by coverage. Different kinds of coverage metrics can be collected, and it is their aggregate which is used to determine completeness. This approach is referred to as the Total Coverage Model and is the process of aggregating the metrics from all tests and simulation runs into a composite dataset. A key point is that the design requirements can be mapped into coverage points which can be added by the verification engineers. With this linkage of coverage items to requirements in place, the coverage points provide clear confirmation that requirements have been met.

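To make these two enablers concrete, here is a minimal SystemVerilog sketch of the pattern being described: a constraint restricts stimulus to the range a requirement allows, and a covergroup over the same fields provides the evidence that the requirement was exercised. The class name, field names, and requirement labels below are invented for illustration; they are not taken from the Rockwell Collins environment.

// Illustrative sketch only; names and requirement IDs are hypothetical.
class dma_record;
  // Hypothetical requirement REQ-3.1.1: transfer length is 1 to 64 words.
  rand bit [6:0] length;
  // Hypothetical requirement REQ-3.1.2: only port IDs 0 to 7 are legal.
  rand bit [3:0] port_id;

  // Constraints limit randomization to the values the requirements allow.
  constraint c_req_3_1_1 { length inside {[1:64]}; }
  constraint c_req_3_1_2 { port_id inside {[0:7]}; }

  // The covergroup mirrors the same requirements; hitting these bins is
  // the feedback that the requirement has actually been exercised.
  covergroup cg_dma_record;
    cp_length  : coverpoint length  { bins short_xfer = {[1:8]};
                                      bins long_xfer  = {[9:64]}; }
    cp_port_id : coverpoint port_id { bins ports[]    = {[0:7]}; }
    len_x_port : cross cp_length, cp_port_id;
  endgroup

  function new();
    cg_dma_record = new();
  endfunction

  // Called by the environment after each randomized record is generated.
  function void sample_coverage();
    cg_dma_record.sample();
  endfunction
endclass

In the flow described in this article, each of those coverpoints (or an equivalent assertion) would then be linked back to its requirement in the DOORS-generated testplan, so that the merged coverage database shows directly which requirements have been verified.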

Design Intent Specification

Design Intent Specification refers to the ability to document the expected design protocols in a tool readable format. For SystemVerilog, that format is called SystemVerilog Assertions (SVA). This feature of the SystemVerilog language allows the user to describe design signal activity over time so that it is constantly being checked during simulation [3]. These time-based signal behaviors are also known as temporal assertions.

While functional coverage captures and checks the design state at a moment in time, temporal assertions are used to check and cover design activity over time. Requirements often describe such design intent, or in other words, how the design should react to stimulus. Requirements of this type translate well into temporal assertions. Examples of these types of assertions include target protocol sequences, error recovery, or sampling of design state.

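As a small, self-contained illustration of what such a temporal assertion can look like, the sketch below encodes a lock-before-transfer rule similar in spirit to the three-step lock/transfer/close behavior described for the DMA ports earlier. The module, signal names, and the 16-cycle bound are invented for illustration and are not taken from the actual design.

// Hypothetical requirement: a transfer may only start on a port that is
// locked, and the lock must be released within 16 cycles of completion.
module dma_port_checks (
  input logic clk,
  input logic rst_n,
  input logic lock,
  input logic xfer_start,
  input logic xfer_done
);

  property p_xfer_requires_lock;
    @(posedge clk) disable iff (!rst_n)
      xfer_start |-> lock;
  endproperty

  property p_lock_released;
    @(posedge clk) disable iff (!rst_n)
      xfer_done |-> ##[1:16] !lock;
  endproperty

  a_xfer_requires_lock : assert property (p_xfer_requires_lock);
  a_lock_released      : assert property (p_lock_released);

  // Covering the legal sequence confirms the scenario really occurred.
  c_locked_xfer : cover property (
    @(posedge clk) disable iff (!rst_n) lock ##1 xfer_start);

endmodule

A checker like this is typically attached to the design with a bind statement, so the same requirement-derived assertion runs in every simulation (and can be handed to a formal tool) without modifying the RTL.
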
TRACEABILITY

Simply put, DO-254 compliance means that the verification team can demonstrate that they did, indeed, thoroughly verify all the design requirements and can repeat that effort if required to do so. The key to achieving this is traceability - i.e. demonstrating how each requirement was verified. Since coverage is being used to provide the feedback that the requirements have been verified, an automated process is essential to ensure productivity and repeatability, and hence to provide an automated traceability path from each documented requirement to the coverage item or assertion that verified it.

The Verification Management environment (hereinafter referred to as VM) in Questa is a new feature that allows a "testplan" (or verification plan - the terms are interchangeable) to be linked to coverage data captured during simulation. In a DO-254 compliant methodology, a requirements capture tool is typically used to store the design specification and the testplan. The requirements management tool DOORS (from the company Telelogic) was used on this project. Links were created between the testplan maintained in DOORS, the VM testplan stored in the Unified Coverage Database (UCDB), and the results of actual simulation runs that were also stored in the UCDB. Merging the testplan and simulation run UCDBs provided feedback as to the current status of the overall compliance and verification effort. The flow is depicted in Figure 3.

There are essentially three steps to achieve requirements linking:

1. Exporting the testplan from DOORS
2. Linking the UCDB testplan to the coverage items
3. Merging the simulation results with the testplan

With this methodology there is not necessarily a one-to-one relationship between the requirements and testcases. There is, however, a strict auditable relationship between requirements and their assertions or coverage points. This provides the important traceability link that shows a requirement was covered (i.e. verified).

EXAMPLE PROCESS

The following are DOORS screen shots demonstrating the steps required to support requirements traceability. They illustrate the process of connecting requirements to tests, coverage items, and assertions.

The basic process is that requirements and test cases are entered in and managed within DOORS. DOORS can produce a testplan report that is the basis for the Questa testplan. The verification engineers fill in the testplan with the items that verify requirements have been covered. Verification is run and the coverage data is stored and then merged with the testplan. The final result is that the testplan contains coverage data that links to the requirements being tested.

Figure 4 (on the following page) shows the testplan entered into DOORS (note: for readability, the testcase description/detail text is omitted from these screenshots). It highlights a test case, the covergroup names that it is linked to, and the type of coverage the link refers to. In this figure, the in-links column shows the text from the object that is linked into that cell. In the case of 3.1.1.2.1, various objects that are defined elsewhere in the document are linked into that particular cell.

Figure 5 shows the hierarchical structure of the DOORS testplan following the Test Cases section. The orange and red triangles in the right column represent links - red for out-links (in this case to the requirements document) and orange for in-links (in this case coming from the coverage objects).

Figure 6 (on the opposite page) shows the coverage object definitions (in this case, just the SystemVerilog covergroups are highlighted), and the right-hand column shows that individual items in the covergroup definitions (such as coverpoints or cross-products) can be linked. The red triangles show these are out-links - which, in this case, are linked into the testcases shown in the previous diagram.

RESULTS

Preliminary analysis shows that the project team is happy with the decision to use these advanced verification methods [5]. They were able to adopt these new methods, as well as the SystemVerilog language, with no real schedule penalties. The overall environment was up and running quickly. The verification team was ready to write tests before RTL design code was available, so the verification team wrote transaction level models for the device and used these for early testbench development and testing. Later, after the design was available, these transaction level models became the reference assuring design correctness.

Through this process, the project team found bugs that, in their words, "would have been tough to catch without randoms." For example, a "FIFO full" problem was detected early in the project that required a complex scenario of reads and writes while the rest of the design was in a specific state.

In addition, the team reported that if they had attempted the traditional approach of directed tests, they would have faced writing and maintaining hundreds of individual tests. This would have become a difficult revision control management problem, as each test needs to be tracked back to individual requirements. In addition, this would have also become a challenging regression environment, with long run-times, difficult job spawning coordination, and cumbersome coverage results merging.

The team also reported more thorough and efficient testing earlier in the project compared to their traditional methodology. In general, the same results are being checked in this methodology vs. their traditional directed test methodology. However, through the use of assertions, RC can now achieve much higher leverage from the creation of any given check. Previously, checks were written as part of a directed test, and were typically associated with the functionality of that directed test. Now, once a check is created, it can be continuously run across all simulations as part of the verification environment rather than being attached to specific directed tests.

Through the use of assertions, they "are checking hundreds of things each clock cycle" across all tests, versus a directed test that would perform these same checks only once, typically at the end of a test. As such, the design is now more thoroughly checked all the time via assertions across all simulations.

While the team was creating the requirements-based assertions and coverage points, they found that some requirements had to be clarified in order to write assertions. The team found it helped to write requirements up-front such that they map well to assertions. In general, the team recommended that the requirements be reviewed with a mapping to assertions in mind during the DO-254 Requirement Review process/cycle.

Initially the team struggled with "what to randomize" for constrained random testing. They found a balance between complete randomization and a highly constrained approach by splitting the verification into six different random tests. Each test randomizes a different aspect of the design behavior. This more easily allowed the team to understand the task at hand and create a workable and efficient environment. To visualize this, refer to Figure 2 and imagine the tree structure being broken down into 6 different trees, each characterizing a different part of the required behavior. This makes the creation of the tree structure easier to comprehend and maintain. This does have the caveat of limiting some testing across the tests, as not all randomizations can take place simultaneously. However, the impact of this tradeoff can be mitigated by intelligently partitioning these tests such that important input concurrencies are contained within the same test.

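One common way to realize that kind of split in SystemVerilog is to keep one stimulus class and derive a small specialization per random test, each pinning down everything except the aspect that test explores. The sketch below uses invented names and only two of the trees; it is a generic illustration of the technique rather than the actual six-test partition used on this project.

// Base stimulus item: all fields are randomizable within their legal ranges.
class dma_stimulus;
  rand bit [6:0] length;
  rand bit [2:0] record_type;
  constraint c_legal { length inside {[1:64]}; record_type inside {[0:5]}; }
endclass

// Tree 1: explore transfer lengths while holding the record type fixed.
class dma_stimulus_lengths extends dma_stimulus;
  constraint c_focus { record_type == 0; }
endclass

// Tree 2: explore record types while keeping transfers short, so the
// record-type combinations are reached quickly.
class dma_stimulus_record_types extends dma_stimulus;
  constraint c_focus { length inside {[1:4]}; }
endclass

Each derived class corresponds to one of the smaller constraint trees: its randomization space is easier to reason about and to cover, at the cost of not crossing every aspect within a single test, which is exactly the tradeoff discussed above.
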
The team also initially struggled to understand if the new testbench was operating as desired. They solved this issue by leveraging the embedded functional coverage points. Analyzing coverage results post-simulation provided the needed visibility.

As expected, the team did need to write some directed tests for conditions which are resistant to random stimulus.

CONCLUSION

The methodology described here allows an advanced verification approach consisting of Constrained Random, Design Intent Specification, and the Total Coverage Model (unified coverage database) to satisfy the DO-254 process requirements: namely repeatability and demonstration of requirement satisfaction.

The use of DOORS as a tool to represent the requirements and test plan documents, and linking this into the features of Questa, means that requirements traceability can be automated.

Integrating the testplan into the Questa Verification Management tool closes the feedback loop, allows the tracking of functional coverage results to the requirements listed in the testplan, and indicates when full coverage is achieved. The ultimate verification goal of a DO-254 based approach is to demonstrate that the requirements have been satisfied with the highest possible confidence. The approach described here enables this goal on larger, more complex designs with increased robustness, earlier identification of bugs, and an overall improvement in efficiency and effectiveness.

XtremeEDA Corporation (http://www.xtreme-eda.com), the leading ASIC and FPGA professional services company, has the key know-how required to address the challenges of applying leading edge verification methodologies to help customers deliver their projects in accordance with the functional and performance attributes specified, to the highest standards possible.

Rockwell Collins (NYSE: COL) is a pioneer in the development and deployment of innovative communication and aviation electronic solutions for both commercial and government applications.

REFERENCES:

[1] Henry Angulo, Asad Khan and Scott Morrison, "Rigorous Automated Verification Yields High Quality Silicon", EDA DesignLine, April 24, 2007, paragraph 4. http://www.edadesignline.com/199200993
[2] P. Marriott, S. Bailey, "Functional Coverage Using SystemVerilog", in Proc. of DVCon 2006, San Jose, CA, USA, February 2006.
[3] Tom Fitzpatrick, "SystemVerilog Assertions Unify Design and Verification", EETimes, October 17, 2003. http://www.eetimes.com/news/design/showArticle.jhtml?articleID=16502229
[4] Mark Glasser, ed., Advanced Verification Methodology Cookbook, Rev. 3.0, Mentor Graphics, 2007. http://www.mentor.com/products/fv/_3b715c/cb_rf.cfm
[5] J. Keithan, D. Landoll, P. Marriott and W.S. Logan, "The Use of Advanced Verification Methods to Address DO-254 Design Assurance", presented at the IEEE Aerospace Conference, March 6, 2008, Bozeman, Montana.

Ending Endless Verification with 0-In Formal
by Harry Foster, Chief Verification Scientist, Mentor Graphics

Dr. Wally Rhines noted during his DVCon 2008 keynote speech that today's approach to verification is a frustrating, open-loop process that often does not end—even after the integrated circuit ships. To keep pace with Moore's law, which has enabled escalating product feature demands, verification efficiencies must increase by at least 100x. Obviously, throwing more engineers and computers at the problem has not provided a scalable solution. The industry must move away from the model that adds more cycles of verification, to a model that adds more verification per cycle (that is, maximizing the meaningful cycles per second). Functional formal verification (such as Mentor Graphics' 0-In™ Formal Verification tool), when effectively used, offers significant improvements in verification productivity. The confusion most engineers face when considering functional formal verification is in understanding how to effectively get started.

Traditionally, applying formal property checking has been viewed as an orthogonal process to simulation-based approaches. However, my philosophy is that the two processes are actually complementary. The key to successfully integrating formal into a verification flow is first understanding the where, when, and how to apply it.

WHERE TO APPLY FUNCTIONAL FORMAL

We begin by understanding where to and where not to apply functional formal verification. In general, design blocks can be classified as either sequential or concurrent, control or datapath, and data transform or data transport. Based on their classification, we can identify types of design blocks amenable to formal and types of design blocks that are not.

Sequential vs. concurrent designs. A key criterion for choosing design blocks suitable for formal is whether the block is mostly sequential (that is, non-concurrent) or mostly concurrent. Sequential design blocks (see Figure 1) typically operate on a single stream of input data, even though there may be multiple packets at various stages of the design pipeline at any instant. An example of such sequential behavior is an MPEG encoder block that encodes a stream of video data. Functional formal verification usually faces state explosion for sequential designs because, in general, most interesting properties involve a majority of the design inputs and state elements within the design block.

Concurrent design blocks (see Figure 2) deal with multiple input data streams that potentially collide with each other.

Figure 2. Concurrent paths

An example is a bus bridge or protocol data link layer block, which essentially transports data packets unchanged from multiple input sources to multiple output sources.

Control vs. data transform vs. data transport blocks. Given the discussion above, the following coarse characterization often helps in determining whether formal is suitable. Design blocks can usually be characterized as control- or datapath-oriented. However, we can further characterize datapath design blocks as either data transform (as shown in Figure 1) or data transport (as shown in Figure 2). Control-oriented blocks are excellent candidates for formal verification. However, when dealing with datapath blocks, there is often considerable confusion about what works and what does not work. I recommend focusing on blocks that involve data transportation and choosing an alternative verification strategy for blocks involving data transformation.

WHEN TO APPLY FUNCTIONAL FORMAL

I have often said, "just because you can do something, doesn't mean you should." And this philosophy certainly extends to planning a project's functional verification strategy. There are many variables that must be considered to ensure achieving maximum productivity, such as the current project team skill set, level of difficulty for applying a process on a particular design, concern and criticality, schedule, and cost.

Let's examine a few of these variables to help us determine when it makes sense to apply functional formal verification, and we will begin by examining project team skill sets.

Can an organization that lacks sufficient skills and experience produce a successful object-oriented constrained-random coverage-driven testbench... and do so in a repeatable way? Probably not until sufficient skills are developed within the organization. Similarly, can an organization that lacks sufficient skills formally prove a complex block, such as a cache coherent controller? The answer again is: probably not until they acquire sufficient skills—which they certainly can do.

What differentiates a successful team from an unsuccessful team is process and adoption of new methodologies and verification methods. Unsuccessful teams tend to approach verification in an ad hoc fashion while successful teams employ a more mature level of methodology that is both methodical and systematic.

To assist an organization in assessing its own maturity level, a maturity model has been created (various levels of maturity are depicted in Figure 3). Organizations that wish to reap verification productivity benefits need to migrate away from ad hoc processes to a higher level of development capability.

Figure 3. Process maturity levels

For example, a level 2 organization begins to establish methodology to manage the development process. Improvements in consistency and predictability can be gained at this level. A shift begins from human-driven unpredictable results to a first-level managed repeatable process. Assertion-based verification (ABV) techniques are often introduced at this level to improve simulation debugging productivity. Obviously, learning to write assertions is a fundamental first step when considering the integration of formal into a functional verification flow.

At the next level of maturity (labeled Defined in Figure 3), processes and methodology are well defined and documented. This level is marked by greater commitment and involvement of management in planning, as well as in defining methods for analyzing risk. At this level of maturity, test plans are systematically and comprehensively defined with clear metrics to ensure that all functionality is verified. ABV is often widely employed. Simple formal verification (for example, proving assertions and clock-domain checking) is generally adopted at this level.

At level 4 (Managed), an organization establishes and statistically analyzes quality and process performance objectives. Emphasis is placed on defining, measuring, and then analyzing metrics, such as assertion density, functional coverage, structural coverage, code coverage, bug curves, and so forth. Advanced functional formal verification is applied to areas of concern or low simulation coverage, which means the organization has developed sufficient skills within a project team to address design complexity (such as introducing abstractions in the property set or design under verification).

At the final level of maturity, an organization allows continuous improvement. In general, the organization has achieved predictable results. The goal at this level is to maximize efficiency of processes with minimal resources.

Having defined these levels of maturity, we now map various complex design blocks into the required organization skills necessary to achieve a 100% proof for all properties defined for a given block (see Figure 4).

Figure 4. Formal verification skills vs. difficulty

So what is the key take-away from Figure 4? If you are an organization at maturity level 1 or 2 with no prior experience in formal verification, you will have success with formal, with minimal advanced skills required, if you select blocks labeled as level 3 in Figure 4. This choice will allow your organization to develop advanced skills so that in the future you can prove more complex blocks (for example, those labeled as level 4 in Figure 4).

With this said, it is important to point out that an organization's goal might be to eliminate as many bugs as possible prior to integrating various blocks into the system- or chip-level simulation environment. For these situations, formal is used not to prove a block, but to flush out bugs. This use of formal only requires level 3 skills, and can improve productivity by eliminating bugs sooner. Hence, to successfully apply formal, you must consider the existing skill set of your organization. Obviously, it should be a goal to continue to mature an organization's process capabilities in order to improve productivity.

HOW TO APPLY FUNCTIONAL FORMAL

There are different degrees (or strategies) with which formal verification can be successfully applied to your designs—ranging from improving coverage and formalizing interfaces to bug-hunting and full proof. However, prior to choosing a strategy, I recommend that you first classify the key properties identified in your verification plan.

To help order your list of properties, answer the following questions:

1. Did a respin occur on a previous project for a similar property? (high ROI)
2. Is the verification team concerned about achieving high coverage in simulation for a particular property? (high ROI)
3. Is the property control-intensive? (high likelihood of success)
4. Is there sufficient access to the design team to help define constraints for a particular property? (high likelihood of success)

After ordering your list, assign an appropriate strategy for each property in the list based on your project's schedule and resource constraints. Your verification goals, project schedule, and resource constraints influence the strategy you select. I recommend you choose from one of the following four strategies.

Full proof. Projects often have many properties in the list that are of critical importance and concern. For example, to ensure that the design is not dead in the lab, there are certain properties that absolutely must be error free. These properties warrant applying the appropriate resources to achieve a full proof.

Bug-hunting. Using formal verification is not limited to full proofs. In fact, you can effectively use formal verification as a bug-hunting technique, often uncovering complex corner-cases missed by simulation. The two main bug-hunting techniques are bounded model checking, where we prove that a set of assertions is safe out to some bounded sequential depth; and dynamic formal, which combines simulation and formal verification to reach deep complex states.

Interface formalization. The goal of interface formalization is to harden your design's interface implementation using formal verification prior to integrating blocks into the system simulation environment. In other words, your focus is purely on the design's interface (versus a focus on internal assertions or block-level, end-to-end properties). The benefit of interface formalization is that you can reuse your interface assertions and assumptions during system-level simulation, dramatically reducing integration debugging time.

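The sketch below illustrates the reuse idea behind interface formalization with a generic valid/ready handshake rule; the module, parameter, and signal names are invented for illustration. The same property is asserted when formally hardening the block that drives the interface, assumed as a legal-input constraint when verifying the block on the other side, and then reused as a checker during system-level simulation.

// Hypothetical handshake rule: once 'valid' is raised it must stay high
// until 'ready' accepts the transfer.
module handshake_rules #(parameter bit AS_ASSUMPTION = 0) (
  input logic clk,
  input logic rst_n,
  input logic valid,
  input logic ready
);

  property p_valid_held;
    @(posedge clk) disable iff (!rst_n)
      valid && !ready |=> valid;
  endproperty

  generate
    if (AS_ASSUMPTION) begin : g_assume
      // Constrains the formal tool's view of the interface inputs.
      m_valid_held : assume property (p_valid_held);
    end
    else begin : g_assert
      // Checks the block that implements the interface.
      a_valid_held : assert property (p_valid_held);
    end
  endgenerate

endmodule

Bound to both sides of the interface, one copy of the rule serves as an obligation for the producer and an assumption for the consumer; when the blocks are later integrated, the assertion form remains in place and flags handshake violations immediately.
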
Improved coverage. Creating a high-fidelity coverage model can be a challenge in a traditional simulation environment. If a corner-case or complex behavior is missing from the coverage model, then it is likely that behaviors of the design will go untested. However, dynamic formal is an excellent way to leverage an existing coverage model to explore complex behaviors around interesting coverage points. The overall benefits are improved coverage and the ability to find bugs that are more complex.

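For example, a single cover property is often enough to name such a corner case precisely so that dynamic formal (or constrained-random simulation) can aim at it. The FIFO scenario below is a generic, invented example rather than one drawn from a specific design.

// Hypothetical corner cases for a FIFO with an occupancy counter 'count'.
module fifo_corner_covers #(parameter int DEPTH = 16) (
  input logic clk,
  input logic rst_n,
  input logic push,
  input logic pop,
  input logic [$clog2(DEPTH):0] count
);

  // The FIFO drains to empty and is refilled on the very same cycle.
  c_empty_refill : cover property (
    @(posedge clk) disable iff (!rst_n) (count == 1) && pop && push);

  // A full-to-empty swing completes within eight cycles.
  c_full_to_empty : cover property (
    @(posedge clk) disable iff (!rst_n) (count == DEPTH) ##[1:8] (count == 0));

endmodule

Once such cover properties exist, a dynamic formal run can use simulation to get near them and a formal engine to close the last few cycles, which is the coverage-closure use model described above.
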
SUMMARY

To achieve 100x improvement in verification productivity, the industry must move away from a model that adds more cycles of verification, to a model that adds more verification per cycle (that is, maximizing the meaningful cycles per second). Functional formal verification (such as Mentor Graphics' 0-In™ Formal Verification tool) is one technology that, when used effectively, can offer significant improvements in verification productivity. The key to success requires understanding what, when, and how to effectively apply formal verification.

Intelligent Testbench Automation Turbo-Charges Simulation
by Mark Olen, Product Manager, Verification, Mentor Graphics

Several years ago advances in testbench automation enabled verification engineers to test more functionality by dramatically increasing the quantity of testbench sequences for simulation. Through clever randomization techniques, constrained by algebraic expressions, verification teams were able to create testbench programs that generated many times more sequences than directed testbench programs could. While actual testbench programming time savings and simulation efficiency were hotly debated topics, few argued that constrained random test generation did not generate orders of magnitude more testbench sequences.

However, addressing the testbench sequence generation challenge through quantitative means (i.e. "more" sequences) caused corresponding challenges during the simulation and debug phases of functional verification. Even when constrained by algebraic expressions, random techniques tend to increasingly generate redundant testbench sequences during the course of simulation, thus more simulators are required to run for longer periods of time to achieve verification goals. In addition, even when using "directed" constrained random techniques, it is difficult to pre-condition testbenches to "target" interesting functionality early in the simulation. The mathematical characteristics of constrained random testing that enable the generation of sequences the verification engineer "hadn't thought of" are the very same characteristics that make it difficult to control and direct the sequence generation process.

As a result, it is not uncommon to see "production" simulation farms of over 100, or even 1000 simulators, running for several days, or even weeks. For a design of even moderate complexity, the theoretical number of sequences can be staggering. Most verification engineers are well aware of the infamous testbench that could consume every computer on Earth, run for millions of years, and still not finish.

But while the focus today on testbench automation is all about languages, the impacts of testbench automation on simulation farms and the debugging process are conveniently overlooked. While it is wasteful for a testbench toolset to generate redundant tests for a single simulator on a single CPU, imagine how much waste occurs in a simulation farm with hundreds of CPUs simulating in parallel, with each CPU having no knowledge of what the others are doing. Today's "state-of-the-art" technique is to assign a different seed to each CPU, to begin random sequence generation from different starting points. But once started, randomized sequence generation runs open-loop, even when algebraically constrained. There's actually no reason for inter-CPU communication during simulation, because randomized sequence generation has no knowledge of what has been previously simulated on its own CPU. Therefore, inter-CPU communication is moot. Hundreds (or thousands) of simulations are left to run overnight, over weekends, or even over weeks, generating redundant sequences at a rate that increases every minute.

The impact on debugging is equally devastating. Imagine returning to the office on Monday morning to view the results of the preceding weekend's simulation run. Suppose that one CPU in a simulation farm of 100 or more CPUs failed after 36 hours of simulation. What is the next step? How does one debug a 36-hour long simulation of random tests? The current "state-of-the-art" technique is to write a script file that terminates each simulation on each CPU, and logs the results, every hour or so. Then the script assigns yet another seed to each CPU, and restarts the simulation for another hour. This reduces the amount of simulation data faced by the verification engineer to an hour or so.

Previous articles in Verification Horizons introduced a recent breakthrough in functional verification called intelligent testbench automation. These articles describe how one of Mentor Graphics' newest product lines called inFact™ iTBA achieves superior functional coverage by algorithmically traversing multiple rule-graphs, and synthesizing testbench sequences on-the-fly during simulation. The rule-graphs are derived from interface descriptions, bus protocols, and functional specifications. And while rule-graphs are considerably smaller than conventional testbenches, they allow large quantities of sequences to be generated. However, unlike traditional constrained random test techniques, rule-graphs enable non-redundant sequence generation, eliminating significant waste of simulation time and resources.

The latest advances in intelligent testbench automation now enable large simulations to be automatically distributed across up to 1000 CPUs, extending non-redundant sequence generation to entire simulation server farms. This massive gain in efficiency is attributable to the inherent architecture of rule-graphs and new advanced traversal algorithms tuned for production applications. Rule-graphs contain a highly compressed description of a simulation "grammar" and "syntax". Traversal algorithms stitch these together into "sentences", complete with stimulus and expects, in real-time for the simulator. Multiple rule-graphs may be instantiated during simulation, enabling generation of sentences that create interesting system-level functional verification scenarios including cross-product scenarios, corner-case scenarios, and more.

In the mathematical sense, each possible sentence can be counted, analyzed, labeled, and categorized without unfolding or decompressing the rule-graph. Thus before a simulation is initiated, inFact can quickly count the number of possible simulation sentences and report it to the verification engineer. This information is valuable to determine the size of the simulation before starting, ensuring that sufficient time and resources are available. In addition, inFact can assign a virtual label to each simulation sequence, providing an efficient accounting system for distributing sequences across multiple simulation CPUs. The simplest of these algorithms is called the modulo-N distribution™ algorithm, which assigns a slice of the universe of sequences to each CPU, where N is determined by the number of CPUs in a simulation farm. In its simplest form, sequences 1, 101, 201, ... through 999,901 are assigned to the first simulation CPU in a farm of one hundred. Sequences 2, 102, 202, ... through 999,902 are assigned to the second simulation CPU in the farm. And accordingly, sequences 100, 200, 300, ... through 1,000,000 are assigned to the last simulation CPU in the farm.

Keep in mind that during simulation, each node also employs a spatial distribution™ algorithm, which distributes each subsequent traversal (and its resulting simulation sentence) broadly across its universe of sequences. As a result, in this case, a simulation universe of one million sequences can be distributed across one hundred simulation CPUs, each executing precisely ten thousand sequences. The spatial distribution algorithm prevents repetition of sequences on any given simulation CPU, and the modulo-N distribution algorithm prevents repetition of sequences across the entire simulation farm. In addition, by performing the entire simulation without unfolding the rule-graphs, the efficiency loss due to overhead is less than 1%. The same simulation of one million sequences that requires one thousand hours of run-time on a single CPU can be completed in just minutes longer than ten hours on a simulation farm of one hundred similar simulation CPUs.

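The arithmetic behind that slicing is easy to picture. The function below is only an illustration of the modulo-N idea in SystemVerilog, written as invented code rather than the actual inFact implementation: CPU k out of N owns every sequence index whose (index - 1) modulo N equals k.

// Illustrative sketch of modulo-N slicing (not the inFact implementation).
// With n_cpus = 100, CPU 0 owns sequences 1, 101, 201, ... and CPU 99 owns
// sequences 100, 200, ..., 1,000,000.
function automatic bit belongs_to_cpu (
  int unsigned seq_index,  // 1-based sequence index in the universe
  int unsigned cpu_id,     // 0-based CPU number
  int unsigned n_cpus      // number of CPUs in the farm
);
  return ((seq_index - 1) % n_cpus) == cpu_id;
endfunction

The spatial distribution described above is a separate, per-CPU concern layered on top of this partition: it controls the order in which each CPU walks its own slice so that successive sequences land far apart in the universe.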

However, the real power of rule-graph accounting for production simulation is not even realized in the above scenario, as it assumes that all one hundred simulation CPUs are equivalent, all are available for distribution 100% of the time, and that all testbench sequences require the same amount of simulation time. In a more realistic situation, not all CPUs in a server farm may be equally configured, not all may be available for the duration of a simulation run, and every testbench sequence may take a different amount of time to simulate. Some CPUs may have faster processors, faster internal busses, and/or more robust memory configurations than others, and some CPUs may not even become available until some time during the simulation. In addition, not all testbench sequences are created equally, as some can require ten times as long to simulate as others.

To address each of these complexities, inFact's Simulation Distribution Manager™ can break up the universe of sequences into smaller virtual slices, and assign a new slice to each simulation CPU when it has completed the previous slice, or when it becomes available altogether. This provides an automatic load-balancing effect, preventing the situation where one simulation CPU finishes quickly and waits idly while others finish. It also uses every available simulation resource efficiently, including CPUs that become available after finishing other jobs.

In one smaller design example, a peripheral controller IP core was verified with four rule-graphs that contained a universe of 210,000 combinations of testbench sequence sentences, which were calculated prior to initiating the simulation run. The rule-graphs included one bus protocol transactor, one standard peripheral master, one standard peripheral slave, and one executive level scenario generator derived from the design's functional specification. Using a Questa™ AFV (Advanced Functional Verification) simulator and an inFact RTS (Run-Time System) on a single CPU, the simulation completed in just over 18 hours. Distributing the same simulation across six CPUs with Questa/inFact pairs completed in a few minutes over 3 hours - nearly linear throughput enhancement, with less than 1% overhead.

And all 210,000 unique testbench sequences were simulated without repetition. Furthermore, after all 210,000 sequences were simulated, the verification engineer still had time available, so the selection of inFact's traversal algorithm was changed from spatial-distribution to constrained-random, and the six Questa/inFact pairs were allowed to run random simulations for another 6 hours. As a result, total simulation time was cut in half (18 hours reduced to 9 hours), while ensuring that all 210,000 sequences were simulated at least one time.

In another design example, a three-layer bus fabric was verified with 21 instantiations of four rule-graphs that contained a universe of approximately 24,000,000 combinations of testbench sequence sentences. The rule-graphs included one bus protocol master transactor instantiated three times, one bus protocol slave transactor instantiated sixteen times, one bus protocol arbiter testbench instantiated one time, and one executive level scenario generator. The transactions were intentionally kept short for the purposes of this verification project, and thus required one CPU with a Questa/inFact pair only two hours to simulate the first 250,000 sequences. At that point the simulation was terminated, and was then distributed across 12 CPUs with Questa/inFact pairs, which subsequently completed in 16 hours - again nearly linear throughput enhancement with less than 1% overhead. And again, all 24 million different sequences were simulated just one time, by virtue of inFact's spatial distribution algorithm.

In summary, the inherent architecture of rule-graphs enables staggering productivity enhancements in functional verification that reach beyond verification engineering, and into production simulation. In one respect a rule-graph is similar to a conventional testbench program, in that it contains information about an interface description, bus protocol, or functional specification. But that's where the similarity ends. A traditional testbench is large, where a rule-graph is small. And a traditional testbench program executes during simulation, where a rule-graph is executed "upon" during simulation, by an intelligent testbench automation toolset like inFact. And inFact's advanced algorithms can not only ensure that no testbench sequences are replicated on a simulation CPU, but they can also ensure that no testbench sequences are replicated across an entire simulation farm.

inFact can also leverage rule-graphs to speed design debugging, without slowing down simulation (or filling up memory) for traditional save and restore functionality. Using additional advanced algorithms, inFact can backtrack through a simulation from a point of DUT failure. If a verification engineer tells inFact how far back in the simulation to replay, inFact can regenerate the exact same sequences in order, leading up to the DUT failure. In a future issue of Verification Horizons, this technique will be discussed with more real-life examples.

Using Questa Multi-View Verification Components and OVM for AXI Verification
by Ashish Kumar, Verification Division, Mentor Graphics

On February 18, 2008, Mentor Graphics introduced a new generation of Verification IP called Multi-View Verification Components (MVC). The MVC was created using Mentor's unique Multi-View technology. Each MVC component is a single model that supports the complete verification effort at the system, transaction, and register transfer levels. The MVC supports automatic stimulus generation, reference checking, and coverage measurements for popular protocols, such as AMBA™ with AHB, APB/APB3, and AXI.

This article highlights the ways to create a reusable SystemVerilog and OVM-based, constrained-random verification environment for AMBA3 AXI using the AXI MVC. More detailed information can be found in the MVC Databook. MVC enables fast test development for all aspects of the AXI protocol and provides all the necessary SystemVerilog classes, interfaces, and tasks required for both directed and constrained-random testing, as required for AXI master and slave unit verification at the RTL and TLM abstraction levels.

Transaction-based verification methodologies, such as the OVM, use TLM interfaces as the communication mechanism between verification components. By using the OVM's layered approach, the testbench code developed can be reused throughout the design cycle, lowering the overall cost typically required by RTL-oriented methodologies.

Figure 1 shows a typical configuration of the AXI MVC, along with some of the verification components delivered with the MVC (including the SystemVerilog AXI interface, driver, responder, monitor, and tasks for directed and constrained-random stimulus generation, as well as coverage groups and configuration).

The AXI MVC includes:

• Complete AMBA AXI protocol verification at the RTL and TLM, with stimulus generation, reference checking, and functional coverage
• Complete support of AMBA AXI protocol specification Rev 1.0, including:
  - Configurable bus widths (address, read data, write data, transaction ID)
  - Configurable number of concurrent burst transactions
  - Out-of-order transaction completion
  - Narrow and unaligned transfers
  - Illegal sequence detection
  - Atomic access checks
• Support for SystemVerilog, AVM, OVM, and TLM
• A verification plan that helps you reach 100 percent AMBA AXI protocol coverage
• Examples of typical protocol verification tasks
• Integrated transaction-based protocol debugging and analysis
• Integration with QVL AXI assertion monitors optimized for formal verification

Figure 1

Users of the OVM have reported a significant reduction in testbench development—around 66 percent less testbench code—yet the SystemVerilog-based testbench provided 3-5 times more testing functionality than RTL-oriented testbenches based on proprietary languages such as Vera or e. Not only do OVM testbenches require less effort to develop, ongoing support and maintenance costs are also greatly reduced.

Figure 2 and figure 3 on the following page show some typical environments for master and slave unit verification enabled by the AXI MVC.

Figure 2. Master DUT implementation

Figure 3. Slave DUT implementation

TESTBENCH EXAMPLE

The following example represents a complete constrained-random AXI verification environment, where an RTL Verilog AXI master unit is tested by a TLM phase-level AXI slave. Any level of AXI verification can be performed easily by executing the run_test or do_test OVM methods. The example also includes QVL AXI assertion monitors that enable users to complement simulation with formal verification.

The top-level environment env instantiates the configuration class (axi_vip_config), the environment class (sample_environment), the QVL assertion monitor (qvl_monitor_wrapper), and the design as an RTL master (verilog_master_wrapper). It also configures the various components in the initial block. Below is the sample_environment class used in our example. It shows some of the internals of the AXI MVC. The MVC library provides a number of examples of typical use-case scenarios using the various components, interfaces, and OVM methods.

module env;

  parameter ADDR_WIDTH  = 32;
  parameter RDATA_WIDTH = 32;
  parameter WDATA_WIDTH = 32;
  parameter ID_WIDTH    = 2;

  bit clk;
  bit reset_n;

  axi_vip_config #(ADDR_WIDTH, RDATA_WIDTH, WDATA_WIDTH, ID_WIDTH)
    axi_config;

  sample_environment #(ADDR_WIDTH, RDATA_WIDTH, WDATA_WIDTH, ID_WIDTH)
    env;

  AXI #(ADDR_WIDTH, RDATA_WIDTH, WDATA_WIDTH, ID_WIDTH)
    axi_if(clk, reset_n);

  // User DUT
  verilog_master_wrapper #(ADDR_WIDTH, RDATA_WIDTH, WDATA_WIDTH, ID_WIDTH)
    AXI_master (.iAXI_imaster_mp(axi_if.iAXI_if.master_mp));

  qvl_monitor_wrapper #(ADDR_WIDTH, RDATA_WIDTH, WDATA_WIDTH, ID_WIDTH)
    QVL_formal_monitor (.iAXI_if(axi_if.iAXI_if));

  initial
  begin
    bit okay = 1'b0;
    axi_config = new();
    // Master is RTL, slave is TLM
    axi_config.m_master_map = RTL;
    axi_config.m_slave_map  = TLM;
    // VIP generates clock and reset
    axi_config.m_clock_source = TLM;
    axi_config.m_reset_source = TLM;
    axi_config.m_write_data_before_addr     = 0;
    axi_config.m_write_addr_to_data_mintime = 0;
    axi_config.m_write_data_to_addr_mintime = 0;
    axi_config.per_instance  = 0;
    axi_config.coverage_name = "coverage";
    axi_config.this_axi_if   = axi_if;
    axi_config.configure_interface(okay);
    set_config_object("*", AXI_env_config, axi_config, 0);
    env = new("env");
    if (okay == 1'b0)
      ovm_report_error("env", "Could not configure interface. Test will not run");
    else
      env.do_test();
  end
endmodule
class sample_environment #(int ADDR_WIDTH = 8, int RDATA_WIDTH = 8,
                           int WDATA_WIDTH = 8, int ID_WIDTH = 4) extends ovm_env;

  axi_phase_responder      #(ADDR_WIDTH, …) responder;
  axi_write_address_wait   #(ADDR_WIDTH, …) write_addr_wait;
  axi_read_address_wait    #(ADDR_WIDTH, …) read_addr_wait;
  axi_write_data_wait      #(ADDR_WIDTH, …) write_data_wait;
  axi_monitor              #(ADDR_WIDTH, …) monitor;
  axi_wait_coverage        #(ADDR_WIDTH, …) wait_coverage;
  axi_transaction_coverage #(ADDR_WIDTH, …) transaction_coverage;
  axi_phase_coverage       #(ADDR_WIDTH, …) phase_coverage;
  axi_phase_slave          #(ADDR_WIDTH, …) slave;
  axi_phase_scoreboard     #(ADDR_WIDTH, …) scoreboard;
  typedef axi_vip_config   #(ADDR_WIDTH, …) axi_vip_config_t;
  virtual AXI              #(ADDR_WIDTH, …) axi_if;

The user typically attaches design-specific coverage objects to the monitor using the standardized features of OVM.

The OVM build() method creates the various verification components.

  function void build();
    super.build();
    responder       = new ("responder", this);
    write_addr_wait = new ("write_addr_wait", this);
    read_addr_wait  = new ("read_addr_wait", this);
    write_data_wait = new ("write_data_wait", this);
  endfunction

OVM connect() connects the components. Note that the AXI virtual interface is already in the configuration class and can be retrieved through the OVM configuration mechanism.

  function void connect();
    axi_vip_config_t axi_vip_config_local;
    super.connect();
    axi_vip_config_local = get_axi_config();
    axi_if = axi_vip_config_local.this_axi_if;
    monitor.axi_if         = axi_if;
    responder.axi_if       = axi_if;
    wait_coverage.axi_if   = axi_if;
    write_addr_wait.axi_if = axi_if;
    read_addr_wait.axi_if  = axi_if;
    write_data_wait.axi_if = axi_if;
    slave.axi_if           = axi_if;
    slave.axi_write_address_port.connect(
      responder.axi_phase_write_address_export);
    …
  endfunction

The MVC makes it easy to add user-specific scenarios and to control the randomization. A number of SystemVerilog tasks are provided. For example:

  // To create random write transactions, the following AXI MVC task can be used
  task rand_write_transaction(
    input  this_axi_request_t  gen = null,
    output this_axi_response_t resp
  );

  // Controlled read transaction used for directed tests
  task put_read_request(
    input addr_t      address,
    input id_t        id,
    input axi_len_e   len,
    input axi_size_e  size,
    input axi_prot_e  prot,
    input axi_cache_e cache,
    input axi_lock_e  lock,
    input axi_burst_e burst
  );
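As a purely illustrative sketch, a directed test might call the read task above along the following lines. The scope through which the task is reached (shown here as a plain task call) and the exact enumeration literals are assumptions on our part; the literals are borrowed from the transcript example shown later in this article, and the MVC Databook is the authority for the real names.

  // Hypothetical directed-test fragment; literal and scope names are illustrative only.
  initial begin
    // one 16-beat incrementing read burst of 4-byte transfers from address 'h170
    put_read_request(
      'h170,               // address
      1,                   // id
      AXI_LENGTH_16,       // len
      AXI_BYTES_4,         // size
      AXI_NORM_SEC_DATA,   // prot
      AXI_NONCACHE_NONBUF, // cache
      AXI_NORMAL,          // lock (assumed literal)
      AXI_INCR             // burst
    );
  end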

LOGGING, VISUALIZATION, AND DEBUG WITH AXI MVC

After compilation, the testbench initializes and the verification starts. If the AXI interface was configured successfully, this is flagged by the bit okay. As verification progresses, the MVC logs AXI transaction activity to the Questa transcript with all relevant details, such as AXI burst mode, address, and atomic mode. The OVM reporting functionality makes it easy to control and integrate the logged information into, for example, the overall verification management environment.

For example, the transcript output for a single AXI read transaction looks like this:

  # OVM_INFO @ 4565: env.monitor [AXI_TRANSACTION_MONITOR] (axi_transaction_mon) {
  # ------------------------------------------------------------------------------------------------
  # Name                 Type         Size  Value
  # ------------------------------------------------------------------------------------------------
  # <unnamed>            axi_request  -     @{} 18446744073709+
  # Transaction Type     string       8     AXI_READ
  # Address              integral     32    'h170
  # Transaction ID       integral     2     'h1
  # Transaction Length   string       13    AXI_LENGTH_16
  # Transaction Size     string       11    AXI_BYTES_4
  # Protection Mode      string       17    AXI_NORM_SEC_DATA
  # Cache Mode           string       19    AXI_NONCACHE_NONBUF
  # Burst Mode           string       8     AXI_INCR
  # Atomic Access Mode   string       10    AXI_NORMAL
  # ------------------------------------------------------------------------------------------------
  # }

The MVC also extends Questa's transaction displaying capabilities with AXI protocol specific debugging, see figure 4. The MVC keeps track of any AXI protocol activity (for example, transactions started and expected responses) between any layers of the AXI protocol, enabling the user to always work at the highest level of abstraction. However, when unexpected errors occur, determining the cause and effect between high-level transactions and low-level pin activity is greatly improved.

The AXI MVC also includes a verification plan and an open source SystemVerilog coverage object, which the user can tailor to his or her particular application to get protocol specific coverage. Figure 5 shows the verification plan and coverage model loaded into Questa's verification management environment.

SUMMARY

With the AXI MVC, the OVM, and less than 100 lines of SystemVerilog code, users can now create a complete AMBA3, constrained-random, AXI verification environment that can be reused to verify RTL and TLM AXI master and slave units. More details on the individual capabilities of the MVC and the Questa Verification Platform can be obtained by contacting your local Mentor Graphics sales office. In an upcoming article we will show how easily the environment can be extended to include verification of AHB and APB AMBA sub-systems, creating a complete, reusable, constrained-random, AMBA SoC verification environment using OVM.

What is a Standard?
by Mark Glasser, Mentor Graphics Corporation & Mark Burton, GreenSoCs

The engineering and computing worlds are filled with standards—from COBOL to SystemVerilog, from RS232 to AMBA. As engineers, not a day goes by when we don't apply a standard of some sort in our work.

What makes a standard a standard? The simple but maybe not so obvious answer is that something is a standard if everyone agrees it is. Is that enough? Who is everyone? To answer these questions we'll take a brief look at a few standards and see how they came to be considered standards.

In the functional verification world, the Open Verification Methodology (OVM) was recently released as a joint production of Cadence Design Systems and Mentor Graphics Corporation. As a verification methodology for SystemVerilog users [1], OVM generated a lot of buzz at the recent DVCon conference in San Jose, CA. Although just released the first week of January 2008, as of the end of May over 3000 people have downloaded copies. In this article we show parallels between OVM and other well known standards and argue that OVM is on the same trajectory toward standardization.

TYPES OF STANDARDS

There are four types of standards [2], which break into two main categories — de facto and de jure. De facto standards are either sponsored by a company or organization or un-sponsored. When a company makes a donation of previously proprietary technology as a standard, this would be a sponsored standard. Verilog is an example of a sponsored standard in the EDA domain. It was initially developed by Gateway Design Automation and then Cadence Design Systems. It was a de facto standard prior to Cadence's sponsorship of it as a standard to OVI (which later became Accellera). Linux, as we will see later, is an un-sponsored de facto standard.

De jure standards can either be legally enforced or simply agreed upon. For instance, it is illegal to sell an electrical appliance in the UK that does not have a "standard" compliant plug on it, in accordance with BS 1363-4:1995. On the other hand, ASCII, the character set used by most computers, is an ANSI de jure standard; yet it would be perfectly legal to sell a computer that did not use it (though Lyndon B. Johnson would not have allowed the US government to buy such a machine). However, very few, if any, actually do.

De facto standards typically have a loyal group of users who rely on them in their work and want to see them remain viable and stable. They must be developed quickly and be responsive to change. In contrast, a de jure standard is typically developed by a committee that deliberates over the details and produces a document or other deliverable (such as a reference implementation), which represents the results of their deliberations. In both cases, the discussion and development may follow more or less rigid rules. For instance, development in the Debian branch of the Linux community is quite tightly managed even though the Linux operating system is a de facto standard.

A de facto standard can become a de jure standard. For example, Cadence's Verilog language, which started as a sponsored de facto standard, is now an agreed upon de jure standard, ratified by the IEEE. In fact, this progression through the types of standards is quite common.

EXAMPLES OF STANDARDS

VHDL is an example of a de jure standard. In the early 1980s, the US government was looking for a vendor- and tool-independent HDL to enable second sourcing of IC designs. The development of the VHSIC (Very High Speed Integrated Circuit) Hardware Description Language was commissioned, and VHDL IEEE-1076-1987 was mandated by the US Department of Defense in document DOD 454 as the medium for all IC designs to be delivered as part of military contracts [3].

Linux, a well known de facto standard, started its life as MINIX, a simple, UNIX-like operating system written by Andrew Tannenbaum for the purposes of educating his students about how operating systems work [4]. Tannenbaum made the source code (in C) freely available. Linus Torvalds was one of the people who downloaded MINIX and began to tinker with it. By 1991, he wrote his own MINIX-like OS and released it as open source code under the GNU GPL license. Linux has since grown and become an industrial-strength body of code upon which countless applications (and fortunes) have been built.

Like Verilog, the C programming language has lived its life both as a de facto standard and a de jure standard. The C language appeared in 1973 as a derivative of BCPL [5]. In 1978 the book The C Programming Language [6] was published after C had already been in use for 5 years. It wasn't until 1983 that ANSI formed the XJ311 committee to build a ratified standard for C. The committee finished its work in 1989, producing the

X3.159-1989 standard for the C language. In the approximately 16 years of its existence before becoming a ratified standard, C was already one of the most important programming languages ever invented. By the time the standard was completed, many millions of lines of C code were in use in production systems all across the world, many books on C were published, and the C language was already a staple of college and university computer science and engineering programs.

KEYS TO A SUCCESSFUL STANDARD

How did C and Linux become pervasive as standards without the benefit of a recognized standards body behind them? Conversely, would VHDL have even existed, much less enjoyed any popularity as a design medium, had its use not been mandated by the US Government?

Within the story of C is a lesson about why some so-called standards fail to wear that mantle. In 1975, the US Department of Defense set up a "standards" organization called the High Order Language Working Group. The intent was to devise a language for use within the US government for all embedded systems. The language Ada was the result. But while Ada was a modestly popular programming language, touted as self-documenting and highly error resistant, the "standard" was short-lived as the more popular C became the de facto standard. As we said at the top of this article, something is only a standard if everybody agrees it is. Sowa describes this phenomenon in his short article "The Law of Standards" [7].

Sponsored standards often face an uphill battle for community acceptance due to a perception of openness, or lack thereof. Consider, for example, Microsoft's proposal of OOXML as an open, de facto standard. A quote from Sarah Bond, platform strategy manager for Microsoft, rather understates the case. She said, "Perhaps Microsoft hasn't communicated as best as it could have about the openness of OOXML" [8].

Acceptance of a standard requires the perception, as much as the reality, of accessibility. The fear of "lock in"—whether real or imagined—can be as damaging to a fledgling standard as poor licensing. Of course this is an old story. In 1975, Sony tried to introduce a sponsored de facto standard. It offered the standard to its rivals, and while there were reports of licensing issues, what caused the demise of Betamax at the hands of the inferior VHS format seems to have more to do with perception than anything else.

As to why C and Linux became pervasive as standards without the benefit of a recognized standards body, we can observe from history that there are at least three key features of a successful standard:

1. They provide value to their users
2. They are easily accessible and applicable (well documented)
3. And…they are fun!

C and Linux, along with standards such as HTML, SMTP, XML, DNS, and a long list of many others, became de facto standards without any organization behind them or long before any standards body became interested in them because they captivated the imaginations of their users, making them fun to use!

People in search of solutions for various kinds of engineering problems, upon learning about these incipient standards, had an "aha! moment." They quickly realized, standard or no, that these facilities provided value. They didn't require a standards body or a government mandate to tell them this was something they needed.

For example, The C Programming Language is highly readable, a departure from the typical compiler reference manual of the time. The book presents a straightforward model of the language which is easily grasped. Readers could become reasonably proficient in C through self-study. It's not clear whether the popularity of the book caused C compilers to proliferate or whether the availability of the compiler motivated people to seek out the book. In either case, the openness of the language definition via the book and the freely available compilers contributed to the widespread proliferation of C.

The latter is another characteristic de facto standards typically have in common: they are freely available, often in open source form. It is likely that we would never have heard about Linus Torvalds or his operating system had potential users not been able to download a copy and use it. Not only could they download it freely, but because they had the source code, they could port it to various machines, augment it with new features, and fix bugs. They could not only get excited about the idea of a free UNIX-like operating system, they could control it and apply it to suit their needs.

BIRTH OF A NEW STANDARD?

Clearly, OVM has captured the imagination of the verification community. How can we account for this excitement? After all, it's not the first entry in the SystemVerilog verification methodology arena. VMM and AVM have been available for several years and each has enjoyed success within the verification community.

It is precisely because the OVM shares many characteristics of well known de facto standards, such as C and Linux. It is available in

open source form under the Apache-2.0 license. This license provides protection for the copyright holders but imposes very few restrictions on its licensees. As with Linux, users can control it and modify it to suit their needs.

The OVM is supported through a unique collaboration between Cadence and Mentor Graphics, two of the largest producers of verification tools. Thus, OVM is not proprietary to any one company, which is an attractive proposition to many users. It is a sponsored de facto standard in the making. As such, we can expect it to be dynamic and its development swift. Companies that are reluctant to invest in writing code using a proprietary language or library can now avoid the problem of feeling "locked in" to a particular vendor when they use OVM.

The recent formation in Accellera of the Verification Intellectual Property Technical Subcommittee (VIP-TSC) does nothing to alter the trajectory of OVM's rise as a de facto standard. Because OVM is open source, it will follow the trajectory of similar open source tools. Like the C language, OVM will ultimately be strengthened by a standardization effort by providing short-term interoperability and a longer-term migration path from other, perhaps proprietary, methodologies to the OVM.

That OVM will become a de facto standard for building testbenches is almost assured based on a review of the history of computing standards. History shows us that the standards that have thrived are those that effectively solve a common computing problem, are not proprietary, and are open source. Clearly OVM has achieved the market presence and momentum reflective of this pedigree, and the formation of the VIP-TSC is further evidence of this. Community participation in the formulation of a standard will protect users' legacy investments while facilitating the growth of OVM as the de facto standard verification methodology.

NOTES

1. OVM World, http://www.ovmworld.org/
2. Takanori Ida, Evolutionary stability of de jure and de facto standard, Konan University, Faculty of Economics, http://www.econ.kyoto-u.ac.jp/~ida/3Kenkyuu/3Workingpaper/WPfile/2000-2001/standards.pdf
3. Coehlo, David R., The VHDL Handbook, Springer, 1989
4. Hasan, Ragib, The History of Linux, 2002, Department of Computer Science, University of Illinois at Urbana-Champaign, https://netfiles.uiuc.edu/rhasan/linux/
5. Ritchie, Dennis M., The Development of the C Language, 1993, http://cm.bell-labs.com/cm/cs/who/dmr/chist.html
6. Kernighan, Brian W., Ritchie, Dennis M., The C Programming Language, 1978, Prentice Hall, Englewood Cliffs, NJ
7. Sowa, John F., The Law of Standards, http://www.jfsowa.com/computer/standard.htm
8. ZDNet, Proprietary past looms over Microsoft OOXML hopes, February 28, 2008, http://news.zdnet.co.uk/itmanagement/0,1000000308,39359136,00.htm

Mark Glasser is a verification technologist at Mentor Graphics and


Mentor’s OVM project leader. He is also one of the developers of AVM
and the primary author of the AVM Verification Cookbook. Mr Glasser
can be reached at mark_glasser@mentor.com.

Dr. Mark Burton is the founder and Managing Director of GreenSoCs.


He is the chair of the OCP-IP SLD WG and is active in the ESL,
SystemC, and Open Source communities. Dr. Burton can be reached at
mark.burton@greensocs.com.

Firmware Driven OVM Testbench
by Jim Kenney – SoC Verification Product Manager, Mentor Graphics

The Open Verification Methodology promotes a well defined SystemVerilog transaction-level interface, inviting integration of a host of verification technologies. Firmware has proven effective for functional verification of embedded hardware, so it follows that OVM integration of a firmware execution environment will advance the verification of embedded systems. To this end, Mentor Graphics has added OVM-compliant interfaces to the Seamless® HW/SW co-simulation tool. This article covers the firmware execution modes, supported processors, and interface points of the Seamless/OVM integration.

MAKING SEAMLESS OVM COMPLIANT

Seamless supports HW/SW co-simulation by coupling a hardware and a software execution environment. Hardware is modeled in the logic simulator while software is typically run on a target instruction-set simulator (ISS) for the embedded core. The key to accelerating software execution is to isolate it from the much slower logic simulator. Much of the Seamless functionality is dedicated to this isolated, yet integrated, SW/HW execution.

The construction of Seamless processor models is key to this isolation/integration and makes possible the addition of OVM interfaces. Processor function is split down the middle with transactions connecting the two halves. The ISS executes code while the Bus Interface Model (BIM) converts these transactions to pin-level bus cycles. Most processors have multiple buses; a BIM for a given core supports all of its bus interfaces. To make Seamless OVM compliant, we've tapped into the transaction interface between the ISS and BIM. You can receive OVM transactions from firmware executing on the ISS, or you can connect an OVM testbench and drive transactions to the BIM where they emerge as pin-level bus cycles.

HOST-CODE EXECUTION

There are two primary ways to execute firmware in this environment: host-code execution and target instruction-set simulation (ISS). With host-code execution, the firmware is compiled to run native on the workstation, which most likely would be a Pentium-based machine. Seamless includes an exception handler in the form of a shared library that is linked with your firmware. Each time your code initiates a read/write access to the hardware, an exception will occur, which Seamless intercepts and routes to the design modeled in the logic simulator. This implicit-access mode eliminates the need to call a function that explicitly sends read/write transactions to the hardware. Implicit access enables the same code to be run in simulation and on a live target without modification.

Running firmware native on the host is the fastest mode, executing at Pentium speeds of ~2 GHz. However, it's worth noting that the logic simulator, even at the transaction level, is dramatically slower than most firmware execution modes and will dominate overall runtime.

TARGET INSTRUCTION-SET SIMULATOR

Host-code execution achieves speed by abstracting away any notion of the embedded processor. A more realistic approach is to run code on a cycle-accurate ISS. Here the target compiler is used to create a binary that executes on the ISS, which accurately models structures like caches, pipelines, and tightly coupled memories. With an ISS, the user can choose to simulate instruction fetch and memory reference cycles in the hardware description or to speed execution by hiding them from the logic simulator. A cycle-accurate ISS will typically run 100K instructions/sec, much slower than running native on the host. ISS performance becomes a runtime issue only if your hardware description simulates above this speed. It requires a very abstract hardware representation to achieve 100K clocks/sec. Most hardware simulations run at a fraction of this speed.

FIRMWARE DEBUG

Source-level SW debug is available for all software execution modes. Since host-code runs native on the workstation, you can use your debugger of choice. Most instruction-set simulators include the EDGE source-level debugger. ARM models are also integrated with the RealView debugger.

In general, RTL processor models don't support source debug, but Questa Codelink includes patented technology that delivers source-level debug for MIPS and ARM RTL processor models. Codelink adds software debug to the ModelSim/Questa user interface and connects directly to existing VMC and RTL models as delivered by ARM and MIPS. Figure 1 features a screen shot of Questa Codelink connected to an ARM 926 Design Simulation Model (DSM).
Figure 1: Questa Codelink source-level firmware debugger featuring Source, Callstack, Variable, and Register windows.

INTERFACING SW GENERATED CYCLES WITH THE LOGIC SIMULATOR

Read/write cycles generated by firmware and routed to the logic simulator can be applied as bus transactions or pin-level bus cycles. Seamless includes a SystemVerilog 1.0 TLM interface to connect your transaction-level bus components. For a pin-level bus interface, detailed Bus Interface Models are available for most ARM, MIPS, Freescale, IBM PowerPC, and Tensilica processors. Mixed interface simulation, where some functions connect to the processor at the transaction level and others at pin level, is also supported (Figure 2). A user-defined memory map is used to route read/write cycles to the desired interface; therefore no changes to your firmware or testbench are needed to swap between transaction or pin-level stimulus.
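As a purely illustrative sketch of what connecting such a transaction-level component might look like, the fragment below binds an OVM blocking put port on a testbench component to a TLM export on a bus-interface component. The class, handle, and port names on the Seamless side (cpu_bim, trans_export) and the bus_transaction type are hypothetical placeholders chosen for this sketch, not actual Seamless names; the Seamless documentation defines the real interface.

  // Illustrative only: all Seamless-side names below are hypothetical placeholders.
  class fw_stimulus extends ovm_component;
    `ovm_component_utils(fw_stimulus)

    ovm_blocking_put_port #(bus_transaction) bus_port;  // bus_transaction: assumed user-defined class

    function new(string name, ovm_component parent);
      super.new(name, parent);
      bus_port = new("bus_port", this);
    endfunction

    task run();
      bus_transaction tr = new();
      // ...fill in address/data here...
      bus_port.put(tr);   // the transaction travels to whatever export this port is connected to
    endtask
  endclass

  // In the environment's connect() method (hypothetical handle names):
  //   stim.bus_port.connect(cpu_bim.trans_export);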

Figure 2: Seamless OVM depicting multiple firmware execution options, transaction and pin-level stimulus, and the Coherent Memory Server for speeding firmware execution.

ACCELERATED MEMORY ACCESS

At the heart of Seamless is a Coherent Memory Server. This device acts as a unique storage array for memories modeled in the logic simulator. Memories that are frequently accessed by the processor can have their storage arrays mapped to this server in place of the logic simulator's native storage array. Firmware executing on the host or ISS can access this storage thousands of times faster than a comparable simulation memory cycle. This not only speeds firmware execution, it enables rapid initialization, loading, and examination of memories modeled in the logic simulator. For example, a dual-port video frame buffer that is loaded by the CPU and read by a HW image processor can be loaded in zero simulation time.

A user-defined memory map directs CPU memory cycles to either the memory modeled in the logic simulator or its storage array held in the Coherent Memory Server. The map can be changed on the fly during simulation. A typical scenario is to send a few memory cycles to the logic simulator to validate the memory subsystem. These cycles are slow but necessary to ensure the processor can access boot flash and main memory. From then on, this address range is served by the Coherent Memory, speeding access by three to four orders of magnitude.

Coherent Memory is available to all of the stimulus methods. While host code and an OVM testbench don't fetch instructions and hence don't need that function accelerated, they do load buffers in the design. Memory contents can be loaded in zero simulation time and then accessed by an IP block modeled in the logic simulator. Buffers loaded or altered by the hardware can also be examined in zero simulation time since the testbench has access to the high-speed port on the Coherent Memory Server.

SUPPORTED PROCESSORS
Mentor has developed a comprehensive offering of Seamless processor models. Because the OVM transaction interface is implemented
in the Seamless kernel and not in each processor model, all models can
be used with OVM. Supported cores include the following:

• ARM7, ARM9, ARM11, and Cortex families


• MIPS M4K, 4KE, 24K, 34K, and 74K
• Freescale PowerPC and PowerQuicc families
• Tensilica Xtensa and Diamond families

Each of these models includes an ISS, BIM, and graphical source-


level debugger. All can be used to generate OVM transactions from
embedded code and to convert OVM transactions to pin-level bus
cycles.

SUMMARY
Seamless enables straightforward integration of firmware stimulus
into an OVM testbench. Multiple firmware execution modes provide
a continuum of performance/accuracy choices. Firmware-generated
stimulus can be applied to the design as transactions or pin-level bus
cycles. An OVM TLM testbench can be used to drive available Bus
Interface Models that convert transactions to pin-level bus cycles.
Seamless supports mixed-mode stimulus where an address map
applies processor cycles to the design as OVM transactions or pin-level
bus cycles. Source-level firmware debug is available for all processor
representations, including the Questa Codelink debugger for RTL
processor models.

Design Verification Using Questa and the Open Verification
Methodology: A PSI Engineer’s Point of View
by Jean-Sébastien Leroy, Design Verification Engineer & Eric Louveau, Design Verification Manager, PSI Electronics

ABSTRACT

This article discusses, from a design verification engineer's point of view, the benefits of using Questa and the Open Verification Methodology in the verification process. This article shows how Questa and its advanced features, such as integrated verification management tools and integrated transaction viewing, can help to achieve verification plan targets.

Questa also enables custom flow integration, such as the PSI-E verification management flow. Coupled with a methodology like OVM, it is possible to deliver a complete reusable verification environment in an efficient way, whatever the project. OVM is a standardized verification methodology, enabling the verification environment to be reused, even in a different EDA vendor flow.

OVM provides many verification components and gives the verification engineer a way to think about how to verify and what to verify instead of thinking about how to write verification components. The only things to code are drivers, which depend on the device under test. In this article, we are using a simple example: the test of a UART IP connected to the APB bus of a LEON-2 system-on-chip. We will explain how verification was done before using OVM and a Hardware Verification Language. We will then explain how OVM and Questa help us to achieve better results while speeding up the verification process by using the SystemVerilog DPI feature to replace the fully-functional LEON processor model and by re-using our HDL verification IPs.

OVERVIEW OF NEW QUESTA FEATURES

Following is a very short overview of the most interesting features for System-on-Chip verification. Questa enables transaction viewing and recording (see figure 1), achieving easier debugging of buses and protocols.

Questa comes with new verification management tools, enabling strong coupling between code coverage, functional coverage, simulation runs, and the verification plan. These tools support many plan file formats, such as Excel spreadsheets or GamePlan files.

The latest version of Questa integrates support for the Universal Power Format for power-aware simulation.

Figure 1-Transaction viewing in the wave window

PRESENTATION OF THE OPEN VERIFICATION METHODOLOGY

The Open Verification Methodology is the result of joint development between Cadence and Mentor Graphics. OVM is a completely open library and proven methodology based on the Universal Re-use Methodology from Cadence and on the Advanced Verification Methodology from Mentor Graphics. OVM has been developed in order to provide true SystemVerilog interoperability and to facilitate the development and usage of plug-and-play verification IP written in SystemVerilog, allowing communication with foreign languages such as SystemC and e. OVM addresses some issues encountered when using proprietary languages, such as:

• Different language subsets
• Incompatible VIP interfaces
• Vendor-dependent libraries/methodologies
• Prohibitive licensing

OVM provides a true open-source (Apache-2.0) library and methodology written in SystemVerilog that can run on any IEEE-1800 compliant simulator. OVM ensures interoperability across simulators and with other high-level languages and enables verification IP "plug-and-play" functionality. OVM is based on the proven URM and AVM methodologies and therefore incorporates best practices from more than 10 years of experience.

OVM provides base classes for verification environment elements (see figure 2) such as:

• Monitors, through the ovm_monitor class
• Drivers, through the ovm_driver class
• Sequencers, through the ovm_sequencer class
• Scoreboards, through the ovm_scoreboard class
• Random generators, through the ovm_random_stimulus class

Figure 2-OVM library (source www.ovmworld.org)

AUTOMATED INTEGRATION OF THE VERIFICATION PLAN IN THE VERIFICATION FLOW

PSI Electronics has developed a custom verification management flow for easier and quicker achievement of verification goals. This flow is based mostly on open-source alternatives, except for the Jasper GamePlan software, which is not open source but is freely available for download. The flow enables original and pleasant viewing of the verification plan using mind-mapping and icon classification, and provides back-annotation of metrics results.

The PSI-E flow is composed of four steps:

• Moving the specification and the features to verify into a verification plan in FreeMind
• Converting the FreeMind file format into the GamePlan file format using a PSI-E XSLT style-sheet
• Running simulation and retrieving coverage results in GamePlan using the Questa UCDB
• Back-annotating results in FreeMind using an XSLT style-sheet

The first step of the flow uses FreeMind (see figure 3), an open-source Java-coded mind-mapping application.

Figure 3-A verification plan in FreeMind

Each key feature is classified using icons:

• A property is represented using a pencil icon
• A testcase is represented using a wizard icon
• A pass result is represented using an OK icon
• A fail result is represented using a cancel icon
• A coverage result is represented using a magnifier icon
• An assumption is represented using an idea icon
• A question is represented using a help icon
• A high-level requirement is represented using a password icon

Notes can also be added to each feature using the post-it icon.

The second step is based on the XML standard and conversion functionality. XML provides a way to convert files into another format using an XSL style-sheet based process (see figure 4).

Figure 4-XML conversion mechanism

The third step uses the Questa verification management feature to define relations between verification plan items and functional/code coverage results. This is easily done thanks to the verification management tool of Questa, which is able to read a GamePlan verification plan.

The last step consists of converting the annotated GamePlan verification plan back into a FreeMind map. This is done by using an XSLT style-sheet one more time.

Because all files are written using the XML standard, it is possible to export the back-annotated verification plan into several different file formats, such as HTML for a wiki or Microsoft XML for a Word report. It is also possible to imagine completely automatic updating of a website or a collaborative content management system.

EXAMPLE USED IN THIS ARTICLE

In this article, we will discuss the previously described methods and tools with the help of a concrete example based on a system-on-chip platform developed at PSI Electronics. This platform is based on the open-source LEON-2 processor from Gaisler, extended with custom IPs designed at PSI Electronics such as a VGA controller and an AES cryptographic core (see figure 5). Various operating systems, such as eCos and Linux, have been ported to this platform by PSI Electronics.

Figure 5-PSI Electronics platform overview (source Gaisler)

We will concentrate on the UART APB slave in the following. The UART is composed of an APB slave interface, a serial port controller for hardware flow control, a baud-rate generator, and four registers: a hold register and a shift register for the receiver side, and a hold register and a shift register for the transmitter side (see figure 6).


Figure 6-UART block diagram (source Gaisler)

The UART contains a control register that enables configuration of the device and a status register that gives information about the operation of the device, such as break received, overrun, parity error, transmitter hold register empty, and so on.

The UART supports data frames with 8 data bits, one optional parity bit (odd or even), and one stop bit. To generate the baud rate, the block includes a programmable 12-bit clock divider. Hardware flow control is supported through the RTSN/CTSN hand-shake signals. The UART includes a verification facility provided through a loopback mode that internally connects the RXD and TXD signals together.

In order to test the IP, a verification plan has been defined in FreeMind. Each feature to verify, and the corresponding testcase, have been written. Examples of tests exercised on the IP are verifying register values at power-up, verifying reading/writing of registers, verifying transmissions at different baud rates, and so on (see figure 7).

Figure 7-Verification map

HDL-BASED VERIFICATION

In order to verify our UART, we need to exercise some stimuli on the IP. More precisely, we need to be able to send and receive data at different baud rates and to inject errors. This part of the job is done by a Bus Functional Model, referred to as a BFM in the following. Bus functional models are simplified simulation functional models that accurately reflect the behavior of a device without modeling its internal mechanisms. One major reason to use a BFM to test the UART is that when testing using loopback mode, the receiver and transmitter share the same clock and are in some sense synchronous. This can hide some bugs. With a BFM, transfers are really asynchronous and baud-rate generation bugs can be detected.

Our BFM is written in VHDL and can generate, in a configurable manner, all possible data frames of the protocol, with configurable baud rate, odd/even/no parity bit, and one/two stop bits. Because we need to exercise all types of stimuli on our device, our BFM can also generate false frames such as parity bit errors, framing errors, or breaks. The BFM can also send text files.

The verification is done using a processor-driven methodology. We are assuming that the CPU is bug-free, or all of the following would be meaningless. Tests are written in ANSI C in order to apply stimuli to the IP under test and to retrieve results.

Putting it all together, we obtain a testbench that contains the system-on-chip, memory files containing data and compiled programs that will be fed into the memories, and the UART BFM (see figure 8).

Figure 8-Testbench overview

The processor-driven methodology has many advantages. There is no need to know or learn a verification language. You just need to have a processor and some C knowledge. Furthermore, the tests can be reused on other platforms and can even be used on the real chip later. But there is a really tedious and disappointing thing about the processor-driven methodology. Because the processor is running compiled C code fed into a memory model instantiated in the top-level testbench, there is no way to add breakpoints in the source code, no way to view a variable's content, and thus no source-level debug. This can be a real problem when verifying complex IP such as an Ethernet PHY or an MP3 decoder, especially when using pointers or complex calculations in the source code. So, before verifying the IP, we need to first debug our tests, and this process can take a long time.

As feedback on our job, we are using code-coverage metrics that tell us whether we are done testing the device or not. Code coverage is a first available metric providing a measure of test quality and of the remaining work.

Referring to the verification map, it is evident that some conditions cannot be tested, or cannot be tested easily, such as low baud rates. Moreover, a huge number of tests needs to be exercised in order to exhaustively test the IP. The total number N of tests can be calculated as follows:

N = (12-bit divider) x (8 data bits) x (none/odd/even parity bit) x (loopback)

Replacing values, we have a number N of tests:

N = (2^12) x (2^8) x (3) x (2) = 4096 x 256 x 3 x 2 = 6,291,456

We need to exercise more than 6 million different tests. We haven't calculated the time necessary to complete this test suite, but you can imagine how long it would be, especially when entering low baud-rate tests such as 1200 bps, where a bit period is 833 μs.

Using our flow, we can greatly reduce the number of tests to be exercised and, in turn, the time needed to simulate. Tests can first be reduced by constraining the search space. For example, only standard baud rates such as 9600 bps, 19200 bps, 57600 bps, and so on are interesting to verify. Then setting up RTL code coverage and the integrated PSI-E flow helps to reduce the number of tests and the time consumed to verify the design. Targeting 100% code coverage gives feedback on when we are done, so that not all tests need to be exercised to be sure that the IP has been fully exercised. Moreover, automatic results reporting reduces the amount of time spent annotating the verification plan. In fact, all coverage and pass/fail results are automatically reported in the right place in the verification plan at the end of the tests. Then, any format of document can be generated: a back-annotated FreeMind map, a Word report, and even an automatic wiki report.
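The reduction of the search space described above is exactly what SystemVerilog constrained randomization expresses directly. The fragment below is a minimal sketch (the class and field names are ours, not taken from the PSI Electronics environment) restricting generated configurations to standard baud rates rather than all 4096 divider settings:

  class uart_config_item;
    rand int unsigned baud_rate;
    rand bit          parity_en;
    rand bit          parity_odd;

    // Constrain the search space to the standard rates of interest
    constraint c_std_baud {
      baud_rate inside {1200, 9600, 19200, 38400, 57600, 115200};
    }
  endclass

  // Usage sketch: randomize a handful of configurations instead of sweeping every divider value
  // uart_config_item cfg = new();
  // repeat (20) begin
  //   assert(cfg.randomize());
  //   ...
  // end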
But VHDL isn't a verification language, and setting up a complex environment for a complete SoC is a pain. For example, replacing the processor by a bus functional model requires a lot of coding and debugging time. VHDL doesn't provide transaction-level modeling nor a simple way to do constrained-random verification. All of this can be done using VHDL if desired, but at the price of consuming a lot of time, not to mention forgetting about re-use. What you will finally have is a sort of custom and proprietary verification language based on VHDL and command files that only your BFM can work with.

VERIFICATION USING HVL AND OVM

On the other hand, OVM greatly helps verification efficiency and productivity by enabling verification IP re-use while providing a very powerful environment. A good way to start with OVM is to read the Advanced Verification Methodology book from Mentor Graphics as well as the Universal Re-use Methodology guide from Cadence.

The first step is to move the VHDL top-level testbench to SystemVerilog. This can be quickly achieved by re-using all the VHDL models (memory, BFM, …) and re-instantiating them in a SystemVerilog top. Re-using components is really a great deal because it speeds up the process of getting a working environment while enabling one to re-use working complex models that are already coded. Here, a sanity check can be launched to ensure that everything is working properly, like the old VHDL testbench.

The next step is to set up OVM. An OVM environment is basically made by inheriting the provided classes. The only class to overload is ovm_driver, to adapt it to the device(s) under test. Thanks to the object-oriented structure of SystemVerilog, only the printing method and the low-level signal driving methods need to be coded, since other useful functions are already provided by the class member methods.

So we have here a first SystemVerilog/OVM environment. But in this environment, we are still using VHDL models, such as memory models, to have the system-on-chip running.
While OVM enables re-using VHDL verification components, it would be better to have them all coded in SystemVerilog, especially due to limitations inherent in VHDL, such as the lack of hierarchical paths. A way to get a SystemVerilog-only environment would be to rewrite all the memory models, all the clock generators, all the verification IP, and all the bus functional models in SystemVerilog, but this is a time-consuming process due to coding and debugging the new models.

Because we are looking for ways to speed up the process while maintaining high verification quality and re-use, at PSI Electronics we have sought to reduce the time spent writing code. We have found a solution by keeping our UART BFM, in order to reduce the time we'd spend writing a new one, together with the SystemVerilog Direct Programming Interface. Using DPI, we just need to write a master AHB bus functional model which will be driven with the already-coded C tests (almost the same ones that were used in the VHDL testbench). In that case, the verification process and environment setup time is greatly reduced because we don't need processor program memory models, for example.

In this way, we do not need to write and debug all the models. Just the minimum necessary models are translated into SystemVerilog. In that case, only the AHB BFM needs to be programmed. So, we quickly arrive at a verification environment that is bug-free, enabling faster and better verification results. We can then concentrate on translating all our models into SystemVerilog while being able to achieve good verification in the meantime.
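A rough sketch of what this DPI coupling might look like is shown below. All names here (c_main, dpi_ahb_write, dpi_ahb_read, ahb_if) are illustrative assumptions of ours, not the actual PSI Electronics code; the idea is simply that the existing C tests call imported/exported DPI tasks that the SystemVerilog AHB BFM then services.

  // Hypothetical sketch of driving an AHB BFM from existing C tests via DPI
  import "DPI-C" context task c_main();            // entry point of the existing C test program

  module hdl_top;
    ahb_if bus_if();                               // assumed SystemVerilog interface to the AHB BFM

    // The C code calls these exported tasks instead of accessing a memory-mapped bus directly
    export "DPI-C" task dpi_ahb_write;
    export "DPI-C" task dpi_ahb_read;

    task dpi_ahb_write(input int unsigned addr, input int unsigned data);
      // drive one AHB write cycle through the BFM (details omitted)
    endtask

    task dpi_ahb_read(input int unsigned addr, output int unsigned data);
      // drive one AHB read cycle through the BFM (details omitted)
    endtask

    initial c_main();                              // run the C test; it calls back into the tasks above
  endmodule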
CONCLUSION

Mixing OVM, SystemVerilog, and Questa together is a powerful and efficient way of achieving reliable design verification. Questa provides the verification engineer with many features that help in debugging and managing simulation, such as transaction viewing, memory content viewing and modification, custom radix definition, and verification management tools. With these features, it is possible to integrate a custom flow, such as the one used by PSI Electronics, and in that way to automate tasks.

As shown, using FreeMind and XML standards permits you to build friendly verification management flows. Regarding the methodology and the mechanism: HDL quickly shows its limits when thinking about re-use or about complex devices-under-test. It is always possible to develop complex environments and to create complex scenarios involving constrained-random verification or transaction-level modeling, but this is costly. For this reason, OVM and SystemVerilog are a good alternative. OVM provides a ready-made solution to set up a complex but reliable and re-usable verification environment. OVM ships with a huge number of methods and components that enable the verification engineer to quickly develop complex and reliable verification environments and to set up complex scenarios on a complex design.

A Practical Guide to OVM – Part 1
by John Aynsley, CTO, Doulos

INTRODUCTION

This is the first in a series of articles helping you get started with OVM, the Open Verification Methodology for functional verification using SystemVerilog. OVM is supported by a library of SystemVerilog classes. The emphasis in these articles is on getting your code to run, while at the same time coming to understand more about the structure and purpose of the OVM classes.

OVM was created by Mentor Graphics and Cadence based on existing verification methodologies originating within those two companies, including Mentor's AVM, and consists of SystemVerilog code and documentation supplied under the Apache open-source license. The official release can be obtained from the website, www.ovmworld.org. The overall architecture of OVM is well described in the Datasheet and White Paper available from that website.

This article assumes you have at least some familiarity with SystemVerilog, with constrained random simulation, and with object-oriented programming.

OVM IN A NUTSHELL

If you currently run RTL simulations in Verilog or VHDL, you can think of SystemVerilog and OVM as replacing whatever language and coding style you use for your test benches. But OVM test benches are more than traditional HDL test benches, which might wiggle pins on the design-under-test (DUT) and rely on the designer to inspect a waveform diagram to verify correct operation. OVM test benches are complete verification environments composed of reusable verification components, and used as part of an overarching methodology of constrained random, coverage-driven verification.

The key objectives of OVM are to enable productivity and verification component reuse within the verification environment. This is achieved through the separation of tests from the test bench, through having standardized conventions for assembling verification components, through allowing verification components to be highly configurable, and through the addition of automation features not provided natively by SystemVerilog.

GETTING INTO THE OVM CODE

The OVM 1.0.1 release includes two top-level directories, ./src and ./examples, which contain the source code for the OVM library and a set of examples, respectively. The source code is structured into subdirectories, but you can ignore this for now. The ./src directory contains a number of header files supporting several compilation strategies.

In order to compile OVM applications using Questa, the approach we recommend is:

• to add ./src to the include path on the command line, that is, +incdir+/.../src
• to add ./src/ovm_pkg.sv to the front of the list of files being compiled
• to add the following lines to your various SystemVerilog files

// At the top of each file
`include "ovm_macros.svh"

// Within any modules or packages
import ovm_pkg::*;

Make sure that if you put ovm_pkg.sv on the command line as suggested above, you do not include the header "ovm.svh" in the source files.

The ./examples directory tree in the OVM release contains sample script files that can be modified to compile your code.
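Putting those three bullet points together, a freshly created OVM source file typically starts out like the minimal skeleton below (my_pkg and the commented-out class are placeholder names of our own choosing):

// my_pkg.sv -- minimal skeleton following the compile advice above
`include "ovm_macros.svh"   // at the top of each file

package my_pkg;
  import ovm_pkg::*;        // within any module or package that uses OVM

  // user verification classes go here, for example:
  // class my_driver extends ovm_driver;
  //   ...
  // endclass

endpackage: my_pkg

On the Questa command line, ./src/ovm_pkg.sv would then be listed before my_pkg.sv, with +incdir+ pointing at the OVM ./src directory, exactly as in the bullets above.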
THE VERIFICATION ENVIRONMENT

This article shows a very simple example including a design-under-test, a verification environment (or test bench), and a test. Assuming you have written test benches in VHDL or Verilog, the structure should be reasonably obvious. The SystemVerilog code is structured as follows:

- Interface to the design-under-test
- Design-under-test (or DUT)
- Verification environment (or test bench)
- Transaction
- Driver
- Top-level of verification environment
- Instantiation of stimulus generator
- Instantiation of driver
- Top-level module
- Instantiation of interface
- Instantiation of design-under-test
- Test, which instantiates the verification environment
- Process to run the test

Since this example is intended to get you started, some pieces of the jigsaw are missing, most notably a verification component to perform checking and collect functional coverage information. It should be emphasized that the purpose of this article is not to demonstrate the full power of OVM, but just to get you up-and-running.

CLASSES AND MODULES

In traditional Verilog code, modules are the basic building block used to structure designs and test benches. Modules are still important in SystemVerilog and are the main language construct used to structure RTL code, but classes are also important, particularly for building flexible and reusable verification environments and tests. Classes are best placed in packages, because packages enable re-use and also give control over the namespaces of the SystemVerilog program.

The example shown here includes a verification environment consisting of a set of classes, most of which are placed textually within a package, a module representing the design-under-test, and a single top-level module coupling the two together. The actual link between the verification environment and the design-under-test is a SystemVerilog interface.

HOOKING UP THE DUT

The SystemVerilog interface encapsulates the pin-level connections to the DUT.

interface dut_if();

  int addr;
  int data;
  bit r0w1;

  modport test (output addr, data, r0w1);
  modport dut  (input  addr, data, r0w1);

endinterface: dut_if

Of course, a real design would have several far more complex interfaces, but the same principle holds. Having written out all the connections to the DUT within the interface, the actual code for the outer layer of the DUT module becomes trivial:

module dut(dut_if.dut i_f);
  ...
endmodule: dut

As well as removing the need for lots of repetitive typing, interfaces are important because they provide the mechanism for hooking up a verification environment based on classes. In order to mix modules and classes, a module may instantiate a variable of class type, and the class object may then use hierarchical names to reference other variables in the module. In particular, a class may declare a virtual interface, and use a hierarchical name to assign the virtual interface to refer to the actual interface. The overall structure of the code is as follows:

package my_pkg;
  ...
  class my_driver extends ovm_driver;
    ...
    virtual dut_if m_dut_if;
    ...
  endclass

  class my_env extends ovm_env;
    my_driver m_driver;
    ...
  endclass
  ...
endpackage

module top;
  import my_pkg::*;

dut_if dut_if1 ();
dut dut1 ( .i_f(dut_if1) ); class my_transaction extends ovm_transaction;

class my_test extends ovm_test; rand int addr;


... rand int data;
my_env m_env; rand bit r0w1;
...
virtual function void connect; function new (string name = “”);
m_env.m_driver.m_dut_if = dut_if1; super.new(name);
endfunction: connect endfunction: new
...
endclass: my_test constraint c_addr { addr >= 0; addr < 256; }
... constraint c_data { data >= 0; data < 256; }
endmodule: top
`ovm_object_utils_begin(my_transaction)
`ovm_field_int(addr, OVM_ALL_ON + OVM_DEC)
`ovm_field_int(data, OVM_ALL_ON + OVM_DEC)
`ovm_field_int(r0w1, OVM_ALL_ON + OVM_BIN)
The base classes ovm_driver, ovm_env and ovm_test will be `ovm_object_utils_end
discussed below or in later articles.
endclass: my_transaction
If you study the code above, you will see that the connect method of
the class my_test uses a hierarchical name to assign dut_if1, the actual
DUT interface, to the virtual interface buried within the object hierarchy The address, data and command (r0w1) fields get randomised
of the verification environment. In practice, the verification environment as new transactions are created, using the constraints that are built
would consist of many classes scattered across many packages from into the transaction class. In OVM, all transactions are derived from
multiple sources. The behavioral code within the verification environment the class ovm_transaction, which provides some hidden machinery
can now access the pins of the DUT using a single virtual interface. The for transaction recording and for manipulating the contents of the
verification environment does not directly refer to the pins on the DUT, transaction. The constructor new is passed a string that is used to build
but only to the pins of the virtual interface. a unique instance name for the transaction.

TRANSACTIONS

The verification environment in this example consists of three verification components (a stimulus generator, a FIFO, and a driver) and a fourth class representing the transaction passed between them. The stimulus generator creates random transactions, which are stored in a FIFO before being passed to the driver and used to stimulate the pins of the DUT. A transaction is just a collection of related data items that get passed around the verification environment as a single unit. You would create a user-defined transaction class for each meaningful collection of data items to be passed around your verification environment. Transaction classes are very often associated with busses and protocols used to communicate with the DUT.

In this example, the transaction class mirrors the trivial structure of the DUT interface:

  class my_transaction extends ovm_transaction;

    rand int addr;
    rand int data;
    rand bit r0w1;

    function new (string name = "");
      super.new(name);
    endfunction: new

    constraint c_addr { addr >= 0; addr < 256; }
    constraint c_data { data >= 0; data < 256; }

    `ovm_object_utils_begin(my_transaction)
      `ovm_field_int(addr, OVM_ALL_ON + OVM_DEC)
      `ovm_field_int(data, OVM_ALL_ON + OVM_DEC)
      `ovm_field_int(r0w1, OVM_ALL_ON + OVM_BIN)
    `ovm_object_utils_end

  endclass: my_transaction

The address, data and command (r0w1) fields get randomised as new transactions are created, using the constraints that are built into the transaction class. In OVM, all transactions are derived from the class ovm_transaction, which provides some hidden machinery for transaction recording and for manipulating the contents of the transaction. The constructor new is passed a string that is used to build a unique instance name for the transaction.

As transaction objects are passed around the verification environment, they may need to be copied, compared, printed, packed and unpacked. The methods necessary to do these things are created automatically by the ovm_object_utils and ovm_field macros. At first, it may seem like an imposition to be required to include macros repeating the names of all of the fields in the transaction, but it turns out that these macros provide a significant convenience because of the high degree of automation they enable.

The flag OVM_ALL_ON indicates that the given field should be copied, printed, included in any comparison for equality between two transactions, and so on. The flags OVM_DEC and OVM_BIN indicate the radix of the field to be used when printing the given field.
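As a small illustration of that automation (this fragment is not from the article; the task name and transaction handles are invented, and it assumes the usual copy/compare/print methods provided by the OVM object base class), a component could exercise the generated machinery like this:

  task show_field_macro_automation;
    my_transaction tx, tx2;
    tx = new("tx");
    if (!tx.randomize())            // addr and data respect c_addr and c_data
      $display("randomization failed");
    tx.print();                     // printed using the OVM_DEC / OVM_BIN radixes
    tx2 = new("tx2");
    tx2.copy(tx);                   // field-by-field copy generated by the macros
    if (!tx.compare(tx2))
      $display("copy does not match the original");
  endtask: show_field_macro_automation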

VERIFICATION COMPONENTS

In OVM, a verification component is a SystemVerilog object of a class derived from the base class ovm_component. Verification component instances form a hierarchy, where the top-level component
or components in the hierarchy are derived from the class ovm_env. Objects of type ovm_env may themselves be instantiated as verification components within other ovm_envs. You can instantiate ovm_envs and ovm_components from other ovm_envs and ovm_components, but the top-level component in the hierarchy should always be an ovm_env.

A verification component may be provided with the means to communicate with the rest of the verification environment, and may contain a set of standard methods that implement the various phases of elaboration and simulation. One such verification component is the driver, which is described here line-by-line:

  class my_driver extends ovm_driver;

ovm_driver is derived from ovm_component, and is the base class to be used for user-defined driver components. There are a number of such methodology base classes derived from ovm_component, each of which has a name suggestive of its role. Some of these classes add very little functionality of their own, so it is also possible to derive the user-defined class directly from ovm_component (or from ovm_threaded_component – see below).

    ovm_get_port #(my_transaction) get_port;

The get_port is the means by which the driver communicates with the stimulus generator. The class ovm_get_port represents a transaction-level port that implements the get(), try_get() and can_get() methods. These methods actually originated as part of the SystemC TLM-1.0 transaction-level modeling standard. The driver calls these methods through this port to fetch transactions of type my_transaction from the stimulus generator.

    virtual dut_if m_dut_if;

The virtual interface is the means by which the driver communicates with the pins of the DUT, as described above.

    `ovm_component_utils(my_driver)

The ovm_component_utils macro provides factory automation for the driver. The factory will be described in a later article, but this macro plays a similar role to the ovm_object_utils macro we saw above for the transaction class. The important point to remember is to invoke this macro from every single verification component; otherwise, bad things happen.

    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction: new

The constructor for an ovm_component takes two arguments: a string used to build the unique hierarchical instance name of the component and a reference to the parent component in the hierarchy. Both arguments should always be set correctly, and the user-defined constructor should always pass its arguments to the constructor of the superclass, super.new.

    function void build;
      super.build();
      get_port = new("get_port", this);
    endfunction: build

The build method is the first of the standard hooks called back in each of the phases of elaboration and simulation. The build phase is when all of the verification components get instantiated. This build method starts by calling the build method of its superclass, as build methods always should, then instantiates the get port by calling its constructor new. This particular component has no other child components, but if it did, they would be instantiated here.
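For example (a sketch, not taken from the article), a parent component such as the my_env shown earlier might create this driver as a child in its own build method; the constructor and build below are illustrative, not the article's actual my_env code:

  class my_env extends ovm_env;
    my_driver m_driver;

    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction: new

    function void build;
      super.build();
      m_driver = new("m_driver", this);   // child created during the build phase
    endfunction: build

    `ovm_component_utils(my_env)
  endclass: my_env

Returning to the line-by-line walkthrough of my_driver, its run method is: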
    virtual task run;
      forever
        begin
          my_transaction tx;
          #10
          get_port.get(tx);
          ovm_report_message("",
            $psprintf("Driving cmd = %s, addr = %d, data = %d",
                      (tx.r0w1 ? "W" : "R"), tx.addr, tx.data));
          m_dut_if.r0w1 = tx.r0w1;
          m_dut_if.addr = tx.addr;
          m_dut_if.data = tx.data;
        end
    endtask: run

  endclass: my_driver

The run method is another standard callback, and contains the main
behavior of the component to be executed during simulation. Actually,
run does not belong to ovm_component but to ovm_threaded_component. Only threaded components have a run method that is
executed during simulation. This run method contains an infinite loop to
get the next transaction from the get port, wait for some time, then wiggle
the pins of the DUT through the virtual interface mentioned above.

People sometimes express discomfort that this loop appears to run


forever. What stops simulation? There are two aspects to the answer.
First, get is a blocking method. The call to get will not return until the next
transaction is available. When there are no more transactions available,
get will not return, and simulation is able to stop due to event starvation.
Secondly, it is possible to force simulation to stop, despite the existence
of such infinite loops, by calling the method global_stop_request or by
setting a watchdog timer using the method set_global_timeout.
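As a sketch of the first mechanism (the loop count and the helper task below are invented, not from the article), a stimulus generator might request a global stop once it has produced its last transaction:

  virtual task run;
    repeat (100)                  // arbitrary transaction count, for illustration only
      send_one_transaction();     // hypothetical helper that produces one my_transaction
    global_stop_request();        // simulation may now end despite the driver's forever loop
  endtask: run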

The run method also makes a call to ovm_report_message to print


out a report. This is a method of the report handling system, which
provides a standard way of logging messages during simulation. The
first argument to ovm_report_message is a message type, and the
second argument the text of the message. Report handling can be
customised based on the message type or severity, that is, information,
warning, error or fatal. ovm_report_message generates a report with
severity OVM_INFO.

NEXT STEPS
In this article we have examined the OVM verification environment,
transactions and verification components. We will pick up the story again
in subsequent articles, where we will explore OVM simulation phases,
tests, configuration, sequences, and the factory. In the meantime, you
can download the source code for this and other examples from www.doulos.com/knowhow.

Dynamic Construction and Configuration of Testbenches
by Mike Baird, Willamette HDL, Inc.

There are a number of requirements in developing an advanced testbench. Such requirements include making the testbench flexible and reusable, because with complex designs in general we spend as much or more time developing the verification environment and testing as we do developing the DUT. It has been said that any testbench is reusable; it just depends upon how much effort we are willing to put into adapting it for reuse! Given that, however, there are concepts that go into making a testbench that is reusable with reasonable effort:

1. Abstract, standardized communication between testbench components
2. Testbench components with standardized APIs
3. Standardized transactions
4. Encapsulation
5. Dynamic (run-time) construction of testbench topology
6. Dynamic (run-time) configuration of testbench topology and parameters
7. Test as top level class
8. Stimulus generation separate from testbench structure
9. Analysis components

We will primarily address dynamic (run-time) construction of testbench topology and dynamic (run-time) configuration of testbench topology and parameters in this article. In doing so we will lightly touch on test as top level class and stimulus generation separate from testbench structure, as these are related topics.

TRADITIONAL APPROACH LIMITATIONS

A standard testbench structure has a top level module (top). Inside top are the DUT (alu), DUT interface (alu_if) and a top level class (test_env) which is the test environment that contains the testbench components.

Figure 1: Standard Testbench Top-Level Structure

In a standard testbench structure the top level object is the environment class, which embeds the stimulus generator, which contains the algorithm (test) for generating stimulus objects. This limits flexibility when changing tests. To change a test, a new or different stimulus generator must be compiled into the testbench structure. The use of polymorphism or stimulus generators that gather information from files etc. may help but is not a complete answer.

Traditionally a hierarchical class-based environment is built using an object's constructor, a special function which creates the object. Higher level components create lower level components by calling the lower level component's constructor.

In this approach the initial block of the top level module (top) calls the constructor of the test environment class (test_env), which in turn calls the constructor of its child objects and so on down the hierarchy. Once construction is complete, the simulation phases begin, where connection of components, running and post-run processing is done.

This approach limits the flexibility of creating testbench components as their types are determined before the simulation phases begin, essentially at compile time. To change a component type, a change to the structural code and a recompile is needed. Polymorphism helps but again is not a complete answer.

Configuration information is passed from higher level components to lower level components through the constructor arguments. Things such as structural topology, setting parameters (array sizes, constants etc.) and operational modes (error injection, debug etc.) are examples of what may be configured in this manner.

With multiple levels of hierarchy and potentially many parameters, using constructor arguments becomes difficult. It becomes hard or even messy to pass parameters down several levels, and if we have more than 2 or 3 levels of hierarchy the problem becomes progressively worse. This approach does not scale well when adding additional configuration parameters.

OVM provides classes for use as testbench components and base classes to derive testbench components whose semantics of use provide a methodology which addresses the limitations described above.


TEST AS TOP LEVEL CLASS

OVM provides a means for separating the stimulus generation algorithm (test) from the test bench structure through layered stimulus (scenarios) or sequences. This allows us to have a structure where a "test" class is the top level object instead of the environment class. The test or stimulus generating object is contained within this top level class and is not a testbench component. In Figure 2 the MAC_scenario is a stimulus generation object. It is a scenario and implements a multiply accumulate algorithm.

Figure 2: Test as Top Level Class

DYNAMIC (RUN-TIME) CONSTRUCTION OF TESTBENCH COMPONENTS

A well known object oriented best practice (pattern) is called the factory pattern. A factory is a class that creates objects dynamically. Its main benefit is that the type of object that is created is specified at run time. OVM provides a factory that can create any type of transaction or any testbench component that is registered with this factory. The OVM factory is not part of the testbench component hierarchy but rather a static object that exists outside of the hierarchical structure.

To register a class with the factory requires the addition of a property and a method in the class declaration. This property is a specialization of a registry class (a library base class). The method (get_type_name()) that is added returns a string which identifies the type of the class. Optionally there is a macro provided for registration. Each class is registered with an associated string which is used to look up or index the class in the factory.

Creation of a testbench component object is done by calling a factory method (create_component()), while a second factory method (create_object()) is provided for creation of stimulus objects. These methods create the requested object and return a handle to the object to the calling component.

OVM simulation has a number of phases that could be grouped into three general activities:

• Elaboration – phases for building and connecting the testbench components.
• Run – phase for generation and application of stimulus and checking of results.
• Post run – phases for cleanup and reporting.

In the methodology using the factory, a testbench component does not create a child component in its constructor. Rather, all the testbench components create child components dynamically in the build phase using the factory. The build phase is part of the elaboration activities and is the first post-new phase, that is, it is the first phase that occurs after the creation of the top level class. During the build phase, the top level test class creates the environment class object using the factory, which in turn creates its child components using the factory and so on.
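A rough sketch of what this looks like in code is shown below. The class and instance names (alu_env, alu_driver, drv) are invented for illustration, and the registration macro and create_component() call follow the style described above; exact usage can differ slightly between OVM releases.

  class alu_env extends ovm_env;
    alu_driver drv;      // alu_driver: a user-defined driver class, registered with the factory elsewhere

    // registration gives the factory a string name to look this class up by
    `ovm_component_utils(alu_env)

    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction

    // children are created in the build phase through the factory, not in the constructor
    virtual function void build();
      super.build();
      $cast(drv, create_component("alu_driver", "drv"));
    endfunction
  endclass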

During the run phase of simulation, the factory may be used to create stimulus objects, as opposed to calling the constructor directly to create the stimulus object.

THE POWER OF THE FACTORY COMES TO LIFE WITH FACTORY OVERRIDES

Factory overrides may be set up to dynamically change the type of object the factory constructs. With a factory override the factory is modified such that when one type of object is requested from the factory it returns a second or substitute type of object instead. The second type must be type compatible with the original type. Typically it is a derived type. An override may be further restricted by specifying not only the type that will be overridden but the instance path where the type may be replaced. This allows for instance specific overrides in addition to general type overrides.

A factory override may be set up at any time during the simulation. Typically, however, they are set up during the build phase and most often in the build phase of the top level class prior to the construction of the test environment.

Figure 3: Factory overrides in Test


In that arrangement the test customizes the test environment structure to what is required by the test through factory overrides. An example is that the MAC_scenario test requires a driver transactor which provides feedback from the ALU DUT to the MAC_scenario in order to implement the multiply accumulate. Other tests that simply generate arithmetic operations to the ALU may not require a feedback path from the DUT and would use a different driver transactor. Each test can specify, using factory overrides, which type of driver to create in the testbench structure. Another example is that the test may have an override which specifies the type of stimulus object that is generated for the DUT. This approach allows each test to be run, with each one dynamically configuring the testbench structure using the factory and specifying the type of stimulus object, without re-compiling!
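A sketch of such a test is shown below. The type names are invented, and the string-based override call is only one of several override APIs the OVM factory offers; exact method names vary between OVM releases, so treat this as illustrative rather than definitive.

  class mac_test extends ovm_test;
    alu_env m_env;       // alu_env / alu_driver / alu_feedback_driver: hypothetical user classes

    `ovm_component_utils(mac_test)

    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction

    virtual function void build();
      super.build();
      // wherever the factory is asked for an alu_driver,
      // hand back the feedback-capable variant instead
      set_type_override("alu_driver", "alu_feedback_driver");
      // the environment itself is still created through the factory
      $cast(m_env, create_component("alu_env", "m_env"));
    endfunction
  endclass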
DYNAMIC (RUN-TIME) CONFIGURATION OF TESTBENCH TOPOLOGY AND PARAMETERS

OVM has an alternate approach for providing information from a higher level component to a lower level component which avoids the need to pass information through constructor arguments. Indeed, the OVM factory does not allow for additional constructor arguments beyond what is required by the library itself (a name and parent argument), so this alternate approach is required when using the factory!

The alternate approach provides for configuration data to be stored at each level of the hierarchy. A higher level component influences a lower level component by setting configuration data which may be retrieved and used by a lower level component. API methods are provided for setting and retrieving configuration data.

Configuration data may be in the form of an integer value, a string or an object. An object may be used to encapsulate data that is not an integer value or string. The higher level component stores the data with a unique lookup string and further specifies an OVM hierarchical path relative to itself, specifying which of the lower level components in its sub-hierarchy is allowed to access this data. The lower level component retrieves data using the lookup string. Its search for the data begins at the top in the global space and proceeds down the hierarchical path to the component. It retrieves the data at the first successful match it finds. Wildcarding is allowed in both the lookup string and the path to provide flexibility in setting and getting data.

Configuration data may be set at any time during simulation. Typically it is set and retrieved during the build phase. It is common to set configuration data with regard to the testbench structure in the build phase of the test class.
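As a sketch of that set/get API (the component names, field name and value are invented, and the exact signatures may differ between OVM releases), a test might pass an integer parameter down to a component in its sub-hierarchy like this:

  class alu_test extends ovm_test;
    alu_env m_env;

    `ovm_component_utils(alu_test)

    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction

    virtual function void build();
      super.build();
      // visible to m_env and anything below it that asks for "num_slaves"
      set_config_int("m_env*", "num_slaves", 4);
      $cast(m_env, create_component("alu_env", "m_env"));
    endfunction
  endclass

  class alu_env extends ovm_env;
    int num_slaves;

    `ovm_component_utils(alu_env)

    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction

    virtual function void build();
      super.build();
      void'(get_config_int("num_slaves", num_slaves));  // retrieves the value the test set
    endfunction
  endclass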

Figure 4: Test Environment Configurations in Test

An example of the use of object configuration data is to encapsulate a virtual interface. A container class may be created with a property of type virtual interface (virtual alu_if). An object of this type is created in the top level module and its virtual interface property set to the interface (alu_if). This object is then placed into the global configuration data and set to be retrieved by any class object, such as a driver or monitor transactor, that requires a virtual interface connection to the DUT. This avoids the messy approach of creating virtual interface properties at all levels of the hierarchy and passing the virtual interface connection down through the hierarchy.

CONCLUSION

The semantics of use of the classes in the OVM library define a methodology which allows for dynamic (run-time) configuration and construction of the test environment. This, together with other features, provides for greater flexibility in testbench construction and a higher degree of reuse.

With the top level class being a test, using test environment configuration data together with test factory overrides provides each test with the flexibility to configure the test environment to its requirements. Once all the test bench component classes, stimulus classes and tests have been compiled, different tests can be run without recompiling.
Mike Baird has 26 years of industry experience in hardware design
and verification. He is the author and an instructor of Willamette HDL’s
OVM training course. He has 15 years experience teaching design and
verification languages including Verilog, SystemVerilog, OVM, C++ and
SystemC. He is the founder of Willamette HDL (1993 – Present).

Willamette HDL, Inc. (WHDL) is a leading provider of design and


verification training in the U.S. and around the world. Founded in 1993,
it has an extensive offering of technology training for SystemVerilog, the
Open Verification Methodology (OVM), SystemC, Verilog and VHDL.
Willamette is based in Beaverton, OR, (503) 590-8499, www.whdl.com.

OVM Productivity using EZVerify
by Sashi Obilisetty, VeriEZ

INTRODUCTION

Verification of a chip is easily the most time-consuming task confronting the product team. Increasingly, verification engineers are using innovative technologies and newer methodologies to achieve satisfactory functional verification. SystemVerilog is fast becoming the language of choice for implementing verification projects. Its rich set of verification-friendly constructs, IEEE standard status, and universal support across multiple vendor platforms warrants its overwhelming acceptance.

Verification-specific constructs in SystemVerilog include object-oriented data structures, support for constrained random stimulus generation, assertion specification and functional coverage modeling. The Open Verification Methodology (OVM) uses the latest SystemVerilog constructs to provide users with a powerful verification infrastructure. OVM-based teams will benefit greatly from productivity solutions that analyze user files for errors in use model/implementation as well as provide ways to better understand OVM methodology and hierarchy.

VeriEZ's EZVerify is a unique tool suite that offers OVM users a static analysis tool (EZCheck) to perform over 30 OVM rule-checks and a knowledge extraction tool (EZReport) to create persistent documents that outline hierarchy and connectivity.

EZVERIFY – BACKGROUNDER

EZVerify is the industry's first SystemVerilog productivity solution to provide static analysis capability for design, assertions and testbenches as described in the IEEE 1800 standard. Its main capabilities are:

• Identifying coding errors early in the flow, giving beginner and experienced users alike the opportunity to fix such errors with EZCheck, a programmable static linter for HDL-based modules
• Providing comprehensive documentation of design and verification information by analyzing the input modules (new, legacy or external IP).

SystemVerilog projects are complex, and the modules in a typical HDL-based project can easily amount to thousands of lines of code. In addition, projects invariably include multi-person, geographically dispersed teams. It is not enough to have unwritten rules and policies for such projects. It is imperative that companies have corporate-wide enforceable policies to meet aggressive time-to-market schedules while continuing to develop consistent and reusable code.

EZVerify aims to enable efficient design and verification by providing innovative tools that fit into a typical design flow, while targeting specifically methods for detecting errors in design and verification code, and for promoting reuse.

EZCHECK – STATIC ANALYSIS

Figure 1: EZCheck Use Model

EZCheck is a static analysis utility for SystemVerilog modules. On the design front, it can detect problems such as latch inference, race-prone implementation and areas leading to synthesis/simulation discrepancies. On the verification front, it can detect problems such as erroneous usage of concurrency constructs, missing function output assignments, and several other serious errors that are not normally uncovered without several hours of debugging.

EZCHECK: RULESETS

A collection of rules, or a ruleset, is the primary input to EZCheck. A rule can be thought of as a well-defined check, over and beyond the syntax and semantics of the language. For example, a missing semicolon
is a "syntax check" that will be identified by all parsers. Imposing a restriction on the usage of certain constructs, however, is a "rule check". EZCheck comes standard with several rulesets, each of which addresses best practices in various domains of SystemVerilog, such as Functional Coverage, Assertions, Design and Verification.

Figure 2: Available Rulesets in EZCheck

Rulesets are ASCII files that can be created using any standard text editor. Rules are selected and customized in a ruleset. Rules can be added and customized by setting appropriate values for various rule attributes:

• Name: The name of the rule to be added; a different user-supplied rule name may be bound to the predefined rule name. If a user-supplied name exists, all violations and messages will use the user-supplied name.

For example, the following entry binds the predefined rule name 'ValidPrintAndScanFunction' to the user-supplied name 'SafeFunctionCall'. The prefix 'N' is used to identify the entry as a rule name attribute.

------------------------------------------------------------------------
N:ValidPrintAndScanFunction:SafeFunctionCall
------------------------------------------------------------------------

• Value(s): Values are specified using the prefix 'V'. Values may be user-selectable or user-specified. An example of a rule with user-selectable values is "SyncControl" with user-selectable strings "ALL", "ANY", "ORDER" or "CHECK". For this rule, the user may select one or more of the user-selectable strings as appropriate controls for the "sync" call. An example of a rule with user-specified values is "ProgramName", which accepts regular expression syntax for valid program names.

The following entries show examples of rule value specifications:

------------------------------------------------------------------------
#SyncControl rule with user-selectable string 'ALL'
N:SyncControl
V:ALL

#ProgramName rule, should start with uppercase, and end with _prg
N:ProgramName
V:^[A-Z][A-Za-z]*_prg
------------------------------------------------------------------------

• Hint: The optional hint attribute is specified using the 'H' prefix. All rules may have hints.

The following entry shows a rule with the optional hint attribute:

------------------------------------------------------------------------
#SyncControl rule with user-selectable string 'ALL'
N:SyncControl
V:ALL
H:Using 'ALL' options prevents deadlocks
------------------------------------------------------------------------

• Severity: The optional severity attribute uses the 'S' prefix and takes any string as its value. Typically, users will define a few severity levels and assign these levels to the rules.

The following entry shows rules with 2 severity levels, ERROR and WARNING:

------------------------------------------------------------------------
#SyncControl rule with user-selectable string 'ALL'
N:SyncControl
V:ALL
S:ERROR

#ProgramName rule, should start with uppercase, and end with _prg
N:ProgramName
V:^[A-Z][A-Za-z]*_prg
S:WARNING
------------------------------------------------------------------------

• Category: The optional category attribute 'C' is used to categorize rules. Category names may be any user-specified strings. Typically, users will predetermine the categories to be used and place rules in appropriate categories.

The following entry shows rules in 2 categories, Naming and Coding:

------------------------------------------------------------------------
#SyncControl rule with user-selectable string 'ALL'
N:SyncControl
V:ALL
C:Coding

#ProgramName rule, should start with uppercase, and end with _prg
N:ProgramName
V:^[A-Z][A-Za-z]*_prg
C:Naming
------------------------------------------------------------------------

EZREPORT – KNOWLEDGE EXTRACTION

Figure 3: EZReport Use Model

EZReport creates a comprehensive document that summarizes a given HDVL module. It is useful for understanding external verification IP and legacy IP. Verification engineers can make reuse decisions by using the comprehensive document created by EZReport.

Users may also customize the document created to include additional proprietary information such as links to specific files and illustrations. Information provided by EZReport includes:

• Top-level view
• Global variables and subroutines
• Object and method hierarchy
• Class data, including constraints and coverage models

EZReport also provides an interface to the popular Doxygen documentation system.

OVM PRODUCTIVITY WITH EZVERIFY

EZVerify can enable productivity for OVM-based teams by checking for errors in OVM usage and providing an easy mechanism to understand and apply the hierarchy present in OVM-based projects. A snapshot of the hierarchy created by EZReport is shown in Figure 4.

Figure 4: OVM Hierarchy created with EZReport

The rest of this section is devoted to EZCheck usage by utilizing the OVM ruleset.

The OVM ruleset has four basic types of rules:

Object Design Rules: The Object Design Rules are used to specify methods (SystemVerilog tasks or functions) that should be implemented for specific OVM objects for creating reusable verification modules. Required methods can be specified for sub-classes of the following OVM objects:

• ovm_agent
• ovm_driver
• ovm_env
• ovm_monitor
• ovm_object
• ovm_sequence
• ovm_sequence_item
• ovm_sequencer
• ovm_scoreboard
• ovm_test
• ovm_transaction
OVM Best Practices: The OVM Best Practices rules concentrate on recommendations provided by OVM tutorials for putting together a robust and reusable verification environment. They include rule checks for recommended macro calls (for example, looking for the usage of the 'ovm_field_int' macro).

Connectivity Rules: Connectivity rules focus on ensuring that valid connections exist between the DUT and verification environment, as well as the various verification components. Rules include checks for ensuring that guidelines specified for the 'connect()' callback are satisfied (for example, looking to ensure that producers and consumers are connected).

Verification Environment Rules: These rules are designed to provide compelling value by checking for classic lint-style errors. Rules include checks for the following occurrences in SystemVerilog code:

• Missing calls to the super.<method>() method, when applicable
• Creation of related sub-classes using the recommended 'create_component' method from the ovm_factory
• Global timeouts are specified for individual tests using recommended OVM routines

Some of the errors highlighted by the EZCheck OVM rule-checker for verification code are shown below:

//Environment Class created by the end-user
class xbus_env extends ovm_env;

  // Virtual Interface variable
  protected virtual interface xbus_if xi0;

  // Control properties
  protected bit has_bus_monitor = 1;
  protected int unsigned num_masters = 0;
  protected int unsigned num_slaves = 0;

  // The following two bits are used to control whether checks and coverage
  // are done both in the bus monitor class and the interface.
  bit intf_checks_enable = 1;
  bit intf_coverage_enable = 1;

  // Components of the environment
  xbus_bus_monitor bus_monitor;
  xbus_master_agent masters[];
  xbus_slave_agent slaves[];

  // Provide implementations of virtual methods such as get_type_name and create
  `ovm_component_utils_begin(xbus_env)
    `ovm_field_int(has_bus_monitor, OVM_ALL_ON)
    `ovm_field_int(num_masters, OVM_ALL_ON)
    `ovm_field_int(num_slaves, OVM_ALL_ON)
  `ovm_component_utils_end

  // build
  function void build();
    string inst_name;
    if(has_bus_monitor == 1) begin
      bus_monitor = new xbus_bus_monitor("xbus_monitor");
    end
    slaves = new[num_slaves];
    for(int i = 0; i < num_slaves; i++) begin
      $sformat(inst_name, "slaves[%0d]", i);
      $cast(slaves[i], create_component("xbus_slave", inst_name));
    end
  endfunction : build

endclass

Among the problems EZCheck flags in this class:

• Missing ovm_field_int macro calls for 'intf_checks_enable' and 'intf_coverage_enable'
• Monitor creation missing the recommended ovm_factory::create_component call
• super.build() missing in the build method
• The string "xbus_slave" does not match the name of the corresponding component 'xbus_slave_agent'
• 'xbus_env' is missing a constructor
CONCLUSION
Productivity is one of the most important criteria used by end-users
when evaluating new methodologies. OVM promises a high productivity
environment for verification engineers. Adoption of a new methodology
comes with its challenges. EZVerify provides a solid infrastructure to
OVM users, and can be used consistently throughout the company to
accelerate adoption of OVM.

Requirements-Centric Verification
by Peet James, Sr. Verification Consultant, Mentor Graphics

What is a design requirement?
What is a verification requirement?
How do I define a generation stimulus sequence or scenario?
How do I decide what to check in a scoreboard, and what to check in an assertion?
How do I define coverage points?

In an ideal world, a design specification would detail all the necessary requirements of the design under test (DUT). Yet, even if this were the case, it would most probably still not clarify everything that is needed for a design team to implement the requirements, and for the verification team to verify adherence to them. For instance, the design specification might give general information about an interface, but a second protocol specification might be needed to solidify the necessary details of that requirement. Even with multiple documents to reference, it is quite often still unclear and ambiguous. Typically, the necessary information is in someone's head and a series of discussions is needed to refine the requirement for actual use. Often one big ambiguous requirement is broken up into several refined requirements. On top of this, some requirements are not evident upfront at all, and they tend to develop only as you progress along with the project. No one had thought of them until the actual implementation was underway. Clearly, it would be beneficial to have some structured way to gather and maintain design requirements.

After a decade of using high-level verification languages (SystemVerilog, Vera, e, etc.) which enable coverage-driven constrained-random verification, it remains difficult for most verification teams to define their generation stimulus, correctness checks (scoreboarding & assertions) and especially coverage groups, points, etc. Blank stares and sheer boredom typically result when most verification teams call a meeting to brainstorm and identify all these verification requirements. A novel way of extracting design requirements, and subsequently translating them into corresponding verification requirements, is being used successfully by some groups.

A design requirement is information about the functional operation of a design. It is something about the design that needs to be implemented by the designers before they will be confident in saying that the design capture is complete. It is a subset, a microcosm, of the main specification. A verification requirement is the systematic set of generation stimulus, correctness checks, and coverage points corresponding to a specific design requirement. It is something about the design that both designers and verification engineers want to stimulate, check and cover before they will be confident in saying that the design has been verified.

The proposed path from design requirements to tangible verification requirements and ultimately actual verification code typically takes 3 steps. Each of these steps takes a certain amount of discipline and expertise:

1) EXTRACTING DESIGN REQUIREMENTS.

This is the hard part; it has historically proven problematic to get engineers to sit down and go over all the documents, and all the
information that is not written down, and put it in some sort of structured format. Moreover, even if a team of engineers could and would do this at the start of a project, requirements tend to change and evolve, making the problem even more insurmountable.

Some helpful tools are available to grep through specification documents, extracting out requirements for you, but they only work as a starting point, and only if the specifications are mature and well written. There are also formal specification writing tools where the writer includes certain strategically placed constructs that will later be used by the tool to map directly to design requirements. These are mainly for mission-critical applications where a super detailed trail of breadcrumbs is needed in case the plane or space ship crashes.

These disciplined automated approaches are helpful, but they still do not address the issues of requirements whose information is spread over several documents, or of information that is still only locked inside someone's head.

Extraction of design requirements needs to be done the old fashioned way; people sitting down together and individually, pouring over specifications, asking questions, and then logging the requirements in some fashion, like a spreadsheet of some sort. These requirements form a database and often have other data fields besides the requirement itself, added in other columns in order to trace the "what", "where" and "who" information of the requirement. Once extracted, the design requirements need to be refined so that they are clear, and prioritized so that it is known which ones are critical and which ones are just 'nice to have'. The resulting detailed design requirements are then taken into actual RTL by the design team, and they are also used to strategically drive the verification effort.

Before moving on to step 2, let us look a bit closer at the aforementioned design requirements database. This database is the first level of automation. It is a great way to keep track of all the requirements. However, it needs a front-end, a way to capture all the initial requirements - maybe by reading in an XLS spreadsheet - and also an easy way to add new requirements and update existing ones as needed. A GUI that allows the person who is entering the requirements to fill in as many of the data fields as possible is a good idea. The resulting database is much more powerful than a stand-alone spreadsheet. The information inside the database can be used to make custom reports, like a report with all the requirements that are for a specific block, or assigned to a given engineer, or of a specific priority. On top of this, the database can be made with a standard interface so that it can readily link to other tools via scripts. In this way we can back-annotate useful information (like coverage or assertion data) from simulation regression runs. This feedback information then can be reviewed in automated ways, to enhance and direct our verification effort. Now on to step 2.

2) TRANSLATION OF DESIGN REQUIREMENTS INTO VERIFICATION REQUIREMENTS.

Armed with a spreadsheet or database of design requirements, the verification team needs to morph these detailed design requirements into specific verification requirements and then actual verification code. Each design requirement might map to one verification requirement or a whole bunch. It might map just to coverage, or more likely to some generation, some form of checking and some coverage as well. Coverage-only requirements are rare, and the hardship of gathering coverage occurs often because people sit down to write coverage and fail to make connections to actual design requirements. Start with the design requirement, translate it to something tangible in generation or checking, and then take that into coverage. This is much more likely to get you somewhere. This is the missing link. Sure it takes some expertise and know-how to figure this out, but at least there is a concrete starting point. The previous way of doing this was just to sit down and come up with coverage points and generation scenarios and checkers without any frame of reference. The prioritized, detailed list of design requirements gives us this frame of reference. We can follow a list of guidelines and principles, asking a set of questions (the what, the how, the when, the where?) of each design requirement as we translate
them into verification requirements. We then enter and link these in our spreadsheet and database, so that they can then be tracked, assigned and implemented.

3) MAPPING VERIFICATION REQUIREMENTS TO A VERIFICATION INFRASTRUCTURE:

The last part is to figure out specifically where the verification requirement will best go into my verification infrastructure. This is the details of what layer it will go at, what the names will be, the trigger, the signals. This is where, in generation, we map this to a generator layer and a specific sequence or scenario. This is where, in checking, we
and a specific sequence or scenario. This is where, in checking, we
decide the specifics of how this will go into the scoreboard or assertion.
This is where, in coverage, we decide what coverage group it will go in, what the actual coverage items will be, and what the coverage goal is. This is
most importantly, in coverage, where we capture, filter or splice common
data and create useful coverage information that can guide our further
verification effort, and not just create a ‘needle in a haystack’ situation
with just tons of data. Here also is where we create the database
feedback mechanisms that will hook up with our simulation regression
runs via links with coverage and assertions.

Example: Let's say my design spec outlines 3 control registers with


about 10 fields, all of which set up and control the chip. Along with
the general register field information, the spec outlines a list of rules
that these configurations are to follow. Stuff like, “you can’t go into this
mode after that mode”. Step one, is to take all this info, add any missing
info, refine them a couple of times into a list of corresponding design
requirements. Step two then would translate these into generation
requirements of making a generation engine to load up those registers
correctly and at the proper times, of adding checks to follow the rules,
and then coverage to see randomly what combinations we tried. The
third step would go into details of “how” to actually do this. For coverage,
this would be groups of configuration modes, what specific fields to
collect, what to cross and such.
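As a hedged sketch of what that third step might produce (the register fields, mode encodings and names below are invented for illustration, not taken from any real specification):

  // Coverage: which randomly chosen configuration combinations were exercised
  class chip_cfg;
    rand bit [1:0] mode;   // e.g., a mode field from control register 0
    rand bit       speed;  // e.g., a speed-select field from control register 1

    covergroup cfg_cg;
      mode_cp      : coverpoint mode;
      speed_cp     : coverpoint speed;
      mode_x_speed : cross mode_cp, speed_cp;
    endgroup

    function new();
      cfg_cg = new();
    endfunction
  endclass

  // Checking: one "you can't go into this mode after that mode" rule as an assertion
  module cfg_rules_checker(input logic clk, input logic [1:0] mode);
    property p_no_sleep_after_test;
      @(posedge clk) (mode == 2'd2) |=> (mode != 2'd3);
    endproperty
    a_no_sleep_after_test: assert property (p_no_sleep_after_test);
  endmodule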

Ideally, higher ups would write a totally awesome design spec, and
then would create this new detailed design requirements spec/database.
Then designers would go off and implement RTL, and verification
engineers would translate the design requirements into verification
requirements and then implement them. But, the reality is that it takes
everyone to create and maintain a requirements database. The give and
take, the discussions, the arguing, the struggle is often a very beneficial
part of the database development. It brings out problems early, which
is what verification is all about. Taking the upfront time to create and
maintain a requirements document/database, is proving to produce big
payoffs for both the design and verification efforts. Design requirements are extracted, refined and prioritized, often being reworked into several more specific design requirements. The verification team then can translate and map these design requirements into specific verification requirements. If this is done in a database format, such as with Questa's Verification Manager and UCDB, they can use it to automate a bunch of verification tasks. To make it all work however, an easy way to extract and keep these requirements up to date is needed. Each chip is unique and this makes automation, especially the extraction and refining phase, difficult. But tools like Questa are there to help manage this important part of the problem.

Editor: Tom Fitzpatrick
Program Manager: Rebecca Granquist
Senior Writer: Todd Burkholder

Wilsonville Worldwide Headquarters


8005 SW Boeckman Road
Wilsonville, OR 97070-7777
Phone: 503-685-7000

Subscribe: http://www.mentor.com/horizons

