2008 Ninth International Workshop on Microprocessor Test and Verification, 2008
Identifies redundant verifications in the brute-force simulation approach and explores a methodology to enhance verification efficiency. Randomized transactions are dynamically focused, selected based on DUT feedback, and intrusively used to force the DUT into states of interest. The methodology is applied to the verification of a DSP data cache.
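The feedback-driven selection loop can be sketched as below; the cache model, transaction names, and weighting are hypothetical placeholders, not the paper's implementation.

```python
import random

# Hypothetical DUT model: the cache reports its fill level as feedback,
# and the generator biases transaction selection toward a state of
# interest (a full cache). Names and weights are illustrative only.
class DutModel:
    def __init__(self):
        self.fill = 0  # e.g., number of valid cache lines (0..8)

    def apply(self, txn):
        if txn == "write":
            self.fill = min(self.fill + 1, 8)
        elif txn == "evict":
            self.fill = max(self.fill - 1, 0)

def focused_txn(dut, target_fill=8):
    # Dynamically focus randomization: below the target state, weight
    # writes heavily; once there, randomize freely to explore around it.
    if dut.fill < target_fill:
        return random.choices(["write", "evict"], weights=[9, 1])[0]
    return random.choice(["write", "read", "evict"])

random.seed(1)
dut, hit_full = DutModel(), False
for _ in range(200):
    dut.apply(focused_txn(dut))
    hit_full = hit_full or dut.fill == 8
print(hit_full)  # the biased stimulus drives the DUT into the full state
```

An unbiased generator would wander; the feedback-weighted one reaches the state of interest quickly, which is the redundancy-removal idea in miniature.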
This paper presents a Deep Learning (DL) based methodology to predict the Doppler shift in mobile communications. MATLAB simulations conducted on single-carrier, LTE, and 5G systems, in the Doppler shift range of [0, 50, …, 550] Hz, attain a prediction accuracy of ~95%, a normalized mean squared error (nMSE) between [10⁻², 10⁻³], and a prediction latency equal to the duration of a sequence of five inputs (e.g., frames, subframes, or slots, respectively). Simulations are performed on random payloads (i.e., non-data-aided, NDA) and on time-domain and frequency-domain signals embedded with variable modulation types [QPSK, 16QAM, 64QAM], delay profiles (i.e., LTE's [EPA, EVA, ETU]), and signal-to-noise ratios (SNRs) of [-10, 20] dB. The methodology utilizes a hybrid model of a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) and two techniques to enhance prediction accuracy, namely, input diversity and binary prediction.
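Input diversity, as described, trades latency (a sequence of five inputs) for accuracy by aggregating several per-input decisions; a minimal sketch, assuming a simple majority vote as the aggregation rule (an assumption, not necessarily the paper's exact scheme):

```python
from collections import Counter

# Hypothetical per-input classifier outputs for a sequence of five
# inputs (e.g., five subframes); each entry is a predicted Doppler
# class in Hz drawn from the [0, 50, ..., 550] Hz grid.
per_input_predictions = [450, 500, 450, 450, 400]

def diversified_predict(predictions):
    # Majority vote across the input sequence: a single noisy
    # per-input prediction is outvoted, so accuracy improves at the
    # cost of waiting for the full sequence of inputs.
    return Counter(predictions).most_common(1)[0][0]

print(diversified_predict(per_input_predictions))  # -> 450
```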
This paper proposes Deep Learning (DL) for Signal Processing, with reviews and discussions of three recent DL application advancements in wireless communication, which predict the channel profile, Doppler shift, and signal-to-noise ratio (SNR) of LTE and 5G systems. MATLAB simulations are performed on time-domain and frequency-domain signals, emulating real wireless environments, with randomized payloads (i.e., non-data-aided), modulation types [QPSK, 16QAM, 64QAM], Doppler shifts [0, 50, …, 550] Hz, and SNRs of [-10, 20] dB. The predictions are accurate at ~95%. The methodology consists of input diversity to enable multiple inputs per prediction, binary prediction to reduce prediction complexity and uncertainty, and a hybrid DL convolutional neural network and long short-term memory (CNN-LSTM) model to learn features in every input and across inputs. Additionally, the paper presents common lessons learned and future research directions. The designed methodology provides an effective...
2020 8th International Conference on Wireless Networks and Mobile Communications (WINCOM)
Deep learning (DL) is applied to predict the signal-to-noise ratio (SNR) in de facto LTE and 5G systems in a non-data-aided (NDA) manner. Various channel conditions and impairments are considered, including modulation types, path delays, and Doppler shifts. Both time-domain and frequency-domain signal grids are evaluated as inputs for SNR prediction. A combination of a convolutional neural network (CNN) and a long short-term memory (LSTM), CNN-LSTM, is used as the SNR predictor; learning both spatial and temporal features is known to improve DL prediction accuracy. Techniques employed to enhance performance are SNR range/resolution manipulation, binary prediction, and multiple-input prediction. Computer simulation is conducted using the MATLAB LTE, 5G, and DL toolboxes to generate OFDM signals, model fading channels with AWGN, and construct the CNN-LSTM. Simulation results show that, with offline training, DL-based prediction of SNR in LTE and 5G systems has better accuracy and latency than traditional estimation techniques. Specifically, SNR prediction for an SNR range of [-4, 32] dB and a resolution of 2 dB utilizing time-domain signals has an accuracy of 100%, hence a normalized mean square error (NMSE) of zero, and a latency of 1 millisecond or less.
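The SNR range/resolution manipulation amounts to quantizing a continuous range into discrete classes; a sketch of the [-4, 32] dB range at 2 dB resolution, using the common textbook NMSE definition (an assumption):

```python
# Quantize the SNR range [-4, 32] dB at 2 dB resolution into classes,
# then score predictions with normalized mean square error (NMSE).
LO, HI, STEP = -4, 32, 2
classes = list(range(LO, HI + STEP, STEP))  # [-4, -2, ..., 32]

def snr_to_class(snr_db):
    # Map a measured SNR to the nearest 2 dB class, clamped to range.
    idx = round((snr_db - LO) / STEP)
    return max(0, min(idx, len(classes) - 1))

def nmse(true, pred):
    # Common definition: squared error normalized by signal energy.
    num = sum((t - p) ** 2 for t, p in zip(true, pred))
    den = sum(t ** 2 for t in true)
    return num / den

true = [10, 20, 30]
pred = [classes[snr_to_class(s)] for s in true]  # exact class hits
print(len(classes), nmse(true, pred))  # 19 classes; exact hits -> NMSE 0
```

A 100% classification accuracy over such a grid implies every prediction lands on its true class, which is why the reported NMSE is zero.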
SystemVerilog does not support multiple inheritance, function overloading, or class templates, which are standard features in other OOP languages like Python and C++. Inheritance is the backbone of SystemVerilog and UVM testbenches, enabling horizontal and vertical reuse. Multiple inheritance is more reusable than single inheritance, allowing a subclass to inherit properties and methods from multiple superclasses. Function overloading and class templates are nice-to-have but less useful in testbench design. Duplicating code can provide the means to achieve multiple inheritance, function overloading, and class templates. This research proposes two workarounds, based on class composition and polymorphism, without code duplication.
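The composition-plus-delegation idea behind the workaround can be illustrated in Python, where a composed member stands in for the missing second inheritance path; the class and method names are purely illustrative, not the paper's code:

```python
# Single inheritance plus composition emulating multiple inheritance:
# the subclass inherits from one superclass and delegates to an
# embedded instance of the second, mirroring the class-composition
# workaround without duplicating either superclass's code.
class EthernetAgent:
    def send(self):
        return "ethernet frame sent"

class PcieAgent:
    def send_tlp(self):
        return "pcie TLP sent"

class MultiProtocolAgent(EthernetAgent):
    def __init__(self):
        self.pcie = PcieAgent()  # composed, not inherited

    def send_tlp(self):
        # Delegation provides the second "inheritance" path.
        return self.pcie.send_tlp()

agent = MultiProtocolAgent()
print(agent.send(), "/", agent.send_tlp())
```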
SystemVerilog assertions (SVA) and scoreboards have been effective checking instruments in design verification at the signal and transaction level abstractions, respectively. SVAs work on internal DUT signals, checking protocols, while scoreboards work on DUT I/O transactions, checking DUT functional specifications. These two mechanisms independently complement each other; however, they can leverage each other's strengths to enhance overall checking productivity. This research investigates potential extensions of the two techniques. Two extensions are identified and proposed. The first is to extend SVA to utilize objects that encapsulate functionally related signals to facilitate higher-abstraction checking, similar to the transaction-level scoreboard. The benefit is to leverage the effective built-in sequence and property constructs for transaction-level checking. The second is to extend scoreboards to import internal DUT signals and objects for procedural checking. The benefit is to leverage procedural programming, such as if statements, loops, functions, tasks, classes, data structures, algorithms, and OOP, on internal DUT checking. The implementation of the above ideas is investigated. The first extension cannot be implemented due to SystemVerilog's restriction on using dynamic types (e.g., classes, objects) in sequences and properties. However, a workaround is proposed based on the implementation of the second extension.
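The second extension, a scoreboard that procedurally checks imported internal DUT state alongside I/O transactions, might look like the following sketch; the pass-through FIFO reference model and the occupancy signal are hypothetical examples:

```python
# A transaction-level scoreboard extended to read internal DUT state:
# besides comparing expected vs. actual output transactions, it checks
# an imported internal occupancy counter with ordinary procedural code.
class Scoreboard:
    def __init__(self):
        self.expected = []   # reference model: a pass-through FIFO
        self.errors = []

    def push_input(self, txn):
        self.expected.append(txn)

    def check_output(self, txn, internal_occupancy):
        want = self.expected.pop(0)
        if txn != want:                               # transaction check
            self.errors.append(f"data mismatch: {txn} != {want}")
        if internal_occupancy != len(self.expected):  # internal-state check
            self.errors.append("occupancy disagrees with reference model")

sb = Scoreboard()
for d in [1, 2, 3]:
    sb.push_input(d)
for d in [1, 2, 3]:
    # internal_occupancy stands in for a signal imported from the DUT.
    sb.check_output(d, internal_occupancy=len(sb.expected) - 1)
print(sb.errors)  # -> []
```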
SystemVerilog coverage enables detailed measurements of constrained randomization, including scenarios of interest (i.e., binning). Coverage-driven verification (CDV) has been the methodology of choice for design verification. However, coverage distribution is often not considered an essential factor; coverage fulfillment is reactive, not proactive; and coverage results are available late, at post-regression. This research investigates and proposes enhancement extensions for SystemVerilog coverage and CDV. Extensions of SystemVerilog coverage are proposed as follows: 1) Extend SystemVerilog coverage capability with probability density function (pdf) reporting. 2) Leverage and include coverage as constraints, or coverage-driven constraints. 3) Provide probability-based and pre-regression coverage reporting capability, including covergroups, cover assertions, and code coverage. Additionally, coverage predictions of covergroups can be provided pre-DUT. With the above coverage extensions, CDV should apply the principle of coverage first, Coverage-First Verification (CFV), rather than during or near the end of the verification process. Coverage of covergroups and cover assertions should be implemented, collected, and computed early, during or after verification planning, before DUT availability and time-consuming checkers (i.e., SVAs and scoreboards). This shift allows the discovery and remedy of coverage holes early. This research also explores potential implementations of the above extensions.
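The proposed pdf reporting could be approximated pre-regression by sampling the stimulus constraint model and tallying bin probabilities, with no DUT in the loop; a sketch under the assumption of a simple skewed constraint and hypothetical packet-length bins:

```python
import random
from collections import Counter

# Pre-regression coverage prediction: sample the stimulus constraint
# model and report a probability density over covergroup bins,
# exposing bins that randomization is unlikely ever to hit.
def constrained_value():
    # Stand-in for an SV constraint: length 64..1518 bytes, skewed
    # short (minimum of two uniform draws favors small values).
    return min(random.randint(64, 1518), random.randint(64, 1518))

bins = {"short": range(64, 256), "mid": range(256, 1024), "long": range(1024, 1519)}

def predicted_pdf(samples=10_000):
    counts = Counter()
    for _ in range(samples):
        v = constrained_value()
        for name, rng in bins.items():
            if v in rng:
                counts[name] += 1
    return {name: counts[name] / samples for name in bins}

random.seed(0)
pdf = predicted_pdf()
print(pdf)  # low density on "long" flags a likely coverage hole early
```

Such a report makes coverage distribution a proactive input to constraint tuning rather than a post-regression surprise.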
Constrained randomization is performed in transactions (i.e., sequence items) with fine-grained signal-level control and in sequences with coarse-grained control (e.g., rand_mode, constraint_mode). As a result, most randomization engineering and coding is centralized within a single transaction type and operated at runtime with one-size-fits-all complexity that is error-prone and undermines reuse and scalability. This study presents the Polymorphic Transaction-Level Constraint, where constrained randomization is additionally layered and distributed at a higher abstraction level. Specifically, polymorphism is applied to allow constrained randomization at the transaction type level. Instead of one type, multiple transaction types collaborate at runtime in the constrained randomization mechanics, allowing constraint encapsulation for better reuse and extensibility.
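Transaction-type-level constraint layering can be sketched in Python, with each subtype encapsulating its own constraint and the sequence choosing a type at runtime; in SystemVerilog this would use constraint blocks and overrides, and the names below are illustrative:

```python
import random

# Base transaction with a default constraint; subtypes override the
# constraint to encapsulate scenario-specific randomization, and the
# sequence picks a subtype at runtime (polymorphic constraint layering)
# instead of piling every constraint into one transaction type.
class Transaction:
    def randomize(self):
        self.addr = random.randrange(0, 2**16)
        return self

class LowAddrTransaction(Transaction):
    def randomize(self):
        self.addr = random.randrange(0, 256)        # constraint: low addresses
        return self

class AlignedTransaction(Transaction):
    def randomize(self):
        self.addr = random.randrange(0, 2**16) & ~0x3  # constraint: 4-byte aligned
        return self

random.seed(3)
pool = [Transaction, LowAddrTransaction, AlignedTransaction]
txns = [random.choice(pool)().randomize() for _ in range(8)]
print([t.addr for t in txns])
```

Each constraint lives with its own type, so adding a new scenario means adding a subtype, not editing a monolithic constraint block.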
SystemVerilog adds randomization and constraint features to enable constrained random stimuli generation, significantly improving verification productivity over directed testing. As designs increase in size and complexity, more engineering resources are spent creating sophisticated constraints extended over time and space. Coordination of constrained randomization to create interesting sequential inter-interface scenarios (e.g., in SoC verification) is challenging and time-consuming. This study proposes a solution to address the complexity of constrained randomization associated with time-space extension. The solution is architected at the language level, extending SystemVerilog randomization and constraint features: 1) Randomization (i.e., rand) is extended beyond class variables to include class functions and class types. 2) Constraints are extended temporally, similar to SystemVerilog sequence and property capability. 3) Constraint expressions are extended to include internal DUT variables. These new features can best be implemented by updating the SystemVerilog language or creating extension packages similar to the UVM, Python's NumPy, or C++'s vector.
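Absent language support, the proposed extensions can only be emulated in a host language; the Python sketch below randomizes over class types (extension 1) and imposes a temporal, cross-transaction constraint (extension 2), purely as an illustration of the proposal, not SystemVerilog syntax:

```python
import random

# Emulation of the proposed extensions: "rand" applied to class types,
# not just variables, plus a temporal constraint spanning consecutive
# transactions (addresses must be non-decreasing across the sequence).
class Read:  kind = "read"
class Write: kind = "write"

def randomize_sequence(n, seed=7):
    rng = random.Random(seed)
    seq, prev_addr = [], 0
    for _ in range(n):
        txn = rng.choice([Read, Write])()                    # rand over class types
        txn.addr = rng.randrange(prev_addr, prev_addr + 16)  # temporal constraint
        prev_addr = txn.addr
        seq.append(txn)
    return seq

seq = randomize_sequence(5)
print([(t.kind, t.addr) for t in seq])
```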
Polymorphism is the ability of objects of different classes related by inheritance to respond differently to the same member function calls. Polymorphism has been used extensively in OOP languages like Python and C++ to promote reuse and extensibility. Polymorphism would enhance testbench dynamics, allowing multiple dynamic, on-the-fly, runtime swaps of testbench objects (e.g., sequences) and components (e.g., drivers), realizing a dynamic testbench. Specifically, knobs can be adjusted at runtime to randomize not only object instantiations (the existing way) but also object and component types, facilitating behavioral and functional dynamics. This study investigates polymorphism in SystemVerilog and UVM. Since SystemVerilog provides virtual function capability and allows a base class handle to reference a derived class object, polymorphism can be implemented in SystemVerilog. On the other hand, UVM promotes a factory where objects (e.g., sequences) and components can be overridden (i.e., at the build phase) without recompilation, but not during the time-consuming transaction-exchanging execution (i.e., at the run phase). As a result, objects and components cannot be dynamically swapped during runtime (i.e., polymorphism). This study investigates polymorphism implementations in SystemVerilog and UVM.
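The SystemVerilog mechanism, a base-class handle retargeted to different derived objects through virtual methods, maps directly onto Python; a sketch with hypothetical driver types:

```python
# Runtime swapping via polymorphism: the testbench holds one base-class
# handle and retargets it to a different derived driver mid-run,
# something a build-phase-only factory override cannot do.
class Driver:
    def drive(self, txn):
        return f"generic drive of {txn}"

class ErrorInjectingDriver(Driver):
    def drive(self, txn):
        return f"drive {txn} with injected CRC error"

handle = Driver()                 # base-class handle, default behavior
log = [handle.drive("t0")]
handle = ErrorInjectingDriver()   # swapped on the fly at run time
log.append(handle.drive("t1"))
print(log)
```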
This research proposes algorithms for the automation of coverage closure, which is one of the two main design verification tasks. The automation aims for closure of code coverage and functional coverage, including covergroups and cover assertions. The automation of covergroup closure is fulfilled by creating new constraints from missing coverpoints or crosses. The automation of code coverage and cover assertion coverage closure comprises a backward propagation path analysis, to form sequences of functional conditions (i.e., RTL expressions) from coverage holes to DUT inputs, and a forward propagation path analysis, to produce sequences of stimuli that satisfy the coverage-hole sequences. The former is straightforward, but the latter is computing-intensive due to deep coverage-hole sequences, a large input space, and complex RTL designs. Two techniques are proposed for generating stimuli sequences to fill coverage holes. The reverse logic analysis expands coverage-hole sequences with the corresponding RTL logic and reversely computes stimuli sequences. The trial-and-error analysis utilizes stimuli trial-and-error iterations combined with randomization save and restore, and piecewise reverse logic analysis.
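The trial-and-error analysis, iterating random stimuli while saving good states and restoring them after failed trials, can be sketched on a toy DUT; the counter, hole condition, and checkpoint scheme are placeholders, not the paper's algorithm:

```python
import random

# Toy DUT: a counter that advances only on the correct stimulus and
# resets to 0 otherwise; the "coverage hole" is reaching the value 7.
# Trial-and-error with save/restore keeps each successful prefix
# instead of rerunning the whole deep sequence from reset.
def step(state, stimulus):
    return state + 1 if stimulus == state % 2 else 0

def fill_hole(target=7, max_trials=1000, seed=5):
    rng = random.Random(seed)
    state, stimuli = 0, []
    checkpoint = (state, list(stimuli))
    for _ in range(max_trials):
        s = rng.randint(0, 1)
        state = step(state, s)                       # apply a trial stimulus
        if state > checkpoint[0]:
            stimuli.append(s)
            checkpoint = (state, list(stimuli))      # save the good prefix
            if state == target:
                return stimuli
        else:
            state, stimuli = checkpoint[0], list(checkpoint[1])  # restore
    return None

seq = fill_hole()
print(seq)  # only stimuli extending the prefix survive: [0, 1, 0, 1, 0, 1, 0]
```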
Beyond UVM is an attempt to peek into the future at the successor of UVM, or UVM-Advanced. Based on a forward-looking vision of future DUTs and the corresponding DV needs and requirements, Beyond UVM's capability is outlined. To meet nonlinear demands in time-to-market, functionality, performance, and scalability, future IC architectures are assumed to advance toward dynamic and vertical modularity and configurability, enabled by multi-die interconnect and micro-IC technology. Beyond UVM would be designed to address DUT agility and dynamicity, including virtualization, softwarization, massive I/O, control vs. data separation, and system elasticity. Specifically, Beyond UVM is capable of hierarchical scenario-level constrained randomization, DUT type specificity, control/data plane specificity, and dynamic, on-the-fly DUT scalability and configurability. This paper is a proactive step toward shaping the future DV methodology, which would optimally be based upon recommendations, and possibly standardization, from the design and DV community.
As IC design increases in size and complexity, so does design verification (DV), to meet the challenging requirements. On the other hand, DV engineering continuously faces increasingly tight schedules to shorten time-to-market. Increasing engineering and simulation resources is the typical solution to expedite the DV process. This paper investigates techniques to enhance DV productivity adopted from manufacturing operations and software development and operations (DevOps). The proposed methodology utilizes continuous integration (CI) to remove wasted time and achieve DV efficiency and productivity. Specifically, the DV CI methodology comprises seven operational steps: continuous testing, continuous coverage monitoring, early coverage, fail flagging, micro-updating, update prioritization, and regression automation. Various software CI/CD tools are suggested for DV CI integration.
Papers by Thinh Ngo