DOI: 10.1145/3649476.3660378
Research Article | Open Access

Assert-O: Context-based Assertion Optimization using LLMs

Published: 12 June 2024

Abstract

Modern computing relies on Systems-on-Chip (SoCs) that integrate IP cores to implement complex functions. This integration, however, introduces vulnerabilities and thus necessitates rigorous hardware security validation, whose effectiveness depends on the security properties embedded in the SoC. Recent studies explore large language models (LLMs) for generating security properties, but the generated properties are not necessarily optimized for validation, and manual intervention is still required to reduce their number. Security validation methods that rely on human expertise do not scale: they are time-intensive and prone to human error. To address these issues, we introduce Assert-O, an automated framework that derives security properties from SoC documentation and optimizes the generated set. Assert-O also ranks the properties by the security vulnerabilities they are associated with, thereby streamlining the validation process. Our method leverages hardware documentation to create an initial set of security properties, which are subsequently consolidated and prioritized by criticality, expediting the validation procedure. Assert-O is trained on the documentation of six IPs from OpenTitan and evaluated on five other OpenTitan modules, for which it generated 183 properties that optimization reduced to 138. These properties were then ranked by their impact on the security of the overall system.
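The abstract describes a three-stage flow: generate candidate properties from IP documentation, consolidate near-duplicates, and rank the remaining properties by security criticality. The Python sketch below is only an illustration of how such a flow could be wired together; the llm callable, the prompt wording, the string-similarity threshold, and the 0-10 scoring rubric are assumptions made here for clarity, not the authors' implementation.

from difflib import SequenceMatcher

def generate_properties(llm, ip_doc: str) -> list[str]:
    # Stage 1: ask the model for one SystemVerilog assertion per line.
    # The prompt wording is illustrative, not taken from the paper.
    prompt = ("From the following IP documentation, list security properties "
              "as SystemVerilog assertions, one per line:\n" + ip_doc)
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]

def consolidate(properties: list[str], threshold: float = 0.9) -> list[str]:
    # Stage 2: drop properties that are near-duplicates of ones already kept.
    kept: list[str] = []
    for prop in properties:
        if all(SequenceMatcher(None, prop, k).ratio() < threshold for k in kept):
            kept.append(prop)
    return kept

def rank(llm, properties: list[str]) -> list[tuple[float, str]]:
    # Stage 3: score each property by security impact (assumed 0-10 rubric,
    # in the spirit of CVSS-style severity scoring) and sort descending.
    scored = []
    for prop in properties:
        reply = llm("Rate the security criticality of this assertion "
                    "on a 0-10 scale, answering with a number only: " + prop)
        try:
            score = float(reply.strip())
        except ValueError:
            score = 0.0  # unparseable replies sink to the bottom
        scored.append((score, prop))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

In the paper's setting, llm would be backed by the model trained on the six OpenTitan IP documents; it is deliberately left as an abstract callable here so the pipeline shape is visible without committing to any particular API.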



Published In

GLSVLSI '24: Proceedings of the Great Lakes Symposium on VLSI 2024
June 2024, 797 pages
ISBN: 9798400706059
DOI: 10.1145/3649476
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Hardware Security
  2. Hardware Verification
  3. Large Language Models

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Technology Innovation Institute (TII)

Conference

GLSVLSI '24: Great Lakes Symposium on VLSI 2024
June 12-14, 2024
Clearwater, FL, USA

Acceptance Rates

Overall Acceptance Rate: 312 of 1,156 submissions, 27%
