White paper

Paradigms of large language model applications in functional verification

Four practices to ensure the quality of LLM outputs


This paper presents a comprehensive literature review of applying large language models (LLMs) to multiple aspects of functional verification. Despite the promising advancements offered by this new technology, it is essential to be aware of the inherent limitations of LLMs, especially hallucination, which may lead to incorrect predictions. To ensure the quality of LLM outputs, four safeguarding paradigms are recommended. Finally, the paper summarizes the observed trend of LLM development and expresses optimism about their broader applications in verification.

Paradigms of LLMs for functional verification

Language models are arguably the most essential type of machine learning (ML) model used for functional verification. This process involves handling numerous forms of textual data, including specifications, source code, test plans, testbenches, logs, and reports. Most of this textual content comprises natural language, controlled natural language, or programming languages. Therefore, effective use of language models is critical for the application of AI/ML in functional verification.

Despite the promising advancements offered by this new technology, it is essential to be aware of the inherent limitations of LLMs, which can lead to incorrect predictions. In particular, we caution against using raw outputs of LLMs directly in verification.

To counter these limitations and deliver on the technology's promise, the authors recommend four safeguarding paradigms to ensure the quality of LLM outputs (a brief code sketch illustrating the first two follows the list):

  1. Quality gate/guardrail
  2. Self-check feedback loop
  3. External agent
  4. Chain-of-thought
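
To make the first two paradigms concrete, the following is a minimal Python sketch of a quality gate combined with a self-check feedback loop around an LLM that generates SystemVerilog code. The llm_generate and llm_review helpers are hypothetical placeholders for whatever LLM interface is actually used, and Verilator's lint mode is only one possible checker to serve as the gate; the sketch illustrates the control flow, not a specific tool or API.

    # Sketch: quality gate + self-check feedback loop around LLM-generated SystemVerilog.
    # llm_generate() and llm_review() are hypothetical placeholders for an LLM API;
    # verilator --lint-only is used here as an example quality gate.
    import subprocess
    import tempfile

    def llm_generate(prompt: str) -> str:
        """Hypothetical LLM call that returns SystemVerilog code for the prompt."""
        raise NotImplementedError("plug in your LLM API here")

    def llm_review(code: str, error_log: str) -> str:
        """Hypothetical self-check call: ask the LLM to repair its own output
        given the tool errors it produced."""
        raise NotImplementedError("plug in your LLM API here")

    def quality_gate(code: str) -> tuple[bool, str]:
        """Quality gate: reject output that does not even lint cleanly."""
        with tempfile.NamedTemporaryFile(suffix=".sv", mode="w", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            ["verilator", "--lint-only", path],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stderr

    def generate_with_safeguards(prompt: str, max_rounds: int = 3) -> str | None:
        """Self-check feedback loop: feed lint errors back to the LLM until the
        quality gate passes or the retry budget is exhausted."""
        code = llm_generate(prompt)
        for _ in range(max_rounds):
            ok, errors = quality_gate(code)
            if ok:
                return code   # gate passed; still subject to review before use
            code = llm_review(code, errors)
        return None           # never hand raw, unchecked LLM output downstream

The same loop structure extends naturally to the other two paradigms: an external agent (for example, a simulator or formal tool) can replace the linter as the checking step, and chain-of-thought prompting can be applied inside llm_generate and llm_review to make the model's reasoning auditable.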

