Hard-to-detect faults in high-stakes applications such as automotive, data center, and high-performance computing create a critical coverage gap that conventional structural testing alone cannot easily close. Modifying existing DFT architectures to capture these remaining faults typically involves substantial cost, increased silicon area, and potential performance impact. This paper presents a tightly integrated solution that combines functional fault grading with traditional structural testing. This unified strategy strengthens test coverage and enables design verification and DFT teams to work in parallel.
Chip developers face challenges in achieving high defect coverage due to complex design architectures, limited observability and controllability, the presence of analog/mixed-signal and third-party IP blocks, constraints on scan insertion, gaps between functional and structural testing, limitations of ATPG tools, and trade-offs between test time, cost, and coverage.
Functional fault grading helps address the challenges of achieving high defect coverage by identifying faults that are incidentally detected during functional testing, especially in areas where structural ATPG struggles, such as analog/mixed-signal blocks, third-party IP, or logic with limited observability. It complements structural testing by revealing hidden coverage gaps and improving overall fault detection without requiring additional scan logic or test patterns.
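To make the grading idea concrete, the following is a minimal Python sketch, assuming a toy three-gate netlist and a hypothetical list of faults left undetected by structural ATPG; the names (NETLIST, simulate, grade_faults) are illustrative and not tied to any commercial fault simulator. A functional pattern counts a fault as incidentally detected when the faulty circuit's response differs from the good-machine response at an observable output.

```python
# Minimal sketch of functional fault grading on a toy gate-level netlist.
# All names and the netlist itself are illustrative, not from any real tool.

# A gate is (output_net, function, input_nets); gates are evaluated in order.
NETLIST = [
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("n1", "c")),
    ("y",  "XOR", ("n2", "a")),
]
PRIMARY_INPUTS = ("a", "b", "c")
PRIMARY_OUTPUTS = ("y",)

def evaluate(gate_fn, values):
    if gate_fn == "AND": return int(all(values))
    if gate_fn == "OR":  return int(any(values))
    if gate_fn == "XOR": return values[0] ^ values[1]
    raise ValueError(gate_fn)

def simulate(pattern, fault=None):
    """Simulate one input pattern; 'fault' is (net, stuck_value) or None."""
    nets = dict(zip(PRIMARY_INPUTS, pattern))
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]                 # stuck-at on a primary input
    for out, fn, ins in NETLIST:
        nets[out] = evaluate(fn, tuple(nets[i] for i in ins))
        if fault and out == fault[0]:
            nets[out] = fault[1]                  # stuck-at on an internal net
    return tuple(nets[o] for o in PRIMARY_OUTPUTS)

def grade_faults(functional_patterns, fault_list):
    """Return the subset of faults whose effect reaches a primary output."""
    detected = set()
    for pattern in functional_patterns:
        good = simulate(pattern)
        for fault in fault_list - detected:
            if simulate(pattern, fault) != good:  # output mismatch => detected
                detected.add(fault)
    return detected

if __name__ == "__main__":
    # Hypothetical faults left undetected by structural ATPG.
    residual_faults = {("n1", 0), ("n1", 1), ("n2", 0), ("c", 1)}
    # Hypothetical functional patterns captured from RTL/GL simulation.
    functional_patterns = [(1, 1, 0), (0, 0, 1), (1, 0, 1)]
    hits = grade_faults(functional_patterns, set(residual_faults))
    print(f"Incidentally detected {len(hits)}/{len(residual_faults)} residual faults")
```

Production fault simulators apply the same principle at far larger scale (concurrent or parallel-pattern fault simulation rather than the serial loop shown here), but the grading criterion is the same: a fault is credited to the functional patterns only if its effect propagates to an observable point.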
The DFT methodology described in this paper follows a multi-stage flow: an ATPG tool first generates the initial structural patterns, a simulation engine then produces functional patterns (from RTL or gate-level simulation), a fault simulator grades those functional patterns against the faults left undetected by the structural set, and the ATPG tool finally merges the structural and functional coverage data. This unified approach delivers comprehensive coverage while optimizing test time and pattern count.
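The final merge step can be illustrated with a short, hypothetical Python sketch. It assumes each tool can export a per-fault detection status; the Fault type and merge_coverage function are invented for illustration and do not correspond to the API of any specific ATPG or fault-simulation product.

```python
# Sketch of merging structural (ATPG) and functional fault-grading results.
# Data structures and numbers are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fault:
    net: str          # fault site
    stuck_at: int     # 0 or 1

def coverage(detected: set, total: int) -> float:
    """Percentage of the total fault population that is detected."""
    return 100.0 * len(detected) / total if total else 0.0

def merge_coverage(all_faults, atpg_detected, functional_detected):
    """Combine structural and functional detection data into one report."""
    merged = atpg_detected | functional_detected
    return {
        "structural_%": coverage(atpg_detected, len(all_faults)),
        "functional_only_%": coverage(functional_detected - atpg_detected, len(all_faults)),
        "merged_%": coverage(merged, len(all_faults)),
        "remaining": all_faults - merged,   # candidates for targeted DFT/pattern work
    }

if __name__ == "__main__":
    all_faults = {Fault(f"n{i}", v) for i in range(10) for v in (0, 1)}   # 20 faults
    atpg_detected = {f for f in all_faults if f.net not in ("n3", "n7")}  # 16 faults
    functional_detected = {Fault("n3", 0), Fault("n3", 1), Fault("n7", 0)}
    report = merge_coverage(all_faults, atpg_detected, functional_detected)
    print(f"structural: {report['structural_%']:.1f}%  "
          f"merged: {report['merged_%']:.1f}%  "
          f"still undetected: {len(report['remaining'])}")
```

Keeping the merge inside the ATPG environment, as the flow above describes, also lets the remaining undetected faults feed back into the next structural pattern-generation or DFT-insertion pass.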