Metastability from the intermixing of multiple clock signals
Today’s SoC designs employ advanced multi-clocking architectures to meet high-performance and low-power requirements. Since metastability from the intermixing of multiple clock signals is not modeled by digital simulation, we must perform exhaustive, automated clock-domain crossing (CDC) analysis to identify and correct problem areas, avoiding unpredictable behavior when the chip samples come back from the fab. Furthermore, given the breadth of end customers’ requirements, SoCs must support numerous configurations or “modes” for system start-up and configuration, BIST, end-customer use cases, and interface combinations. Satisfying the need for extensive CDC analysis while supporting the high operational flexibility of the end product poses a significant design and verification challenge.
In addition, while many SoC operational modes share a baseline clock and data path configuration, commonly there are a significant number of modes whose clock and data path configurations differ substantially from the baseline and/or from each other. This means register paths that are “CDC safe” in a given mode can be violations when the SoC is running in other legal modes. From an EDA methodology perspective, there is another layer of complexity to address: because the SoCs are so large (nearly 1 billion gates), we must perform the CDC analysis and aggregate the results in a hierarchical manner.
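To make the mode dependence concrete, the following is a minimal sketch of why the same register path can be CDC-safe in one mode and a violation in another. The mode names, clock names, and domain-to-clock mapping are illustrative assumptions, not taken from any particular SoC or EDA tool:

```python
# Hypothetical per-mode clock configuration: which clock drives each domain.
# In "functional" mode both domains run on the same PLL output, so the path
# between them is synchronous; in "bist" mode they run on different clocks.
MODE_CLOCKS = {
    "functional": {"core": "pll_600", "periph": "pll_600"},
    "bist":       {"core": "pll_600", "periph": "osc_100"},
}

def crossing_is_async(mode, src_domain, dst_domain):
    """A register path is an asynchronous CDC crossing only when the source
    and destination domains run on different clocks in the given mode."""
    clocks = MODE_CLOCKS[mode]
    return clocks[src_domain] != clocks[dst_domain]

# The same path changes classification depending on the active mode.
path = ("core", "periph")
for mode in MODE_CLOCKS:
    status = "async crossing" if crossing_is_async(mode, *path) else "synchronous, CDC-safe"
    print(f"{mode}: {status}")
```

This is why a single-mode analysis is insufficient: a path proven safe under the baseline clock configuration must be re-evaluated under every legal mode.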
Before the multi-mode analysis flow described above was adopted, users would manually set up independent CDC analysis runs for each configuration corresponding to each DUT mode. Setting up individual runs was a tedious and error-prone process, and corner-case configurations were easily missed. Additionally, because each run was independent of the others, designers had to review the separate results for each configuration, which multiplied the review time and effort. Although there was a high degree of overlap in the analysis between design configurations, designers would review the overlapping results multiple times. Finally, trying to manually merge the massive amount of results from such repetitive, individual runs was also a tedious, error-prone process.
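The merging problem can be sketched in a few lines: if each independent run emits its own list of violations, the overlap between modes is only visible after deduplicating across runs. This is a simplified illustration with invented violation IDs and mode names, not the format of any real CDC report:

```python
# Hypothetical merge of per-mode CDC reports: collapse duplicate violations
# so each unique violation is reviewed once, annotated with every mode in
# which it appears.

def merge_reports(reports):
    """reports: dict mapping mode name -> set of violation IDs.
    Returns dict mapping violation ID -> sorted list of modes."""
    merged = {}
    for mode, violations in reports.items():
        for v in violations:
            merged.setdefault(v, []).append(mode)
    return {v: sorted(modes) for v, modes in merged.items()}

# Example: cdc_002 is flagged in both runs but should be reviewed once.
reports = {
    "mission": {"cdc_001", "cdc_002"},
    "bist":    {"cdc_002", "cdc_003"},
}
merged = merge_reports(reports)
for violation, modes in sorted(merged.items()):
    print(violation, "->", modes)
```

Even this toy version shows the payoff: the review effort scales with the number of unique violations rather than with the number of modes, which is the core motivation for an automated multi-mode flow.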