White paper

Improving verification predictability and efficiency using big data

Questa Verification IQ: enabling new technologies.

Big data is a term that has been around for decades. It originally described data sets too large for conventional software tools to capture, manage, and process in a reasonable amount of time. The only constant in big data's size is that it has been a moving target, driven by improvements in parallel processing power and cheaper storage. Today most of the industry uses the 3V model, which frames the challenges and opportunities of big data along three dimensions: volume, velocity, and variety. More recently the scope has broadened to include machine learning and the analysis of digital footprints. The list of applications is endless, but the process is the same: capture, process, and analyze. Why shouldn't this technology help improve your verification process efficiency and predict your next chip sign-off?

Today's verification environments must be collaborative: device sizes, geographically dispersed teams, and time-to-market pressure demand the efficient use of every cycle and careful management of hardware, software, and personnel resources.

This paper will define the typical verification environment and the data it often leaves uncaptured over the duration of a project. It will show how the capture-process-analyze cycle can be applied to improve the predictability and efficiency of the whole verification process. This requires a flexible infrastructure that can extract data from the multiple systems that make up a typical verification flow, and a central repository that stores the data in a common form so it can be kept clean and relevant, not only for the duration of the project but also into the future, enabling comparisons and predictions across new projects.
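To make the idea of a common repository format concrete, the sketch below (in Python, with hypothetical field names that are not Questa Verification IQ's actual schema) shows one way results from different tools could be normalized into a single record type before storage.

# Illustrative sketch: normalizing results from different verification tools
# into one common record before storing them in a central repository.
# The field names are assumptions for this example, not a real product schema.
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class VerificationRecord:
    project: str          # e.g. "soc_rev_b"
    source_tool: str      # simulation, formal, emulation, lint, ...
    run_id: str           # unique identifier for the regression or session
    timestamp: datetime   # when the result was produced
    metric: str           # e.g. "functional_coverage", "bug_closed", "license_checkout"
    value: float          # numeric value of the metric
    metadata: dict        # free-form tags: user, test name, code revision, ...

record = VerificationRecord(
    project="soc_rev_b",
    source_tool="simulation",
    run_id="nightly_2024_02_05",
    timestamp=datetime(2024, 2, 5, 3, 15),
    metric="functional_coverage",
    value=79.0,
    metadata={"testbench": "top_tb", "revision": "a1b2c3d"},
)

# Stored as JSON so any downstream analysis can consume the data uniformly.
print(json.dumps(asdict(record), default=str, indent=2))

Keeping every tool's output in one shape like this is what allows the cross-analytics described later to be run without per-tool special cases.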

This paper will explain how new web-ready technologies can be applied to the hardware development flow to provide a plug-and-play infrastructure for collaboration. It will also highlight some of the analysis and insights made possible by combining common coverage metrics with data that is normally lost, and by examining the inter-relationships between those metrics.

The ability to see gathered metrics over time can provide great insight into the process. Historical coverage data trended over time can, on its own, indicate how much more time is needed to complete sign-off. Plotting several of these metrics together on the same graph also reveals information that is otherwise lost.
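As a concrete illustration, the following sketch fits a straight line to hypothetical weekly coverage results and extrapolates when a sign-off target would be reached; the dates, percentages, and 95% threshold are assumptions for the example, not data from a real project.

# Minimal sketch: extrapolate a coverage trend to estimate a sign-off date.
# The (date, coverage %) pairs are hypothetical nightly-regression exports.
from datetime import date, timedelta
import numpy as np

history = [
    (date(2024, 1, 8), 62.0),
    (date(2024, 1, 15), 68.5),
    (date(2024, 1, 22), 73.0),
    (date(2024, 1, 29), 76.5),
    (date(2024, 2, 5), 79.0),
]
target = 95.0  # sign-off threshold assumed for this example

# Fit a straight line: coverage = slope * days_since_start + intercept.
start = history[0][0]
days = np.array([(d - start).days for d, _ in history], dtype=float)
cov = np.array([c for _, c in history])
slope, intercept = np.polyfit(days, cov, 1)

if slope <= 0:
    print("Coverage is not trending upward; no estimate possible.")
else:
    days_to_target = (target - intercept) / slope
    eta = start + timedelta(days=round(days_to_target))
    print(f"At the current rate ({slope:.2f}%/day), "
          f"{target}% coverage is reached around {eta}.")

A simple linear fit is only a first approximation; coverage closure typically slows as the remaining bins get harder to hit, which is exactly the kind of behavior that richer historical data can expose.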

This paper will also show, with examples, other insights that can be gained from cross-analytics between bug closure rates and source code churn, which, when combined with coverage metrics, help predict progress toward sign-off. It will show how historical data can be used to spot recurring patterns of events, and how recording a little more metadata within existing systems lets cross-analytics answer questions such as how effective a new methodology or tool has been on past projects. The same data also yields further metrics, such as mean time between bug fixes, mean time between regression failures, the last time a test passed or failed, and tool license use by specific users, allowing us to answer and predict questions like "Will we have enough formal licenses for our peak usage on the next project?"
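As an illustration of such derived metrics, the sketch below computes mean time between bug fixes, mean time between regression failures, and the last pass/fail time of a test from a handful of hypothetical records; the field names and values are assumptions, not output from any particular tool.

# Minimal sketch of derived metrics from regression and bug-tracker exports.
# All records below are hypothetical; real data would come from the project's
# regression database and issue tracker.
from datetime import datetime
from statistics import mean

bug_fixes = [datetime(2024, 2, 1), datetime(2024, 2, 4), datetime(2024, 2, 9)]
regression_runs = [
    {"test": "axi_burst_rd", "time": datetime(2024, 2, 1, 2), "passed": True},
    {"test": "axi_burst_rd", "time": datetime(2024, 2, 2, 2), "passed": False},
    {"test": "axi_burst_rd", "time": datetime(2024, 2, 3, 2), "passed": True},
]

def mean_gap_hours(times):
    """Mean time between consecutive events, in hours (None if < 2 events)."""
    times = sorted(times)
    gaps = [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]
    return mean(gaps) if gaps else None

failures = [r["time"] for r in regression_runs if not r["passed"]]
passes = [r["time"] for r in regression_runs if r["passed"]]

print("Mean time between bug fixes (h):", mean_gap_hours(bug_fixes))
print("Mean time between regression failures (h):", mean_gap_hours(failures))
print("Last pass:", max(passes, default=None), "| Last fail:", max(failures, default=None))

Once results from every tool land in the common repository, the same pattern extends naturally to other questions, such as trending license checkouts per user to forecast peak formal-tool demand on the next project.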
