White Paper

2020 Wilson Research Group functional verification study: FPGA functional verification trend report

This report presents the results from the 2020 Wilson Research Group Functional Verification Study focused on the Field-Programmable Gate Array (FPGA) segment. The findings from this study provide invaluable insight into the state of today’s FPGA market in terms of both design and verification trends.

Introduction

This report presents field-programmable gate array (FPGA) functional verification trends based on the 2020 Wilson Research Group functional verification study. While multiple studies focused on general IC/ASIC functional verification trends have been published,[1, 2, 3, 4, 5] to our knowledge our 2018 study was the first to focus specifically on FPGA functional verification trends.[6] Our 2020 study builds on our previous studies by providing the latest industry trends.

A. The global FPGA semiconductor market

The 2019 global semiconductor market was valued at $385.4 billion after experiencing a 15 percent decline due to a 32 percent drop in the memory IC market, which is expected to recover in 2021.[7] The FPGA portion of the semiconductor market is valued at about $5 billion.[8] The FPGA semiconductor market is expected to reach a value of $7.5 billion by 2030, growing at a compounded annual growth rate (CAGR) of 4.4 percent during this forecast period. The growth in this market is being driven by new and expanding end-user applications related to data center computing, networking, and storage, as well as communication.
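As a rough consistency check (not part of the study data itself, and assuming 2020 as the base year with the approximate $5 billion starting value cited above), the compound growth relation behind the CAGR figure is:

\[ V_{2030} \approx V_{2020}\,(1 + r)^{10} = \$5.0\,\text{B} \times (1.044)^{10} \approx \$7.7\,\text{B} \]

which is in line with the cited $7.5 billion projection, given that the $5 billion starting value is only approximate.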

Historically, FPGAs have offered two primary advantages over ASICs. First, due to their low non-recurring engineering (NRE) costs,[9] FPGAs are generally more cost effective than IC/ASICs for low-volume production. Second, FPGAs’ rapid prototyping capabilities and flexibility can reduce the development schedule since a majority of the verification and validation cycles have traditionally been performed in the lab. More recently, FPGAs have offered performance advantages for certain accelerated applications by exploiting hardware parallelism (e.g., AI neural networks).

The IC/ASIC market in the mid- to late-2000s timeframe underwent growing pains to address increased verification complexity. Similarly, we find today’s FPGA market is being forced to address growing verification complexity. With the increased capacity and capability of today’s complex FPGAs and the emergence of high-performance SoC programmable FPGAs (e.g., Xilinx Zynq® UltraScale+, Intel® Stratix®, and Microsemi SmartFusion®2), traditional lab-based approaches to FPGA verification and validation are becoming less effective. In this report, we quantify the ineffectiveness of today’s FPGA verification processes in terms of non-trivial bug escapes into production.

B. Study background

The study results presented in this report are a continuation of a series of industry studies on functional verification. This series includes the previously published 2012, 2014, 2016, and 2018 Wilson Research Group Functional Verification Studies.[3, 4, 5, 6] Each of these studies was modeled after the 2002 and 2004 Collett International Research, Inc. studies[1, 2] and focused on the IC/ASIC market. While we began studying the FPGA market in 2012, we waited until we had sufficient multi-year data points to identify verification trends before formally publishing the findings.

For the purpose of our study, a randomized sampling frame was constructed from multiple acquired industry lists. This enabled us to cover all regions of the world and all relevant electronics industry market segments. It is important to note that we did not include our own account team’s customer list in the sampling frame. This was done in a deliberate attempt to prevent vendor bias in the final results. While we architected the study questions and then compiled and analyzed the final results, we commissioned Wilson Research Group to execute the study. After cleaning the data to remove inconsistent, incomplete, or random responses, the final sample size consisted of 1492 eligible participants (i.e., n = 1492).
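As an illustrative aside (this calculation is not part of the study text), the worst-case sampling margin of error for n = 1492 at a 95 percent confidence level, assuming simple random sampling, would be:

\[ \text{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.25}{1492}} \approx 2.5\% \]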
