
Why You Need HLS for Machine Learning Accelerators

Inferencing is one of the most computationally demanding workloads in embedded systems today. As more systems incorporate machine learning, delivering a fast and efficient implementation is critical to your success. While there are numerous inferencing accelerators on the market, achieving the highest level of performance and efficiency requires developing a bespoke accelerator. This session will describe how you can use High-Level Synthesis (HLS) to go from Python to synthesizable RTL and deploy a tailored inferencing accelerator. HLS enables you to explore architectural alternatives, quantization options, and power/performance tradeoffs in a way that is not practical in traditional RTL flows. The session will also compare the characteristics of configurable inferencing IP with those of a bespoke accelerator developed with HLS, demonstrating the benefits of using HLS to build bespoke inferencing accelerators.
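
To illustrate the kind of quantization exploration the abstract alludes to, below is a minimal, tool-agnostic C++ sketch of a quantized multiply-accumulate kernel of the sort one might hand to an HLS tool. The bit widths, scaling scheme, and function names are illustrative assumptions, not the flow presented in the session; a real flow would typically use vendor fixed-point types (such as ac_fixed or ap_fixed) and tool pragmas to pipeline or unroll the loop.

```cpp
// Illustrative sketch only: a fixed-point dot product with a parameterized
// bit width, the kind of knob an HLS flow lets you sweep when trading
// accuracy against area and power. Not taken from the session material.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Quantize a float to a signed integer with FRAC_BITS fractional bits,
// saturating to the range representable in WIDTH bits.
template <int WIDTH, int FRAC_BITS>
int32_t quantize(float x) {
    const int32_t max_v = (1 << (WIDTH - 1)) - 1;
    const int32_t min_v = -(1 << (WIDTH - 1));
    int32_t q = static_cast<int32_t>(std::lround(x * (1 << FRAC_BITS)));
    return std::min(std::max(q, min_v), max_v);
}

// Fixed-point dot product with a wide accumulator so partial sums do not
// overflow. WIDTH and FRAC_BITS are the quantization parameters to explore.
template <int WIDTH, int FRAC_BITS, int N>
float dot(const float (&a)[N], const float (&b)[N]) {
    int64_t acc = 0;
    for (int i = 0; i < N; ++i) {
        acc += static_cast<int64_t>(quantize<WIDTH, FRAC_BITS>(a[i])) *
               quantize<WIDTH, FRAC_BITS>(b[i]);
    }
    // Undo the doubled scaling from multiplying two FRAC_BITS-scaled values.
    return static_cast<float>(acc) / (1 << (2 * FRAC_BITS));
}

int main() {
    const float x[4] = {0.5f, -1.25f, 0.75f, 2.0f};
    const float w[4] = {1.0f, 0.5f, -0.25f, 0.125f};
    // Compare 8-bit and 16-bit quantizations of the same kernel against
    // the floating-point reference.
    std::printf("8-bit  result: %f\n", dot<8, 4>(x, w));
    std::printf("16-bit result: %f\n", dot<16, 8>(x, w));
    std::printf("float  result: %f\n",
                0.5f * 1.0f + (-1.25f) * 0.5f + 0.75f * (-0.25f) + 2.0f * 0.125f);
    return 0;
}
```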
