Editor’s Note: This content is contributed by Manuel Uhm, Director, Silicon Marketing at Xilinx
I get asked this question a lot in my interactions with customers. Okay, maybe not phrased exactly like that, but more or less along the same lines: “Why should I move to Versal™ ACAP, and is now the right time to do so?” It’s a great question, and the answer is easy… “It depends.” Okay, maybe not so easy after all! To be fair, there are many considerations: design requirements and resources, how much of the design can leverage the tremendous amount of hard IP in Versal ACAPs, library and soft IP availability, silicon availability, production timelines, and so on. So the answer really does vary depending on the factors at play. We’ll cover a number of these topics in future blog posts, but today I wanted to focus a bit more on the question of “why Versal” and provide a specific customer example.
Recognize the Value of All the Hard IP
When discussing the why of Versal ACAP, it’s important to recognize the value of all the hard IP. This includes commonly used infrastructure such as memory controllers, PCIe®, multirate Ethernet, and the programmable network on chip (NoC), which reduces the need for routing in the adaptable engines, or programmable logic. Some Versal series also include AI Engines (a new type of vector processor well-suited for both advanced signal processing and ML algorithms), high-bandwidth memory, direct RF, and high-speed cryptography. One of the main reasons Xilinx hardened all that IP was the recognition, years ago, that Moore’s Law was coming to an end: smaller transistors alone could no longer deliver the performance increase and power decrease that customers have come to expect at every new process node. The diagram below illustrates the value of the hard IP in the Versal AI Core series, which adds up to significant savings in LUTs and power compared to our extremely successful 16nm UltraScale+™ products. In the Versal AI Core VC1902 device, this amounts to a potential savings of 3.6M LUTs! Of course, most designs won’t be able to take advantage of all the hard IP, but a substantial decrease in LUTs and power should still be realized when a comparable design is ported to a Versal ACAP, with the additional benefit of faster place-and-route times and more design iterations in a day.
<Value of Integrated Shell & Hard IP in Versal AI Core Series>
Advanced Signal Processing with AI Engines
Today I’d like to focus a bit more on the AI Engines. The AI Engines are an array of tens to hundreds (up to 400 in the largest AI Core series device, the VC1902) of small VLIW SIMD vector processors optimized for mathematical functions such as linear algebra and matrix math. When people hear the term “AI Engine,” they naturally think of artificial intelligence. However, these functions form the basic building blocks of many advanced signal processing algorithms, such as beamforming and massive MIMO, as well as machine learning inference algorithms, such as CNNs for image classification. For this reason, the AI Engines support both complex and real data types, for signal processing and inference, respectively. One of the key target applications for the AI Engines is signal processing for 5G wireless systems. The AI Engines provide 5X more compute density and 50% lower power for the beamforming processing of a 64-antenna system supporting 200MHz of instantaneous bandwidth. This is what piqued the interest of Keysight, a leading provider of 5G test and measurement equipment. Development productivity was also a key consideration for them. They certainly have experts capable of programming hardware in the adaptable engines, but with AI Engine compile times of minutes compared to hardware place-and-route times of hours, they were able to be far more productive in the design and development of 5G wireless algorithms, achieving several more design iterations per week. And as you can see from the block diagram of their implementation (below), the system-level value of Versal ACAP was critical: they were able to combine the power and productivity of the AI Engines with the flexibility of the adaptable engines to create a compelling EVM demo.
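To make the “complex data types for signal processing” point concrete, here is a minimal sketch of the kind of inner-product math at the heart of beamforming: each antenna’s sample is multiplied by a complex weight and the products are accumulated. This is plain scalar C++ written for illustration only, not the actual AI Engine intrinsics or graph API; on an AI Engine, this complex multiply-accumulate loop is the operation the VLIW SIMD vector datapath performs many lanes at a time.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Illustrative only: combine samples from N antenna elements into one
// beamformed output using a complex multiply-accumulate (MAC).
// weights[i] is the complex beamforming weight for antenna i,
// samples[i] is the complex (I/Q) sample received on antenna i.
std::complex<float> beamform(const std::vector<std::complex<float>>& weights,
                             const std::vector<std::complex<float>>& samples) {
    std::complex<float> acc{0.0f, 0.0f};
    const std::size_t n = std::min(weights.size(), samples.size());
    for (std::size_t i = 0; i < n; ++i) {
        acc += weights[i] * samples[i];  // one complex MAC per antenna element
    }
    return acc;
}
```

In a real vectorized implementation, the weights and samples would be processed in SIMD lanes rather than one element per iteration, which is where the compute-density advantage of a dedicated vector processor comes from.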
<EVM measures Tx/Rx performance with IQ constellation analysis>
This is just one example of a customer seeing tremendous benefits from embracing the Versal ACAP architecture, and the AI Engines in particular. Many more customers are programming the AI Engines on the VC1902 for applications ranging from 5G, data center inference, and prototyping for edge inference to radar, cable access, and beyond.
If you’re interested in learning more about the AI Engines in the Versal AI Core series, please read this white paper and watch the Keysight video to hear more about the value of AI Engines directly from them!
Original Date: 08-10-2020