We’re excited to officially release Vitis™ AI 1.4, the powerful machine learning development platform for AI inference acceleration on Xilinx Adaptive Computing platforms. This release provides users with a complete solution stack that supports, for the first time, our latest 7nm Versal™ ACAP platforms and the 16nm-based Kria™ portfolio of adaptive system-on-modules (SOMs). The Versal platform support includes the Versal AI Core Series VCK190 Evaluation Kit and the VCK5000 Versal Development Card for AI Inference.
The VCK190 kit is the first Versal AI Core series evaluation kit, enabling designers to develop solutions using AI and DSP engines capable of delivering over 100X greater compute performance than today's server-class CPUs. The VCK190 kit is an ideal platform supporting high throughput AI inference and signal processing applications from cloud to edge.
Versal AI Core Series VCK190 Evaluation Kit
The VCK5000 Versal development card targets designs requiring high throughput AI inference and signal processing compute performance. The VCK5000 development platform provides an out-of-the-box solution for cloud acceleration and edge computing applications with no prior FPGA hardware expertise required.
VCK5000 Versal Development Card for AI Inference
Together with the Kria KV260 Vision AI Starter Kit, these new AI development platforms provide more possibilities for users to achieve superior AI inference performance, scalability and cloud-to-edge deployment options in AI productization.
Kria KV260 Vision AI Starter Kit
To allow more users to realize the benefits of highly efficient AI inference acceleration, we provide a complete set of AI models that are optimized, retrainable, deployable and free for everyone to download. In Vitis AI 1.4, the diversity of this AI Model Zoo has been increased to include state-of-the-art models for 4D radar detection, image-lidar sensor fusion, surround view 3D detection, depth estimation, super resolution and many more, totaling hundreds of models from different ML frameworks.
Of course, in order to maintain differentiated product features and stay competitive, some users choose to deploy custom neural network models directly. Vitis AI 1.4 also provides a smoother experience for these users by introducing a new deployment API called graph-runner, which makes custom layers plug-and-play across the DPU and CPU.
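As a rough illustration of the graph-runner flow, the sketch below follows the pattern used in the Vitis AI Library's Python examples: deserialize a compiled xmodel, create a graph runner, and run inference. The file name `resnet50.xmodel` and the preprocessing step are placeholders, and the snippet requires a target board (or cloud card) with the Vitis AI runtime installed.

```python
# Minimal graph-runner sketch, assuming a Vitis AI runtime environment
# and a compiled model file; names follow the Vitis AI Library examples.
import xir
import vitis_ai_library

# Deserialize the compiled model (hypothetical file name).
graph = xir.Graph.deserialize("resnet50.xmodel")

# Create a graph runner; subgraphs are dispatched to DPU or CPU as needed.
runner = vitis_ai_library.GraphRunner.create_graph_runner(graph)

# Obtain input/output tensor buffers and fill the inputs
# (application-specific preprocessing would go here).
input_buffers = runner.get_inputs()
output_buffers = runner.get_outputs()

# Launch inference asynchronously and wait for completion.
job_id = runner.execute_async(input_buffers, output_buffers)
runner.wait(job_id)
```

Because the graph runner handles the whole graph, the application no longer needs to partition the model manually or hand-schedule custom CPU layers between DPU subgraphs.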
Xilinx FPGAs, adaptive SoCs and ACAPs provide a hardware architecture that can adapt to the computing needs of different scenarios, along with the flexibility and customizability to fit different algorithm topologies, precisions and even memory hierarchies. This adaptability is the best way to win the battle of AI productization and is also what led to the creation of domain-specific architectures (DSAs).
To demonstrate the high efficiency of DSAs in AI inference acceleration, we submitted a ResNet50 closed-division benchmark to the MLPerf Inference v1.0 results earlier this year. MLPerf Inference v1.0 is the MLCommons organization's machine learning inference performance benchmark suite. The results measure how quickly a trained neural network can process new data for a wide range of applications on a variety of form factors. We achieved a result of 5,921 images per second (img/s) using the Versal VCK5000 PCIe card, outperforming the result achieved by an Nvidia T4 card in the same mode.
Additionally, the performance results of the Versal AI Core series VCK190 prove our ability in AI acceleration. By leveraging both AI Engine (AIE) cores and the DPU design, an optimized memory hierarchy and high-bandwidth IO, the VCK190 achieved a ResNet50-v1.5 result of 1,567 img/s at 7.6ms latency, which is 87% higher performance and 19x lower latency than the Nvidia AGX Xavier. Best of all, with Vitis AI 1.4, VCK5000 and VCK190 acceleration is now generally accessible to every user.
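For readers who want to sanity-check the comparison arithmetic, the short calculation below back-derives the implied AGX Xavier figures from the stated ratios. The Xavier numbers are illustrative back-calculations from this paragraph's claims, not measured results.

```python
# Published VCK190 figures from the text.
vck190_throughput = 1567  # img/s, ResNet50-v1.5
vck190_latency = 7.6      # ms

# Implied Xavier figures, derived from the stated ratios
# ("87% higher performance", "19x lower latency") -- illustrative only.
implied_xavier_throughput = vck190_throughput / 1.87
implied_xavier_latency = vck190_latency * 19

print(round(implied_xavier_throughput))   # ~838 img/s
print(round(implied_xavier_latency, 1))   # ~144.4 ms
```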
We’ve always been committed to providing easier-to-use software tools that allow more users — software developers, data and AI scientists, and embedded developers — to achieve AI deployment easily on our adaptive computing platforms.
With Vitis AI 1.4, the quantizer, optimizer and compiler tools have all extended support for the most popular machine learning frameworks: PyTorch, TensorFlow 1.x, TensorFlow 2.x and Caffe. New APIs and operator features have been introduced to enable deployment of more AI models across multiple devices.
Since we first released Vitis AI 1.0 in January 2020, it has been downloaded more than 100,000 times and adopted by hundreds of customers for AI inference acceleration. It’s been used by AI developers worldwide to create many exciting projects. Now, with the Vitis AI 1.4 release and the many new features and models, developers can do even more on the new Versal AI Core series and Kria SOM platforms.