Vitis™ AI is a comprehensive AI inference development platform for Xilinx devices, boards, and Alveo™ data center acceleration cards. It consists of a rich set of AI models, optimized deep-learning processor unit (DPU) cores, tools, libraries, and example designs for AI on the edge and in the data center. It is designed with high efficiency and ease of use in mind, unleashing the full potential of AI acceleration on Xilinx FPGAs and adaptive SoCs.
The AI Model Zoo is open to all users and offers a rich set of off-the-shelf deep-learning models from the most popular frameworks, such as PyTorch, TensorFlow, TensorFlow 2, and Caffe. It provides optimized, retrainable AI models that enable faster deployment, performance acceleration, and productization on all Xilinx platforms.
With world-leading model compression technology, the AI Optimizer reduces model complexity by 5X to 50X with minimal accuracy impact. Deep compression takes the performance of your AI inference to the next level.
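Much of that compression typically comes from structured pruning, which removes entire channels so the remaining network stays dense and hardware-friendly. As an illustrative sketch only (not the AI Optimizer's actual algorithm), the following NumPy snippet prunes convolution output channels by L1 norm and counts the parameter reduction:

```python
import numpy as np

def prune_channels(weights, keep_ratio):
    """Keep only the output channels with the largest L1 norms.

    weights: conv kernel of shape (out_channels, in_channels, kh, kw).
    Returns the pruned kernel and the indices of the kept channels.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(weights.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(norms)[-n_keep:])  # largest-norm channels
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.25)
print(w.size, pruned.size)  # 18432 4608: 4x fewer parameters
```

A real pruning flow would then fine-tune (retrain) the smaller model to recover accuracy, which is why the Model Zoo ships retrainable models.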
The AI Quantizer reduces computing complexity without losing prediction accuracy by converting 32-bit floating-point weights and activations to fixed-point formats such as INT8. The fixed-point network model requires less memory bandwidth, providing faster speed and higher power efficiency than the floating-point model.
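The arithmetic behind this conversion can be sketched with a generic symmetric per-tensor INT8 scheme; this is for illustration only and is not the quantizer's exact calibration algorithm:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map floats into [-127, 127]."""
    scale = np.abs(x).max() / 127.0        # one float step per integer code
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the integer codes."""
    return q.astype(np.float32) * scale

x = np.array([-1.2, 0.0, 0.31, 0.9], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
print(q)                            # 8-bit integer codes
print(np.max(np.abs(x - x_hat)))    # reconstruction error under one step
```

Because weights and activations travel as 8-bit integers instead of 32-bit floats, memory traffic drops by roughly 4x, which is the source of the bandwidth and power savings described above.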
The AI Compiler maps the AI model to a highly efficient instruction set and data flow. It also performs sophisticated optimizations such as layer fusion, instruction scheduling, and on-chip memory reuse.
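Layer fusion can be illustrated by the standard trick of folding a batch-normalization layer into the preceding convolution's weights and bias, so two layers become one. The sketch below (plain NumPy, not the compiler's actual passes) verifies the fold on a 1x1, single-input-channel conv, where the conv reduces to a per-channel multiply-add:

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm (gamma, beta, mean, var) into the preceding conv,
    so conv(x, w_f) + b_f == bn(conv(x, w) + b)."""
    std = np.sqrt(var + eps)
    w_fused = w * (gamma / std)[:, None, None, None]
    b_fused = (b - mean) * (gamma / std) + beta
    return w_fused, b_fused

# Demo: 1x1 conv, one input channel -> y[c] = w[c] * x + b[c].
rng = np.random.default_rng(1)
w = rng.standard_normal((4, 1, 1, 1))
b = rng.standard_normal(4)
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var = rng.standard_normal(4), rng.random(4) + 0.5

x = 0.7
y_separate = gamma * ((w[:, 0, 0, 0] * x + b) - mean) / np.sqrt(var + 1e-5) + beta
w_f, b_f = fuse_conv_bn(w, b, gamma, beta, mean, var)
y_fused = w_f[:, 0, 0, 0] * x + b_f
print(np.allclose(y_separate, y_fused))  # fused layer matches conv + BN
```

Fusing layers like this removes an intermediate tensor entirely, which is exactly what lets the compiler keep more data in on-chip memory.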
The performance profiler allows programmers to perform in-depth analysis of the efficiency and utilization of the AI inference implementation.
The Vitis AI Library is a set of high-level libraries and APIs built for efficient AI inference with DPU cores. It is built on the Vitis AI Runtime (VART), with unified APIs, and provides easy-to-use interfaces for deploying AI models on Xilinx platforms.
The framework inference flow, WeGO, offers a straightforward path from training to inference by leveraging the native PyTorch and TensorFlow frameworks to deploy operators unsupported by the DPU to the CPU, greatly speeding up model deployment and evaluation on cloud DPUs.
An adaptable Domain-Specific Architecture (DSA) matches fast-evolving AI algorithms for CNN, RNN, and NLP with industry-leading performance on Xilinx Zynq® SoCs, Zynq UltraScale+™ MPSoCs, Alveo data center accelerator cards, and Versal® ACAPs.
With Vitis AI, developers achieve efficient AI computing on edge applications such as IoT, automated driving and ADAS, medical imaging, and video analytics. Vitis AI delivers powerful computing performance with best-in-class algorithms for edge devices while keeping flexibility in deployment with optimal power consumption.
Get hands-on with Vitis AI and choose from Xilinx edge platforms and embedded partners:
Empowered by Vitis AI, Xilinx Alveo™ data center accelerator cards offer industry-leading AI inference performance for CNN, RNN, and NLP workloads. These out-of-the-box, on-premise AI solutions are designed to meet the needs of ultra-low latency, high throughput, and flexibility in modern data centers, providing greater computing capability than CPUs and GPUs at a lower TCO.
Install Vitis AI and set up your Alveo acceleration cards:
Xilinx FPGAs are now broadly accessible to developers everywhere through public cloud service providers such as Amazon AWS and Microsoft Azure. With Vitis AI, developers can easily improve performance with cloud AI acceleration and build their own applications.
With Vitis™ AI, it is now possible to achieve real-time processing with 3D-perception AI algorithms on embedded platforms. Hardware/software co-optimization delivers leading performance for the state-of-the-art PointPillars model on Zynq® UltraScale+™ MPSoC.
Latency determines how quickly an autonomous vehicle can make decisions when traveling at high speed and encountering obstacles. With an innovative domain-specific accelerator and software optimization, Vitis AI empowers autonomous vehicles to process deep-learning algorithms with ultra-low latency and high performance.
With strong scalability and adaptability across low-end to high-end ADAS products, Vitis AI delivers industry-leading performance, supporting popular AI algorithms for object detection, lane detection, and segmentation in front ADAS, in-cabin, and surround-view systems.
Cities are increasingly employing intelligence-based systems at the edge and in the cloud. The massive data generated every day requires a powerful end-to-end AI analytics system to quickly detect and analyze objects, traffic, and facial behavior, adding valuable insight to each frame from edge to cloud.
Learn more about Xilinx in Machine & Computer Vision >
In this webinar, go in depth with the key components of Vitis AI and learn how to achieve adaptable and efficient AI inference on Xilinx hardware platforms.
In this webinar, learn to use Vitis AI to deploy and run your pre-trained DNN models on Xilinx embedded SoC and Alveo acceleration platforms, then get started running the Vitis AI examples on the board.
Learn how to leverage Xilinx MPSoCs with Vitis in order to implement AI Camera designs.
In this webinar, we will show how Vitis and Vitis AI enable developers to accelerate the whole application on Xilinx platforms.
Chris Anderson chats with Quenton Hall of Xilinx about how developers can leverage Zynq FPGAs in edge AI appliances.