
Adaptable and Real-Time
AI Inference Acceleration

Overview

Optimal Artificial Intelligence Inference from Edge to Cloud

Vitis™ AI is a comprehensive AI inference development platform for Xilinx devices, boards, and Alveo™ data center acceleration cards. It consists of a rich set of AI models, optimized deep-learning processor unit (DPU) cores, tools, libraries, and example designs for AI at the edge and in the data center. Designed with high efficiency and ease of use in mind, it unleashes the full potential of AI acceleration on Xilinx FPGAs and adaptive SoCs.

Vitis AI Deployment Features

How your development works with Vitis AI

  • Supports mainstream frameworks and the latest models for diverse deep learning tasks, including CNNs, RNNs, and NLP
  • Provides a comprehensive set of pre-optimized AI models that are ready to deploy on Xilinx devices. You can find the closest model and start re-training for your applications
  • Provides a powerful open-source AI quantizer that supports pruned and unpruned model quantization, calibration, and fine-tuning.
  • Provides a user-friendly compilation and deployment flow to meet customer-defined models and operators
  • Provides layer-by-layer analysis through the AI profiler to help identify performance bottlenecks
  • Offers the AI library with open-source high-level C++ and Python APIs for maximum portability from edge to cloud
  • Provides a smooth path through the Whole Graph Optimizer (WeGO) to deploy PyTorch and TensorFlow models on cloud DPUs by integrating Vitis AI within the frameworks
  • Offers efficient, scalable, and customizable DPU IP cores that meet different needs for throughput, latency, power, and lower precision

Explore All the Possibilities with Vitis AI

Vitis AI Model Zoo


The AI Model Zoo is open to all users and provides a rich set of off-the-shelf deep learning models from the most popular frameworks, such as PyTorch, TensorFlow, TensorFlow 2, and Caffe. It offers optimized and retrainable AI models that enable faster deployment, performance acceleration, and productization on all Xilinx platforms.
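For example, a common starting point is to take a float model downloaded from the Model Zoo and fine-tune it on your own data before quantization. The sketch below is generic PyTorch with hypothetical file names and a stand-in dataset; it is not a specific Model Zoo API.

```python
# Generic fine-tuning sketch: file names and the dataset are placeholders.
import torch
import torchvision

# Hypothetical float checkpoint downloaded from the Model Zoo.
model = torchvision.models.resnet50()
model.load_state_dict(torch.load("float/resnet50_pretrained.pth", map_location="cpu"))

# Replace the classifier head for your own task, then retrain as usual.
model.fc = torch.nn.Linear(model.fc.in_features, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

# Stand-in for a real DataLoader over your own dataset.
my_dataloader = [(torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,)))]

model.train()
for images, labels in my_dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```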


AI Optimizer

With world-leading model compression technology, the AI optimizer reduces model complexity by 5X to 50X with minimal accuracy impact. Deep compression takes the performance of your AI inference to the next level.
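The AI Optimizer itself is a separately licensed tool, so the sketch below only illustrates the underlying idea of structured (channel) pruning using plain PyTorch utilities; it is not the Vitis AI Optimizer API.

```python
# Conceptual pruning sketch using stock PyTorch; not the AI Optimizer API.
import torch
import torchvision
import torch.nn.utils.prune as prune

model = torchvision.models.resnet18()  # in practice, your trained float model

# Prune 30% of output channels in every convolution by L2 norm, then make the
# pruning permanent. Real flows iterate pruning with fine-tuning to recover
# accuracy before passing the compressed model to the AI quantizer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.ln_structured(module, name="weight", amount=0.3, n=2, dim=0)
        prune.remove(module, "weight")
```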

Artificial Intelligence Optimizer Block Diagram

Artificial Intelligence Quantizer Block Diagram

AI Quantizer

The AI quantizer reduces computing complexity without losing prediction accuracy by converting 32-bit floating-point weights and activations to fixed-point formats such as INT8. The fixed-point network model requires less memory bandwidth, providing higher speed and power efficiency than the floating-point model.
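As an illustration, post-training quantization with the Vitis AI PyTorch quantizer (pytorch_nndct) follows a calibrate-then-export pattern roughly like the sketch below; the model and calibration data are placeholders.

```python
# Minimal post-training quantization sketch; the model and calibration data
# are placeholders for a trained float model and a real calibration set.
import torch
import torchvision
from pytorch_nndct.apis import torch_quantizer

model = torchvision.models.resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Calibration pass: 'calib' mode observes activations to choose INT8 scales.
quantizer = torch_quantizer("calib", model, (dummy_input,))
quant_model = quantizer.quant_model
for _ in range(8):                               # stand-in for a calibration dataset
    quant_model(torch.randn(1, 3, 224, 224))
quantizer.export_quant_config()

# Deployment pass: 'test' mode checks accuracy and exports the quantized
# xmodel that the AI compiler consumes in the next step.
quantizer = torch_quantizer("test", model, (dummy_input,))
quantizer.quant_model(dummy_input)
quantizer.export_xmodel(deploy_check=False)
```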


AI Compiler

The AI compiler maps the AI model to a highly efficient instruction set and data flow. It also performs sophisticated optimizations such as layer fusion and instruction scheduling, and reuses on-chip memory as much as possible.
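A minimal way to drive the compiler from Python is sketched below. The vai_c_xir options are the commonly documented ones; the model names and the arch.json path are placeholders that depend on the target DPU and board.

```python
# Sketch of invoking the Vitis AI compiler on a quantized xmodel; paths and
# names are placeholders for a specific DPU target and board.
import subprocess

subprocess.run([
    "vai_c_xir",
    "-x", "quantize_result/ResNet_int.xmodel",                        # quantizer output
    "-a", "/opt/vitis_ai/compiler/arch/DPUCZDX8G/ZCU104/arch.json",   # target DPU arch (placeholder)
    "-o", "compiled",                                                 # output directory
    "-n", "resnet50",                                                 # compiled model name
], check=True)
```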

Artificial Intelligence Compiler Block Diagram

AI Profiler

The performance profiler allows programmers to perform in-depth analysis of the efficiency and utilization of the AI inference implementation.
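In its simplest form, the profiler's vaitrace front end wraps the command that runs your inference application, as sketched below; the script name is a placeholder and the available options vary by Vitis AI release.

```python
# Sketch of launching vaitrace around an inference script; the script name
# is a placeholder, and options differ between Vitis AI releases.
import subprocess

subprocess.run(["vaitrace", "python3", "resnet50_inference.py"], check=True)
```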


AI Library

The Vitis AI Library is a set of high-level libraries and APIs built for efficient AI inference with DPU cores. It is built on the Vitis AI Runtime (VART) with unified APIs and provides easy-to-use interfaces for AI model deployment on Xilinx platforms.
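The sketch below shows the underlying VART Python flow the library builds on: deserialize a compiled xmodel, create a runner for the DPU subgraph, and execute asynchronously. The file name, dtypes, and shapes are placeholders and depend on the compiled model.

```python
# Minimal VART inference sketch; the xmodel name and buffer dtypes are
# placeholders and depend on how the model was compiled.
import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("resnet50.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = [s for s in subgraphs
                if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]

runner = vart.Runner.create_runner(dpu_subgraph, "run")
in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# Preprocessed input goes here; dtype may be int8 or float32 depending on the model.
input_data = np.zeros(tuple(in_tensor.dims), dtype=np.float32)
output_data = np.zeros(tuple(out_tensor.dims), dtype=np.float32)

job_id = runner.execute_async([input_data], [output_data])
runner.wait(job_id)
print(output_data.shape)
```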

Artificial Intelligence Library Block Diagram

Whole Graph Optimizer (WeGO) Block Diagram

Whole Graph Optimizer (WeGO)

WeGO, the in-framework inference flow, offers a straightforward path from training to inference: it leverages the native PyTorch and TensorFlow frameworks to dispatch DPU-unsupported operators to the CPU, greatly speeding up model deployment and evaluation on cloud DPUs.


Deep-Learning Processor Unit (DPU)

An adaptable domain-specific architecture (DSA) that matches fast-evolving CNN, RNN, and NLP algorithms with industry-leading performance on Xilinx Zynq® SoCs, Zynq UltraScale+™ MPSoCs, Alveo data center accelerator cards, and Versal® ACAPs.

Xilinx DPU Block Diagram
Deployment
Kria board

Edge Deployment

With Vitis AI, developers achieve efficient AI computing on edge applications such as IoT, automated driving and ADAS, medical imaging, and video analytics. Vitis AI delivers powerful computing performance with best-in-class algorithms for edge devices while keeping flexibility in deployment with optimal power consumption.

Get hands-on with Vitis AI and choose from Xilinx edge platforms and embedded partners:


Alveo card

On-Premise Deployment

Empowered by Vitis AI, Xilinx Alveo™ data center accelerator cards offer industry-leading AI inference performance across CNN, RNN, and NLP workloads. These out-of-the-box, on-premise AI solutions are designed to meet the needs of ultra-low latency, higher throughput, and flexibility in modern data centers, providing higher computing capability than CPUs and GPUs at lower TCO.

Install Vitis AI and set up your Alveo acceleration cards:



Cloud Deployment

Xilinx FPGAs are now broadly accessible to all developers through public cloud service providers such as Amazon AWS and Microsoft Azure. With Vitis AI, developers can easily improve performance with cloud AI acceleration and build their own applications.

Documentation

Vitis AI Documentation

Solutions

Empowering Autonomous Driving and ADAS Technologies

Real-Time Multi-Class 3D Object Detection

Real-Time Multi-Class 3D Object Detection

With Vitis™ AI, it is now possible to achieve real-time processing with 3D perception AI algorithms on embedded platforms. Hardware and software co-optimization delivers leading performance for the state-of-the-art PointPillars model on Zynq® UltraScale+™ MPSoC.

View Video  >


Ultra-Low Latency Application for Autonomous Driving

Latency determines how quickly an autonomous vehicle can react when running at high speed and encountering obstacles. With an innovative domain-specific accelerator and software optimization, Vitis AI empowers autonomous driving vehicles to process deep learning algorithms with ultra-low latency and higher performance.

Learn More about Xilinx in AD  >

Ultra-Low Latency Application for Self-Driving Cars

Object Detection & Segmentation

Object Detection & Segmentation

With strong scalability and adaptability to fit across low-end to high-end ADAS products, Vitis AI delivers industry-leading performance supporting popular AI algorithms for object detection, lane detection, and segmentation in front ADAS, in-cabin, and surround-view systems.

Learn More about Xilinx in ADAS  >


Making Cities Smarter and Safer

Video Analytics

Cities are increasingly deploying intelligence-based systems at the edge and in the cloud. The massive amount of data generated every day requires a powerful end-to-end AI analytics system to quickly detect and process objects, traffic, and facial behavior, adding valuable insight to each frame from edge to cloud.

Learn more about Xilinx in Machine & Computer Vision >

Video Analytics

Transforming the Power of AI to Improve Health

Accelerating COVID-19 Image Detection

AI in Imaging, Diagnostics and Clinical Equipment

Vitis AI offers powerful tools and IPs to uncover and identify hidden patterns from medical image data to help fight against disease and improve health.  

Learn more about Xilinx in Healthcare AI > 


AI On-Premise and in the Data Center

Datacenter Acceleration

The explosion of internet applications and complicated AI-based products and services, such as image and video processing, live broadcast, recommendation engines, and natural language processing, has placed higher demands on the processing capabilities of data center acceleration platforms. Vitis AI delivers higher AI inference performance with greater throughput and efficiency on Xilinx Alveo cards and customer platforms, meeting user expectations for fast-evolving AI in data centers and the cloud.

Learn More about Xilinx in Data Center >
 

Videos


Webinars


Adaptable AI Inference with Vitis AI

In this webinar, go in depth with the key components of Vitis AI and learn how to achieve adaptable and efficient AI inference on Xilinx hardware platforms.
 


Vitis AI Deep Dive

In this webinar, learn how to use Vitis AI to deploy and run your pre-trained DNN models on Xilinx embedded SoC and Alveo acceleration platforms, then get started running Vitis AI examples on the board.


Accelerating AI Camera Development with Xilinx Vitis

Learn how to leverage Xilinx MPSoCs with Vitis in order to implement AI Camera designs.

 

Whole Application Acceleration: Designing an AI-enabled System


In this webinar, we will show how Vitis and Vitis AI enable developers to accelerate the whole application on Xilinx platforms.

 

DPU-PYNQ for Python-Powered Edge AI Appliances | Tech Chats - Xilinx and Mouser Electronics


Chris Anderson chats with Quenton Hall of Xilinx about how developers can leverage Zynq FPGAs in edge AI appliances.
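For reference, a minimal DPU-PYNQ flow looks roughly like the sketch below; the overlay and model file names are placeholders for the ones shipped with the pynq_dpu package.

```python
# Minimal DPU-PYNQ sketch for a Zynq UltraScale+ board; file names are placeholders.
import numpy as np
from pynq_dpu import DpuOverlay

overlay = DpuOverlay("dpu.bit")            # program the PL with the DPU overlay
overlay.load_model("dpu_resnet50.xmodel")  # load a compiled model

dpu = overlay.runner                       # VART runner exposed by the overlay
in_t = dpu.get_input_tensors()[0]
out_t = dpu.get_output_tensors()[0]

image = np.zeros(tuple(in_t.dims), dtype=np.float32)    # preprocessed input goes here
result = np.zeros(tuple(out_t.dims), dtype=np.float32)

job = dpu.execute_async([image], [result])
dpu.wait(job)
```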