
Adaptable and Real-Time
AI Inference Acceleration

Overview

Optimal Artificial Intelligence Inference from Edge to Cloud

The Vitis™ AI platform is a comprehensive AI inference development solution for AMD devices, boards, and Alveo™ data center acceleration cards. It consists of a rich set of AI models, optimized deep learning processor unit (DPU) cores, tools, libraries, and example designs for AI at the edge and in the data center. It is designed with high efficiency and ease of use in mind, unleashing the full potential of AI acceleration on Xilinx FPGAs and adaptive SoCs. 

Vitis AI Deployment Features

Figure 1 - Vitis AI Structure

How your development works with the Vitis AI platform:

  • Support for mainstream frameworks and the latest models covering diverse deep learning tasks, including CNN, RNN, and NLP workloads
  • Powerful quantizer and optimizer tools for optimal model accuracy and processing efficiency
  • Easy compilation flow and high-level APIs to achieve the fastest deployment of custom models
  • Highly efficient and configurable DPU cores to meet different needs for throughput, latency, and power at the edge and in the cloud

Explore All the Possibilities with Vitis AI

Vitis AI Model Zoo

Figure 2 - Model Zoo

AI Model Zoo

The AI Model Zoo is open to all users and offers a rich set of off-the-shelf deep learning models from the most popular frameworks, such as PyTorch, TensorFlow, TensorFlow 2, and Caffe. It provides optimized and retrainable AI models that enable faster execution, performance acceleration, and production deployment on AMD platforms.


AI Optimizer

With exceptional model compression technology, the AI optimizer reduces model complexity by 5X to 50X with minimal accuracy impact. Deep compression takes the performance of your AI inference to the next level.
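The core idea behind this kind of compression can be illustrated with a minimal magnitude-pruning sketch in plain NumPy. This is a hypothetical illustration of the technique, not the Vitis AI Optimizer API:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until roughly
    `sparsity` fraction of the tensor has been removed."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(pruned) / pruned.size:.2f}")
```

In practice the optimizer also fine-tunes the remaining weights after pruning, which is how large compression ratios are reached with minimal accuracy loss.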

Artificial Intelligence Optimizer Block Diagram

Figure 3 - Vitis AI Optimizer


Artificial Intelligence Quantizer Block Diagram

Figure 4 - Vitis AI Quantizer

AI Quantizer

The AI quantizer provides a complete flow of custom operator inspection, quantization, calibration, fine-tuning, and conversion of floating-point models into fixed-point models that require less memory bandwidth, providing faster speed and higher computing efficiency.
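The floating-point to fixed-point conversion at the heart of this flow can be sketched with simple symmetric int8 quantization in NumPy. This is an illustration of the underlying math, not the Vitis AI quantizer tool itself:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map floats onto int8 levels."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the int8 representation."""
    return q.astype(np.float32) * scale

x = np.linspace(-1.0, 1.0, 11, dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
max_err = float(np.abs(x - x_hat).max())
```

The quantization error per element is bounded by half the scale step, which is why calibration (choosing good ranges from real data) and fine-tuning matter for preserving model accuracy.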


AI Compiler

The AI compiler maps the AI model to a highly efficient instruction set and data flow. It also performs sophisticated optimizations, such as layer fusion and instruction scheduling, and reuses on-chip memory as much as possible.
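One classic example of the layer fusion such a compiler performs is folding a BatchNorm layer into the preceding linear/convolution weights, so the two layers become a single operation at inference time. A minimal NumPy sketch of the math (not the Vitis AI compiler's implementation):

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm into the preceding layer's weights, so that
    BN(W @ x + b) == W_fused @ x + b_fused for any input x."""
    s = gamma / np.sqrt(var + eps)
    return W * s[:, None], s * (b - mean) + beta

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8)); b = rng.normal(size=4)
gamma = rng.normal(size=4); beta = rng.normal(size=4)
mean = rng.normal(size=4); var = rng.uniform(0.5, 2.0, size=4)
x = rng.normal(size=8)

# Reference: linear layer followed by an explicit BatchNorm
y_ref = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
# Fused: a single linear layer with folded parameters
Wf, bf = fold_batchnorm(W, b, gamma, beta, mean, var)
y_fused = Wf @ x + bf
```

Fusions like this remove intermediate tensors entirely, which is one way the compiler maximizes on-chip memory reuse.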

Artificial Intelligence Compiler Block Diagram

Figure 5 - Vitis AI Compiler


Artificial Intelligence Profiler Block Diagram

Figure 6 - Vitis AI Profiler

AI Profiler

The performance profiler allows programmers to perform in-depth analysis of the efficiency and utilization of the AI inference implementation.
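The kind of per-stage breakdown a profiler produces can be sketched with a small timing helper. This is a generic illustration of the idea, not the Vitis AI Profiler's interface:

```python
import time
from contextlib import contextmanager

class LayerProfiler:
    """Accumulates wall-clock time per named stage of an inference pipeline."""
    def __init__(self):
        self.timings = {}

    @contextmanager
    def measure(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - start
            self.timings[name] = self.timings.get(name, 0.0) + elapsed

    def report(self):
        """Return each stage's share of total measured time."""
        total = sum(self.timings.values())
        return {name: t / total for name, t in self.timings.items()}

prof = LayerProfiler()
with prof.measure("preprocess"):
    sum(range(100_000))      # stand-in for image preprocessing
with prof.measure("inference"):
    sum(range(300_000))      # stand-in for DPU inference
shares = prof.report()
```

A real hardware profiler additionally correlates these timelines with DPU utilization and memory bandwidth, but the per-stage attribution pattern is the same.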


AI Library

The Vitis AI Library is a set of high-level libraries and APIs built for efficient AI inference with DPU cores. It is built based on the Vitis AI Runtime (VART) with unified APIs and provides easy-to-use interfaces for AI model deployment on AMD platforms.

Artificial Intelligence Library Block Diagram

Figure 7 - Vitis AI Library


Whole Graph Optimizer (WeGO) Block Diagram

Figure 8 - Vitis AI WeGO

Whole Graph Optimizer (WeGO)

The WeGO framework inference flow offers a straightforward path from training to inference by leveraging the native TensorFlow or PyTorch frameworks to deploy DPU-unsupported operators to the CPU, greatly speeding up model deployment and evaluation on cloud DPUs.
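The partitioning idea behind this flow can be sketched as walking a model's op sequence and grouping consecutive DPU-supported operators into DPU subgraphs, routing everything else to the CPU. The op names and supported set below are hypothetical, not the real Vitis AI compiler tables:

```python
# Illustrative set of DPU-supported op types (assumed, for demonstration)
DPU_SUPPORTED = {"conv2d", "relu", "maxpool", "add"}

def partition(ops):
    """Split an op sequence into (device, [ops]) segments, merging
    consecutive ops that map to the same device."""
    segments = []
    for op in ops:
        device = "DPU" if op in DPU_SUPPORTED else "CPU"
        if segments and segments[-1][0] == device:
            segments[-1][1].append(op)
        else:
            segments.append((device, [op]))
    return segments

model = ["conv2d", "relu", "maxpool", "custom_nms", "conv2d", "softmax"]
segments = partition(model)
# -> [('DPU', ['conv2d', 'relu', 'maxpool']), ('CPU', ['custom_nms']),
#     ('DPU', ['conv2d']), ('CPU', ['softmax'])]
```

Because unsupported ops fall back to the host framework automatically, custom models run end to end without manual graph surgery.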


Deep Learning Processor Unit (DPU)

The DPU is an adaptable domain-specific architecture (DSA) that matches the fast-evolving AI algorithms of CNNs, RNNs, and NLPs with the industry-leading performance found on Zynq™ SoCs, Zynq UltraScale+™ MPSoCs, Alveo data center accelerator cards, and Versal ACAPs.

Xilinx DPU Block Diagram

Figure 9 - Vitis AI DPU

Deployment

Edge Deployment

The Vitis™ AI platform delivers powerful computing performance with optimized algorithms for edge devices while allowing flexible deployment at optimal power consumption. It brings higher computing performance to popular edge applications in automotive, industrial, medical, video analytics, and more.

Browse AMD and partner edge platforms >



On-Premise Deployment

Empowered by the Vitis AI solution, Alveo™ data center accelerator cards offer competitive AI inference performance across CNN, RNN, and NLP workloads. These out-of-the-box, on-premise AI solutions are designed to meet the ultra-low-latency, high-throughput, and high-flexibility requirements of modern data centers, providing higher computing capability than CPUs and GPUs at a lower TCO.

Choose your accelerator card >



Cloud Deployment

Working with public cloud service providers such as AWS and VMAccel, AMD now offers remote access to FPGA and Versal™ ACAP cloud instances for quickly getting started on model deployments—even without local hardware or software.

Documentation

Vitis AI Platform Documentation

Extensive documentation support is available for developing with the Vitis™ AI platform on models, tools, deep learning processor units, etc.

Link to specific documents below, or visit the Documentation Portal to see all the Vitis AI Platform documents.

Solutions

Empowering Autonomous Driving and ADAS Technologies

Real-Time Multi-Class 3D Object Detection

With the Vitis™ AI platform, it is now possible to achieve real-time processing with 3D perception AI algorithms on embedded platforms. Hardware and software co-optimization delivers leading performance for the state-of-the-art PointPillars model on the Zynq® UltraScale+™ MPSoC.

View Video  >


Ultra-Low Latency Application for Autonomous Driving

Latency determines how quickly an autonomous vehicle can make decisions when traveling at high speed and encountering obstacles. With an innovative domain-specific accelerator and software optimization, the Vitis AI platform empowers autonomous vehicles to process deep learning algorithms with ultra-low latency and higher performance.

Learn More about Xilinx in AD  >


Object Detection & Segmentation

With strong scalability and adaptability across low-end to high-end ADAS products, the Vitis AI platform delivers industry-leading performance, supporting popular AI algorithms for object detection, lane detection, and segmentation in front ADAS, in-cabin, and surround-view systems.

Learn More about Xilinx in ADAS  >


Making Cities Smarter and Safer

Video Analytics

Cities increasingly employ intelligence-based systems at both the edge and in the cloud. The massive data generated every day requires a powerful end-to-end AI analytics system to quickly detect and process objects, traffic, and facial behavior, adding valuable insight to each frame from edge to cloud.

Learn more about Xilinx in Machine & Computer Vision >


Transforming the Power of AI to Improve Health

Accelerating COVID-19 Image Detection

AI in Imaging, Diagnostics and Clinical Equipment

Vitis AI offers powerful tools and IPs to uncover and identify hidden patterns from medical image data to help fight against disease and improve health.  

Learn more about Xilinx in Healthcare AI > 


AI On-Premise and in the Data Center

Datacenter Acceleration

With the explosion of internet applications, complicated AI-based products and services like image and video processing, live broadcast, recommendation engines, and natural language processing place higher demands on the processing capabilities of data center acceleration platforms. Vitis AI delivers higher AI inference performance, with greater throughput and efficiency, on Xilinx Alveo cards and customer platforms, meeting user expectations for fast-evolving AI in data centers and the cloud.

Learn More about Xilinx in Data Center >
 
Video

Featured Videos

All Videos

Getting Started

Develop Using the Vitis AI Platform Locally

Step 1: Set up your hardware platform

Step 2: Download and install the Vitis AI environment from GitHub

Step 3: Run Vitis AI environment examples with VART and the AI Library

Step 4: Access tutorials, videos, and more


For more on Getting Started, click the button below:

Develop Using the Vitis AI Platform in the Cloud

Develop accelerated applications with the Vitis AI platform in the cloud—no local software installation or upfront purchase of hardware platforms necessary (pay as you go). Log in and get started right away.


Training Courses


Deployment Options