

AI Inference Overview

Xilinx provides a comprehensive hardware and software solution to enable AI inference. The diagram below shows the high-level components.

[Diagram: Model Zoo → Implementation Software → Hardware Overlay (DSA)]



  • Xilinx adaptable hardware can accelerate most AI/ML models when deployed in inference applications
  • The model is optimized by Xilinx’s tools which work with industry standard frameworks
  • Once optimized, the model works with Xilinx runtime and driver software, with optimized portions running in Xilinx hardware
  • Migration is designed to be straightforward and requires no hardware expertise or FPGA programming experience
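The deployment flow described above can be sketched as a simple graph-partitioning step. This is an illustrative sketch only: the operator names, the `SUPPORTED_ON_ACCELERATOR` set, and the partitioning logic are hypothetical, not the Xilinx runtime API.

```python
# Illustrative sketch (not the Xilinx runtime API): partition a model's
# operator sequence so that supported operators run on the accelerator
# and the rest fall back to the CPU, mirroring the flow described above.

SUPPORTED_ON_ACCELERATOR = {"conv2d", "relu", "pool"}  # hypothetical op set

def partition(ops):
    """Group consecutive ops by whether the accelerator supports them."""
    segments = []
    for op in ops:
        target = "fpga" if op in SUPPORTED_ON_ACCELERATOR else "cpu"
        if segments and segments[-1][0] == target:
            segments[-1][1].append(op)
        else:
            segments.append((target, [op]))
    return segments

model_ops = ["conv2d", "relu", "pool", "softmax"]
print(partition(model_ops))
# → [('fpga', ['conv2d', 'relu', 'pool']), ('cpu', ['softmax'])]
```

In a real toolchain the partitioning is done by the compiler over a full dataflow graph, but the principle is the same: the user supplies a trained model, and the tools decide which portions execute in hardware.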



  • Xilinx’s AI optimization tools provide Deep Neural Network (DNN) pruning, quantization, and other optimization capabilities that reduce the size of the model with minimal impact on accuracy.
  • Models are typically quantized to 16-bit or 8-bit precision for inference, although custom precision can be used depending on the exact application.
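As a generic illustration of these two optimizations, here is a minimal NumPy sketch: the function names and the simple magnitude-pruning and symmetric linear int8 schemes are assumptions for illustration, not the Xilinx AI Optimizer itself.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (generic magnitude pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.

    Assumes at least one nonzero weight, so the scale is nonzero.
    """
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.02, 0.9], dtype=np.float32)
pruned = prune_by_magnitude(w, sparsity=0.25)  # smallest 25% of weights -> 0
q, scale = quantize_int8(pruned)               # int8 weights + one scale factor
error = np.max(np.abs(pruned - dequantize(q, scale)))  # stays within scale / 2
```

Production tools also retrain between pruning passes and calibrate quantization on real activation data, which is how they keep the accuracy loss minimal.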

Supported AI Frameworks

The Xilinx AI Platform supports a number of industry-standard frameworks, highlighted in the table below.

Framework | Description | Data Center AI Platform Support | Edge AI Platform Support
TensorFlow | An open-source framework developed by Google. | |
CAFFE | An open-source framework developed at UC Berkeley. | |
MXNet | An open-source framework developed by Apache. | |
Darknet | An open-source framework developed by Joseph Redmon. | |
Keras | An open-source high-level API capable of running on top of several other frameworks. | |
ONNX | An open-source graph model and standardized operator definition, created by Facebook and Microsoft; works in conjunction with several frameworks. | Coming Soon |


The Xilinx AI Platform supports many AI/ML models, as shown below. We are continually working to bring the latest models into our platform.

Application | Task | Algorithm | Data Center AI Platform Support | Edge AI Platform Support
General | Image Classification | GoogLeNet v1, ResNet-50, ResNet-101, ResNet-152, Inception v1, BN-Inception, VGG16, SqueezeNet, MobileNet, MobileNetV2 | ✔ (Subset) | ✔ (Subset)
General | Object Detection | MobileNetV2-SSD, SSD, YOLO v2, YOLO v3, Tiny YOLO v2, Tiny YOLO v3 | ✔ (Subset) |
General | Segmentation | ENet, ESPNet | |
Face | Face Detection | SSD, DenseBox | |
Face | Landmark Localization | Coordinates Regression | |
Face | Face Recognition | ResNet + Triplet / A-softmax Loss | |
Face | Face Attributes Recognition | Classification and Regression | |
Pedestrian | Detection | SSD | |
Pedestrian | Pose Estimation | Coordinates Regression | |
Video Analytics | Object Detection | SSD, RefineDet | |
Video Analytics | Pedestrian Attributes Recognition | GoogLeNet | Coming Soon |
Video Analytics | Car Attributes Recognition | GoogLeNet | Coming Soon |
Video Analytics | Car Logo Recognition | Modified DenseBox + GoogLeNet | Coming Soon |
Video Analytics | License Plate Detection | Modified DenseBox | Coming Soon |
Video Analytics | License Plate Recognition | GoogLeNet + Multi-task Learning | Coming Soon |
 | Object Detection | SSD, YOLO v2, YOLO v3 | |
 | Lane Detection | VPGNet | |
 | Semantic Segmentation | FPN | |

Data Center AI Platform

The Data Center AI Platform supports industry-standard frameworks.

You can bring your own trained model or start with one from our Model Zoo.

The Xilinx ML Suite provides comprehensive optimization for an optimal FPGA implementation, together with a runtime and hardware DSA.

Target a Xilinx Alveo accelerator card, your own custom card, or an FPGA-as-a-Service offering such as Amazon AWS.


Edge AI Platform

The Xilinx Edge AI Platform provides comprehensive tools and models which utilize unique deep compression and hardware-accelerated Deep Learning technology.

The platform provides efficient, convenient and economical inference deployments for embedded-CPU-based FPGAs.

The Xilinx AI team consists of renowned researchers and experienced professionals known for their pioneering work in the field of deep learning.
