
AI Inference
Acceleration

4x Faster than GPUs

Accelerating DNNs
White Paper

Learn More

ML Suite
Developer Lab

Get Started

Get started with Xilinx
Machine Learning on AWS

Learn More

Fastest Real-Time Inference

For low-latency ML inference, Xilinx delivers leading throughput and power efficiency. In standard benchmark tests on GoogLeNet v1, the Xilinx Alveo U250 delivers more than 4x the throughput of the fastest GPU at real-time inference. Learn more in the white paper: “Accelerating DNNs with Xilinx Alveo Accelerator Cards”.

xDNNv3 will be available in the ML Suite in November 2018. Get started today with the ML Suite featuring xDNNv2, using the links below.

Learn More

* See White Paper for performance details

Whole Application Acceleration

Xilinx ML Suite

The Xilinx ML Suite enables developers to optimize and deploy accelerated ML inference. It supports common machine learning frameworks such as Caffe, MXNet, and TensorFlow, as well as Python and RESTful APIs. Suite components:

  • xfDNN compiler/optimizer – automatic layer fusing, memory optimization, and framework integration
  • xfDNN quantizer – improves performance through automatic INT8 model calibration (a generic calibration sketch follows this list)
  • Platforms – deployable on-premises or through cloud services: Amazon, Nimbix, Huawei, Alibaba Cloud, Baidu, Tencent
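
The quantizer's calibration step can be pictured with a generic max-abs INT8 scheme. This is a common approach, shown here with numpy for intuition only; it is not the actual xfDNN quantizer API.

    import numpy as np

    # Generic max-abs INT8 calibration: pick a scale from sample
    # activations, quantize/dequantize, and measure the error.
    rng = np.random.default_rng(0)
    calibration_batch = rng.normal(0.0, 1.0, size=(1024,)).astype(np.float32)

    # 1. Calibrate: map the observed dynamic range onto [-127, 127].
    scale = np.abs(calibration_batch).max() / 127.0

    # 2. Quantize float32 values to INT8 using the calibrated scale.
    quantized = np.clip(np.round(calibration_batch / scale), -127, 127).astype(np.int8)

    # 3. Dequantize and check how much precision was lost.
    dequantized = quantized.astype(np.float32) * scale
    print("max abs error:", np.abs(calibration_batch - dequantized).max())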

Xilinx ML Acceleration
on AWS

Get Started

Xilinx ML Acceleration
on Nimbix

Get Started

Data Center Acceleration
on Alveo

Get Started

Now Available On Demand:
ML Suite Developer Lab

Get Started

Work through this self-paced tutorial to deploy models with the Xilinx ML Suite for real-time inference on Amazon EC2 F1 instances powered by Xilinx FPGAs. During the lab you will use Python APIs to accelerate your ML applications.
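
Because the ML Suite exposes Python and RESTful APIs, querying a deployed model looks roughly like the sketch below. The endpoint URL and JSON payload shape are illustrative assumptions, not the lab's actual interface.

    import requests

    # Hypothetical REST inference call against an ML Suite endpoint on
    # an F1 instance; the URL and payload schema are assumptions.
    ENDPOINT = "http://my-f1-instance:5000/predict"  # hypothetical address

    payload = {"image": "base64-encoded-image-data"}  # hypothetical schema
    response = requests.post(ENDPOINT, json=payload, timeout=10.0)
    response.raise_for_status()

    # Assume top-k class predictions come back as JSON.
    for prediction in response.json().get("predictions", []):
        print(prediction)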

Xilinx DeePhi Edge Platform

ML inference performance leadership with CNN pruning technology (a toy pruning sketch follows this list):

  • 5x to 50x network performance optimization
  • Increases frames per second (FPS) and reduces power
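
The pruning algorithm itself isn't detailed on this page; the numpy sketch below shows the general idea with simple magnitude-based channel pruning, where the convolution channels carrying the least weight are dropped to shrink compute. It is a generic illustration, not DeePhi's actual method.

    import numpy as np

    # Magnitude-based channel pruning for a conv weight tensor shaped
    # (out_channels, in_channels, kh, kw): keep only the channels with
    # the largest L1 weight mass.
    def prune_channels(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
        l1_per_channel = np.abs(weights).sum(axis=(1, 2, 3))
        n_keep = max(1, int(weights.shape[0] * keep_ratio))
        keep = np.argsort(l1_per_channel)[-n_keep:]  # strongest channels
        return weights[np.sort(keep)]

    rng = np.random.default_rng(0)
    conv_w = rng.normal(size=(64, 32, 3, 3)).astype(np.float32)
    pruned = prune_channels(conv_w, keep_ratio=0.5)
    print(conv_w.shape, "->", pruned.shape)  # halves the output channels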

Optimization/Acceleration Compiler Tools

  • Supports networks from TensorFlow, Caffe, and MXNet
  • Compiles networks to an optimized runtime for Xilinx MPSoC accelerators (a folding sketch follows this list)
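
One optimization such a compiler typically applies is folding batch normalization into the preceding convolution, so no BatchNorm work remains at inference time. The numpy sketch below illustrates the arithmetic; it is an assumption about the kind of transformation involved, not this compiler's actual implementation.

    import numpy as np

    # Fold batch normalization into a conv's weights and bias:
    # BN(conv(x)) = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
        scale = gamma / np.sqrt(var + eps)          # per-output-channel factor
        w_folded = w * scale[:, None, None, None]   # scale each output channel
        b_folded = (b - mean) * scale + beta
        return w_folded, b_folded

    rng = np.random.default_rng(0)
    out_ch = 8
    w = rng.normal(size=(out_ch, 3, 3, 3)).astype(np.float32)
    b = np.zeros(out_ch, dtype=np.float32)
    gamma, beta = np.ones(out_ch), np.zeros(out_ch)
    mean, var = rng.normal(size=out_ch), np.abs(rng.normal(size=out_ch)) + 0.1

    w_folded, b_folded = fold_batchnorm(w, b, gamma, beta, mean, var)
    print(w_folded.shape, b_folded.shape)  # same shapes, BatchNorm absorbed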

Get more information
about DeePhi

Learn More

Watch a Demo about
DeePhi Edge Platform

Watch Video

Join the DeePhi
Community Forum

Join Now

Resources


reVISION Knowledge Center

Documentation, Resources, Papers, Tutorials for Edge Inference

Learn More

Acceleration Knowledge Center

Documentation, Resources, Papers, Tutorials for Cloud Inference

Learn More

Xilinx University Program

Enabling the use of Xilinx technologies for academic teaching and research

Learn More