For low-latency ML inference, Xilinx delivers leadership throughput and power efficiency. In standard benchmark tests on GoogleNet V1, Xilinx U250 delivers more than 4x the throughput of the fastest GPU at real-time inference. Learn more in the whitepaper: “Accelerating DNNs with Xilinx Alveo Accelerator Cards”.
xDNNv3 will be available in the ML Suite in November 2018. Get started today with the ML Suite featuring xDNNv2 from the links below.
* See White Paper for performance details
ML Inference performance leadership with CNN pruning technology.
Optimization/Acceleration Compiler Tools
reVISION Knowledge Center
Documentation, resources, papers, and tutorials for edge inference
Acceleration Knowledge Center
Documentation, resources, papers, and tutorials for cloud inference
Xilinx University Program
Enabling the use of Xilinx technologies for academic teaching and research