For real-time AI inference, Xilinx delivers the highest throughput at the lowest latency. In standard benchmark tests on GoogLeNet V1, the Xilinx Alveo U250 platform delivers more than 4x the throughput of the fastest existing GPU. Learn more in the whitepaper: Accelerating DNNs with Xilinx Alveo Accelerator Cards
AI inference performance leadership with CNN pruning technology.
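As background, CNN pruning removes redundant weights or channels so the pruned network computes less at inference time. The sketch below shows magnitude-based weight pruning, one common form of pruning; it is illustrative only and is not the specific algorithm used in Xilinx's optimization tools:

```python
import numpy as np

# Minimal sketch of magnitude-based pruning (illustrative only; not the
# specific algorithm in Xilinx's pruning/optimization tools).

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until roughly `sparsity`
    fraction of them are zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# A 3x3 conv kernel bank with 16 input and 32 output channels.
w = np.random.randn(32, 16, 3, 3)
pruned = prune_by_magnitude(w, sparsity=0.5)
print(f"zeroed weights: {np.mean(pruned == 0):.0%}")  # ~50% removed
```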
Optimization/Acceleration Compiler Tools
Fixed-architecture accelerators such as GPUs achieve throughput using large batch sizes: the device must wait for all inputs in a batch to be ready before processing, resulting in high latency. Xilinx adaptable devices achieve throughput at low batch sizes: each input is processed as soon as it is ready, resulting in low latency.
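The latency cost of batching can be made concrete with a little arithmetic. The sketch below uses assumed, illustrative numbers (a 2 ms arrival interval and a 4 ms batch compute time), not measured figures:

```python
# Minimal sketch of the batching latency tradeoff.
# All numbers below are illustrative assumptions, not measured results.

def worst_case_latency_ms(batch_size: int,
                          arrival_interval_ms: float,
                          batch_compute_ms: float) -> float:
    """Latency seen by the first input in a batch: it waits for the rest
    of the batch to arrive, then for the whole batch to be computed."""
    wait_for_batch = (batch_size - 1) * arrival_interval_ms
    return wait_for_batch + batch_compute_ms

for batch in (1, 8, 32):
    latency = worst_case_latency_ms(batch, arrival_interval_ms=2.0,
                                    batch_compute_ms=4.0)
    print(f"batch={batch:2d} -> {latency:.0f} ms")
# batch= 1 ->  4 ms: each input is processed as soon as it arrives
# batch=32 -> 66 ms: the first input waits 62 ms for the batch to fill
```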
Xilinx devices deliver optimized hardware acceleration of both AI inference and other performance-critical functions by tightly coupling custom accelerators in a dynamic-architecture silicon device.
This delivers end-to-end application performance significantly greater than that of a fixed-architecture AI accelerator such as a GPU, where the application's other performance-critical functions must still run in software, without the performance or efficiency of custom hardware acceleration.
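The reasoning here is essentially Amdahl's law: the portions of the pipeline left in software bound the end-to-end speedup. A minimal sketch, with assumed fractions and speedups rather than benchmark data:

```python
# Illustrative application of Amdahl's law to an inference pipeline.
# All fractions and speedup factors are assumed for illustration.

def end_to_end_speedup(accelerated_fraction: float, speedup: float) -> float:
    """Overall speedup when only `accelerated_fraction` of the runtime is
    sped up by `speedup` and the remainder still runs in software."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / speedup)

# Assume inference is 50% of the pipeline and is accelerated 10x.
print(f"{end_to_end_speedup(0.50, 10.0):.1f}x")  # ~1.8x: software dominates
# Accelerate 95% of the pipeline (inference + pre/post-processing) 10x.
print(f"{end_to_end_speedup(0.95, 10.0):.1f}x")  # ~6.9x: end-to-end gains
```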
Adaptable silicon allows Domain-Specific Architectures (DSAs) to be updated and optimized for the latest AI models without needing new silicon. Fixed silicon devices, by contrast, cannot be optimized for the latest models due to their long development cycles.
The Xilinx Edge AI Platform is available on Xilinx Zynq SoC and MPSoC edge cards.
Learn More
The Xilinx Data Center AI Platform is available on a variety of platforms, including Xilinx Alveo accelerator cards and the Amazon AWS F1 FPGA instance.
Learn More
Enabling the use of Xilinx technologies for academic teaching and research
Learn More