
Vitis AI 3.5 Release

Now Available

Download Now

Why AMD AI

World's Most Advanced AI Acceleration from Edge to Data Center


Highest AI Efficiency Card

VCK5000 AI Development Card

Learn More

Why AMD AI

Highest Compute Efficiency & Optimal Performance

Optimal AI Inference Performance

The world's most advanced AI acceleration from edge to data center: the highest AI inference performance, fastest experience, and lowest cost.



AI for Data Center

Delivering the highest throughput at the lowest latency for cloud-side image processing, speech recognition, recommender systems, and natural language processing (NLP) acceleration.


AI for Edge

Superior AI inference capabilities to accelerate deep learning processing in self-driving cars, ADAS, healthcare, smart city, retail, robotics, and autonomous machines at the edge.

Special Limited Offer & AI Webinars

Data Center AI Acceleration

High-Throughput AI Inference


Highest Performance AI Inference

2X TCO reduction vs. mainstream GPUs


Highest Performance Video Analytics Throughput

2X the number of video streams vs. mainstream GPUs


Simple Learning Curve

Popular AI models and frameworks with no hardware programming required


Graph Sources: https://developer.Nvidia.com/deep-learning-performance-training-inference

AMD Data Center AI Case Studies

Get Started with AMD Data Center AI Solutions


Purchase VCK5000

Purchase the VCK5000 Development Card for AI inference built on the AMD 7nm Versal adaptive SoC

Try VCK5000 in the Cloud

Run high-performance AI inference with Mipsology and build a full video-processing ML inference pipeline for AI recognition with Aupera

Download Vitis AI

Get started with AMD AI solutions and download the Vitis™ AI development environment

Edge AI Acceleration

Industry-Leading Edge AI Acceleration Performance


Lowest Latency AI Inference

  • Optimal FPS and power consumption on Zynq™ UltraScale+ and Versal™
  • Powerful deep learning processing units (DPUs)
  • State-of-the-art model optimization technologies for a 5X to 50X model performance boost

Flexible Software Flow

  • Support for AI models from PyTorch, TensorFlow, and Caffe
  • Easy-to-use C++ and Python libraries and APIs
  • Unified quantizer, compiler, and runtime for deployment across edge platforms (see the runtime sketch below)

Scalable & Adaptable

  • Scalable DPU IP for different logic and AI Engine (AIE) resources
  • Open AI Model Zoo with free models to try on your board
  • Whole Application Acceleration
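
This flexible software flow is easiest to see in code. The following is a minimal sketch of running a compiled model with the VART (Vitis AI Runtime) Python API; the model file name, tensor data types, and subgraph selection are illustrative assumptions and may differ by Vitis AI release and target platform.

```python
import numpy as np
import vart   # Vitis AI Runtime Python bindings
import xir    # XIR graph library shipped with Vitis AI

# Load a model produced by the Vitis AI compiler ("resnet50.xmodel" is a placeholder).
graph = xir.Graph.deserialize("resnet50.xmodel")

# Pick the DPU subgraph; CPU-mapped subgraphs (pre/post-processing) are skipped here.
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device").upper() == "DPU")

# Create a runner bound to the DPU subgraph and query its tensor shapes.
runner = vart.Runner.create_runner(dpu_subgraph, "run")
in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# Allocate host buffers (int8 assumed for a quantized model) and run one job.
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.int8)]
output_data = [np.zeros(tuple(out_tensor.dims), dtype=np.int8)]
job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
```

Because the quantizer, compiler, and runtime are unified, the same flow applies across edge platforms; only the compiled .xmodel changes per DPU target.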

 

Latency Response Comparison

CPUs & GPUs: High Throughput OR Low Latency

Throughput is achieved using a large batch size; the device must wait for all inputs to be ready before processing, resulting in high latency.

FPGAs & Adaptive SoCs: High Throughput AND Low Latency

Throughput is achieved using a small batch size; each input is processed as soon as it is ready, resulting in low latency.
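
A back-of-the-envelope model (with hypothetical, illustrative numbers) shows why batch size drives this difference: a batch-N accelerator cannot start until N inputs have arrived, so the first input in the batch pays the full collection time as extra latency.

```python
# Toy latency model with assumed numbers (not measured data).
ARRIVAL_INTERVAL_MS = 5.0   # assumption: a new inference request arrives every 5 ms
COMPUTE_TIME_MS = 10.0      # assumption: one batch takes 10 ms to process, regardless of size

def first_request_latency_ms(batch_size: int) -> float:
    """Latency of the first request in a batch: it waits for the remaining
    (batch_size - 1) requests to arrive, then for the batch to be computed."""
    wait_for_batch = (batch_size - 1) * ARRIVAL_INTERVAL_MS
    return wait_for_batch + COMPUTE_TIME_MS

print(first_request_latency_ms(1))    # 10.0 ms  -> low-batch path: compute time only
print(first_request_latency_ms(32))   # 165.0 ms -> high-batch path: dominated by waiting
```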

Scalability to Fit All Your Edge Products


End-to-end application performance

Optimized hardware acceleration of both AI inference and other performance-critical functions is achieved by tightly coupling custom accelerators in a dynamic-architecture silicon device.

This delivers end-to-end application performance that is significantly greater than that of a fixed-architecture AI accelerator, where the other performance-critical functions of the application must still run in software without the performance or efficiency of custom hardware acceleration.

Get Started with AMD Edge AI Solutions


Purchase Kria KV260 Vision AI Starter Kit

Built for advanced vision application development without requiring complex hardware design knowledge

Download Vitis AI

Achieve efficient AI computing on edge devices for your applications with Vitis AI

Visit App Store

Pre-built applications for Kria system-on-modules! Evaluate, purchase, & deploy accelerated applications!

Explore AMD Solutions for Edge AI Inference

Developer Resources


Visit App Store

Evaluate, purchase, & deploy accelerated applications!


Developer Site 

Explore articles, projects, tutorials and more!


Stay Informed

Stay up to date with all AI Acceleration News