
Data Center AI

Delivering the highest throughput at the lowest latency for image and natural language processing, speech recognition, and recommendation engines.

Highest Compute Efficiency & Optimal Performance

The data center is going through a significant transformation. With intelligent connected devices in our homes, cars, offices, factories, cities, and the cloud, this proliferation of AI-enabled applications comes at a cost: an exponential increase in the data-processing and energy-efficiency requirements placed on the chips powering these devices. The challenge is not just how to deploy an AI model, but how to deploy the AI application most efficiently. The best implementation of an AI application doesn't need to be the fastest; it needs to be the most efficient while remaining flexible. The AMD XDNA architecture, built on adaptive computing platforms, provides the best of both worlds for AI inference workloads, from cloud to edge to endpoint.

Acceleration Cards for Your AI Workload

Alveo™ V70 Development Card

AI Inference

The Alveo™ V70 accelerator card is the first low-profile AMD Alveo production card to leverage the XDNA™ architecture with AI Engines. With low power consumption and a small form factor, the V70 reduces cost per AI channel and delivers high channel density for video applications, helping you meet demanding AI performance requirements. The card also comes with industry-standard framework support, directly compiling models trained in TensorFlow and PyTorch.


Alveo™ U55C High Performance Compute Card

Recommendation

The Alveo™ U55C high performance compute card provides optimized acceleration for recommendation, high performance computing (HPC), big data analytics, and machine learning workloads. The U55C packs high-bandwidth memory (HBM2) and 200 Gbps of high-speed networking into a single-slot, small form factor card designed for deployment in any server.


Alveo™ U30 Accelerator Card

Video Processing

The Alveo™ U30 accelerator card provides high channel density, low cost per channel, and low power consumption for live video streaming workloads. When coupled with the Video Software Development Kit (SDK) from AMD, the solution provides a production-ready platform that lets live streaming providers benefit from AMD hardware acceleration through standards-based FFmpeg and GStreamer interfaces and APIs.
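
As a rough illustration of how a streaming pipeline might use that FFmpeg integration, the sketch below drives an SDK-enabled FFmpeg build from Python to transcode a file on the card. The mpsoc_vcu_h264 decoder and encoder names follow the Video SDK documentation, but the exact codec and device options depend on the SDK version installed, so treat them as assumptions to check against your setup.

    # Minimal sketch: transcoding with an SDK-enabled FFmpeg build.
    # Codec names are assumed per the Video SDK docs and may differ by version.
    import subprocess

    def transcode_h264(src: str, dst: str, bitrate: str = "4M") -> None:
        """Decode and re-encode an H.264 stream on the card's codec units."""
        cmd = [
            "ffmpeg",
            "-c:v", "mpsoc_vcu_h264",   # hardware decoder (name assumed)
            "-i", src,
            "-b:v", bitrate,
            "-c:v", "mpsoc_vcu_h264",   # hardware encoder (name assumed)
            dst,
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        transcode_h264("input.mp4", "output.mp4")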


Software to Power Your Data Center Applications
Vitis AI

Vitis™ AI provides a comprehensive AI inference development platform for AMD adaptive SoCs and Alveo™ data center accelerators, with standard framework support for directly compiling models trained in TensorFlow and PyTorch. Vitis AI plugs into common software developer tools and utilizes a rich set of optimized open-source libraries to empower software developers with machine learning acceleration as part of their software code.
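
As a rough sketch of that flow, the snippet below outlines post-training quantization of a trained PyTorch model with the Vitis AI quantizer, the step that precedes compiling the model for the target accelerator. The pytorch_nndct module and torch_quantizer call follow the names used in the Vitis AI documentation, but argument details vary by release and should be checked against your installed version.

    # Minimal sketch: post-training quantization with the Vitis AI PyTorch quantizer.
    # Names follow the Vitis AI docs (pytorch_nndct); exact arguments vary by release.
    import torch
    from pytorch_nndct.apis import torch_quantizer

    def quantize(model: torch.nn.Module, calib_loader, output_dir: str = "quant_out"):
        model.eval()
        dummy_input = torch.randn(1, 3, 224, 224)   # example input shape
        quantizer = torch_quantizer("calib", model, (dummy_input,), output_dir=output_dir)
        quant_model = quantizer.quant_model

        # Feed a few calibration batches through the instrumented model.
        with torch.no_grad():
            for images, _ in calib_loader:
                quant_model(images)

        quantizer.export_quant_config()             # write calibration results
        # A second pass with quant_mode="test" exports the .xmodel, which the
        # Vitis AI compiler then targets to the device (e.g. vai_c_xir).
        return quant_model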

Get Started

Order Alveo V70 Development Card

The Alveo V70 early access development card is now available.

Learn More >

Contact Product Specialist

Have questions about AMD Data Center AI Solutions and hardware? Fill out the following form and a product specialist will contact you.

Contact Specialist >