Industry’s first fully software-defined, fully hardware-accelerated SmartNIC.
Built for HPC and Big Data applications, the Alveo U55C accelerator is the most powerful Alveo card ever from AMD.
Delivers compute, networking, and storage acceleration in an efficient 75-watt, small form factor. Armed with 100 GbE networking, PCIe Gen4, and HBM2, it is designed to deploy in any server.
The Alveo U25N SmartNIC delivers a true convergence of network and security acceleration functions, including OVS and IPsec, into a single platform.
The Alveo U30 media accelerator card provides the industry’s highest channel density, lowest cost per channel, and lowest power consumption for live video streaming workloads.
Incredible compute, networking, and storage acceleration thanks to 890k LUTs, 5.9k DSP slices, 64GB of DDR4 memory, and dual 100Gbps network interfaces.
Offers 1.3M LUTs, 11.5k DSP slices, 64GB of DDR4 memory, and dual 100Gbps network interfaces, delivering 90x higher performance than CPUs on key workloads at a fraction of the cost.
Built for compute- and memory-bound workloads, the card is armed with 8GB of HBM2 + 32GB of DDR4 memory, 1.1M LUTs, 8.5k DSP slices, dual 100Gbps network interfaces, PCIe Gen4, and support for CCIX.
For trade execution and risk management, the Alveo X3 series offers low latency NICs for turnkey deployment as well as adaptable accelerator cards for custom Fintech solutions.
The VCK5000 development card is powered by Versal adaptive SoCs featuring AI Engines for ML inference and advanced signal processing and is designed for data center, 5G, radar, and other compute-intensive applications. The VCK5000 is a high-power development platform for CNN, RNN, and NLP acceleration for your cloud and edge applications.
The Alveo™ V70 accelerator card is the first low-profile AMD Alveo production card leveraging XDNA™ architecture with AI Engines. Providing low power and a small form factor, the V70 helps reduce cost per AI channel and provides high channel density for video applications, allowing you to meet demanding AI performance requirements. The card also comes with industry-standard framework support, directly compiling models trained in TensorFlow and PyTorch.
A quick start program enabling companies to accelerate products and services in the cloud or on-premises
Start developing on AMD Alveo accelerated nodes in the cloud
1 BlackLynx Elasticsearch on Alveo versus EC2 c4.8xlarge
2 Based on CAPEX & OPEX savings for DNN inference on Alveo accelerator cards vs dual-socket Intel Xeon Platinum servers
3 Source: Accelerating DNNs with Alveo Accelerator Cards White Paper
4 Measured on CNN+BLSTM Speech-to-Text ML inference against Nvidia P4