The AMD Alveo SN1000 is the industry’s first SmartNIC offering software-defined hardware acceleration for all function offloads in a single platform. SN1000 SmartNICs directly offload CPU-intensive tasks to optimize networking performance, with an architecture that can accelerate a broad range of custom offloads at line rate, including support for customer-built and third-party offloads.
Based on the AMD 16nm UltraScale+™ architecture, SN1000 SmartNICs are powered by the low-latency AMD XCU26 FPGA and a 16-core Arm® processor.
The SN1000 features full protocol-level offload acceleration customization, application-specific data paths, and the ease of P4 high-level language programming. The Vitis™ Networking P4 toolkit from AMD enables customers to compose custom offloads and adapt existing offloads to new protocols and applications without replacing hardware.
SN1000 SmartNICs provide software-defined hardware acceleration for a wide range of networking, security, and storage offloads.
| Board Specifications | Alveo SN1000 SmartNIC Accelerator Cards |
| --- | --- |
| PCI Express | PCIe Gen 4 x8 or Gen 3 x16 |
| Network Interfaces | 2x 100G QSFP28 DA copper or optical transceiver |
| Arm Processor | Discrete 16-core Cortex-A72 processor |
| DDR Format | 1x 4GB x72 DDR4-2400 (Arm® processor); 2x 4GB x72 DDR4-2400 (FPGA) |
| Full Duplex Throughput | 200 Gbps |
| Latency (1/2 RTT) | <3 µs |
| Flow Table Entries | 4M stateful connections |
| IPsec Encryption Throughput | 100 Gbps (AES-GCM) |
| Height | FHHL, PCIe CEM 0.72 inch (18.3 mm) |
| Length | 6.59 inch (167.5 mm) |
| Width | 4.38 inch (111.15 mm) |
| Tunneling Offloads | VXLAN / NVGRE / custom |
| Advanced Packet Filtering | Yes |
| Acceleration | TCPDirect (TCP/UDP), Open Virtual Switch (OVS), Virtio-net, vDPA, DPDK, Onload™, Virtio-blk, Ceph RBD client offload |
| PMCI Protocols | NC-SI, PLDM Monitoring and Control, PLDM MCTP |
| PMCI Transports | MCTP over SMBus, MCTP over PCIe VDM |
| **Power and Thermal** | |
| Maximum Total Power | 75W |
| Software and FPGA Extensibility via Dynamically Loadable Plugins | Yes |
| Vitis Developer Environment | Yes |
Onload™ dramatically accelerates and scales network-intensive applications such as in-memory databases, software load balancers, and web servers. With Onload, data centers can support 400% or more additional users on their cloud network while delivering improved reliability, enhanced quality of service (QoS), and a higher return on investment, without modification to existing applications.
Onload delivers a return on capital expenditure (capex) by allowing data centers to redeploy 25% or more of their load-balancing servers to other tasks. Alternatively, data centers can reduce operational expenses (opex) by shrinking the overall server footprint.
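The consolidation math behind this claim is straightforward: if each accelerated server sustains a multiple of its previous load, proportionally fewer servers are needed for the same aggregate traffic. A back-of-the-envelope sketch (illustrative numbers only, not from AMD's benchmarks; `servers_freed` is a hypothetical helper):

```python
import math

def servers_freed(current_servers: int, throughput_increase_pct: float) -> int:
    """Servers that can be redeployed if each remaining server
    sustains (1 + increase/100) times its previous load."""
    factor = 1 + throughput_increase_pct / 100
    still_needed = math.ceil(current_servers / factor)
    return current_servers - still_needed

# A 400% throughput increase means each server carries 5x the load,
# so a 100-server load-balancing tier could shrink to 20 servers.
print(servers_freed(100, 400))  # -> 80
print(servers_freed(100, 100))  # -> 50 (a 2x gain halves the tier)
```

In practice the redeployable fraction depends on headroom and redundancy requirements, which is why the figure quoted above is a conservative "25% or more."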
| Use Case | Application(s) | Performance Increase | Benchmark Documents |
| --- | --- | --- | --- |
| In-Memory Databases | Couchbase, Memcached, Redis | 100% | |
| Software Load Balancers | NGINX Plus, HAProxy | 400% | |
| Web Servers/Applications | NGINX Plus, Netty.io | 50% | |
Onload accelerates nearly all network-intensive TCP-based applications; typical performance improvements are shown in the table above.
AMD has produced a series of cookbooks that document the servers used, how they were configured, and exactly what testing was performed, so that customers can reproduce results that might otherwise seem remarkable.
To obtain a copy, contact us.
Onload is built from the same I/O software technology that powers nearly every financial market and high-frequency trading application on the planet. POSIX compliant, Onload ensures compatibility with TCP-based applications, management tools, and network infrastructures. In addition, Onload provides RDMA-like performance without requiring a forklift upgrade to the data center’s network infrastructure and can be deployed across x86-based platforms running Linux – bare metal, virtual machine or container.
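Because Onload intercepts the standard sockets API at runtime (it is typically preloaded via the `onload` launcher wrapper), applications keep using ordinary POSIX calls. A minimal sketch of such an unmodified TCP application, here a one-shot echo over loopback; nothing in it is Onload-specific:

```python
import socket
import threading

# Ordinary POSIX sockets code; an Onload-accelerated deployment would
# launch this same script unchanged, e.g. "onload python echo.py".
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))            # OS-assigned port
srv.listen(1)
port = srv.getsockname()[1]

def serve() -> None:
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024)) # echo the payload back once

t = threading.Thread(target=serve)
t.start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply)  # -> b'ping'
```

The point of the sketch is the compatibility claim: acceleration comes from the preloaded library taking over these calls, not from any change to the application's source.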