
Explore AMD Xilinx Solutions at Embedded World

Join AMD Xilinx at Embedded World 2022 for a series of demonstrations showcasing our latest hardware and software innovations in ML/AI and acceleration for automotive, industrial, and other markets.

Whether you are building a robot or developing the next big innovation in ADAS, we’ll show you how our advanced technology and robust ecosystem can help you improve system performance and accelerate your development and deployment cycles.

Please visit us in Hall 3A, Stand 239 or reach out to your AMD Xilinx sales team to schedule a meeting.

Demos
8M Pixel, Mono Vision, Forward Looking Camera
  • PoC combines Xilinx’s Zynq® UltraScale+™ MPSoC, OmniVision’s 8M Pixel imager, and Motovis’ deep learning networks
  • Zynq UltraScale+ MPSoC: Scalable solution, adaptable AI, functional safety enabled, production-ready, and automotive qualified
  • Enables OEMs to innovate faster and future-proof their forward camera designs
Dynamic Function eXchange (DFX) Framework
  • The multi-camera vision system demonstrates DFX-enabled feature swapping by reconfiguring parts of a continually operating programmable SoC chip. 
  • The fault-tolerant chip design also showcases Xilinx’s functional safety design methodologies for safety-critical applications. 
  • The demo design combines Xilinx DFX with the Isolation Design Flow (IDF) and integrates the Soft Error Mitigation (SEM) IP core (a reconfiguration sketch follows below)
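
For context on what a DFX swap looks like from software, the sketch below requests a partial reconfiguration through the Linux fpga_manager sysfs interface. It is a minimal illustration rather than the demo's actual framework: the flags/firmware attributes follow the Xilinx-maintained fpga_manager interface for Zynq UltraScale+ (confirm against your kernel), and the bitstream name rp0_edge_filter.bin is a made-up placeholder that would live under /lib/firmware.

from pathlib import Path

# Sysfs root for the first FPGA manager instance; adjust if your platform
# enumerates differently.
FPGA_MGR = Path("/sys/class/fpga_manager/fpga0")


def load_partial_bitstream(firmware_name: str) -> None:
    """Ask the kernel to load a partial bitstream placed under /lib/firmware.

    A production DFX design would also decouple the reconfigurable
    partition before triggering the swap.
    """
    # Bit 0 of 'flags' marks the next load as a partial (DFX) bitstream.
    (FPGA_MGR / "flags").write_text("1")
    # Writing the firmware file name starts the reconfiguration.
    (FPGA_MGR / "firmware").write_text(firmware_name)
    print("state:", (FPGA_MGR / "state").read_text().strip())


if __name__ == "__main__":
    # Placeholder partial-bitstream name for one reconfigurable partition.
    load_partial_bitstream("rp0_edge_filter.bin")
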
AI Acceleration of Driver Monitoring System (DMS)
  • PoC combines Xilinx's Zynq UltraScale+ MPSoC, Jungo’s CoDriver 2.6.0 AI In-Cabin Sensing, and Xilinx Vitis™ AI deep learning networks
  • Using Xilinx Vitis AI design tools in combination with programmable logic to achieve an optimal performance/Watt DMS implementation
  • This DMS implementation uses unified tool chains and can target the lowest-cost FPGAs for standalone operation, or be centralized in domain controllers alongside AD or IVI functions
TurtleBot3 Waffle with Kria™ SOMs
10GigE Vision Camera with Kria SOMs
  • 122 fps Sony IMX547 global shutter sensor
  • Framos SLVS-EC v2.0 IP ingests 2 x 5 Gbps
  • Processed in FPGA HW for lowest latency
  • Sensor-to-Image 10GigE Vision MAC to SFP+
  • Generic SFP+ NIC simplifies IPC configuration
Accelerated ROS 2 Packages on Kria SOMs
  • OpenVSLAM and perception stacks accelerated
  • >10X speedup over SW-only implementation
  • Works with Gazebo simulation or a live camera (see the node sketch below)
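
For orientation, a minimal rclpy node that consumes the camera stream feeding those accelerated packages might look like the sketch below. The node and topic names are assumptions; the acceleration itself runs in the Kria programmable logic, not in this Python code.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class CameraConsumer(Node):
    """Toy subscriber standing in for an accelerated perception node."""

    def __init__(self) -> None:
        super().__init__("camera_consumer")
        # '/camera/image_raw' is a conventional topic name; point it at the
        # Gazebo camera plugin or the live camera driver actually in use.
        self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        self.get_logger().info(f"received {msg.width}x{msg.height} frame")


def main() -> None:
    rclpy.init()
    rclpy.spin(CameraConsumer())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
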
YOLOv5 real-time AI inference processing with multiple cameras
  • Zebra® by Mipsology CNN accelerator
  • Using the latest “You Only Look Once” neural network, YOLOv5
  • Portability and scalability with the Zebra CNN accelerator (the unmodified PyTorch baseline is sketched below)
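
For reference, the baseline that Zebra is positioned to accelerate is standard framework-level YOLOv5 inference. The snippet below shows that unmodified PyTorch path, not Zebra's own interface; the sample image URL is illustrative.

import torch

# Fetches the small YOLOv5 model from the Ultralytics hub on first run.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Any local image path or URL works; this sample URL is illustrative.
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()                          # class counts and confidences
print(results.pandas().xyxy[0].head())   # bounding boxes as a DataFrame
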
Multi-task AI on Versal®
  • Running 4-channel real-time object detection, semantic segmentation, pose estimation, and pedestrian/face detection simultaneously on a single Versal AI device
  • Using Vitis AI 2.0 and AIE-based DPU on VCK190 board
  • Flexible and scalable to integrate into your design with the C++/Python API (a Python sketch follows below)
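
As a rough sketch of the Python API path, the snippet below follows the pattern of the Vitis AI Runtime (VART) examples for running a DPU-compiled .xmodel. The model file name and the zero-filled input are placeholders; a real application adds the model's pre- and post-processing.

import numpy as np
import vart
import xir

# Placeholder model file; any DPU-compiled .xmodel follows the same pattern.
graph = xir.Graph.deserialize("model.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_sg = [s for s in subgraphs
          if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]

runner = vart.Runner.create_runner(dpu_sg, "run")
in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# One dummy batch shaped like the model's input; real code would copy in a
# preprocessed camera frame instead of zeros.
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.int8)]
output_data = [np.empty(tuple(out_tensor.dims), dtype=np.int8)]

job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
print("raw output shape:", output_data[0].shape)
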
Smart Model Select Application on Kria SOM
  • Modified version of the Smart Model Select example from the Vitis™ Video Analytics SDK (VVAS)
  • Select from four video input sources: video file, RTSP stream, USB camera, or MIPI camera (AR1335)
  • Utilize sixteen classification and object detection models
  • Available on the Xilinx App Store, with complete application tutorials (a source-selection sketch follows below)
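
Since VVAS applications are GStreamer pipelines, the input-source selection in this demo amounts to swapping the source element. The sketch below uses only stock GStreamer elements and placeholder file/stream locations; the VVAS inference and overlay elements used by the real Smart Model Select application are omitted.

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Placeholder locations; substitute your own file, RTSP URL, or device node.
SOURCES = {
    "file": "filesrc location=sample.mp4 ! decodebin",
    "rtsp": "rtspsrc location=rtsp://192.168.1.10/stream ! decodebin",
    "usb":  "v4l2src device=/dev/video0",
}


def build_pipeline(kind: str) -> Gst.Element:
    # Display-only tail; the demo inserts VVAS inference elements here instead.
    description = f"{SOURCES[kind]} ! videoconvert ! autovideosink"
    return Gst.parse_launch(description)


Gst.init(None)
pipeline = build_pipeline("usb")
pipeline.set_state(Gst.State.PLAYING)

# Run until an error or end-of-stream, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
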
Lucid Industrial Camera with ZU+ in InFO package
  • Triton Edge all-in-one edge computing camera
  • Zynq UltraScale+ MPSoC (InFO package)
  • Accelerated OpenCV function running on PYNQ / Jupyter (see the sketch below)
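
As a rough idea of the PYNQ / Jupyter flow, the sketch below loads an overlay and hands frames to a hardware function. Overlay() and allocate() are standard PYNQ APIs; the bitstream name, the filter2d_accel IP name, and its call() interface are hypothetical placeholders.

import numpy as np
from pynq import Overlay, allocate

overlay = Overlay("image_filter.bit")      # placeholder overlay bitstream
accel = overlay.filter2d_accel             # hypothetical accelerator handle

# Physically contiguous buffers the accelerator can DMA from and to.
frame_in = allocate(shape=(1080, 1920), dtype=np.uint8)
frame_out = allocate(shape=(1080, 1920), dtype=np.uint8)

frame_in[:] = np.random.randint(0, 256, size=frame_in.shape, dtype=np.uint8)
accel.call(frame_in, frame_out)            # hypothetical driver invocation
print("first output pixels:", frame_out[0, :8])
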
Steel Plate Defect Recognition by DFI
  • “Industrial Pi”: the world’s first 1.8" board
  • Ultra-small SBC built to endure, tailored for industrial applications
  • Ubuntu-Certified: risk-free system updates and reduced software lead times for the IoT ecosystem
2U Industrial Box PC by ASUS IoT
  • Excellent display output: 4 x 4K DisplayPort outputs
  • Legacy options for industrial applications: GPIO ports, COM ports, dual Intel LAN
  • Rich expandability: a PCIe x8 slot, two M.2 sockets, and a lockable DC-in connector
Digital Signage Player by iBase
  • iSMART intelligent energy-saving & Observer remote monitoring technologies
  • 4x HDMI 2.0 with independent Audio output support
  • Built-in hardware EDID emulation function with software setting mode
  • Compact fanless design
Industrial Laptop by Winmate
  • Powered by AMD Ryzen™ Embedded V2516 Processor
  • 13.3-inch rugged laptop running at 2.1 GHz, up to 3.95 GHz
  • Fanless cooling system; 13.3” 1920 x 1080 LED panel with direct optical bonding and anti-glare technology for sunlight readability
  • Flip design for quick switching between laptop and tablet modes
  • Dual hot-swappable batteries for all-day work
  • Expansion slot supporting an optional second removable SSD and a smart card reader
AMD Xilinx Talk

On Day Two of Embedded World, Prof. Alok Gupta, AMD System Engineer, will join Ralph Grundler (Sr. Director Technical Marketing @ Flex Logix) and Russell Klein (Siemens EDA) for a HW Acceleration panel discussion on deep learning inference. More details on this session are below.


Session Topic: Manycore Acceleration Beyond GPU Architecture for Deep Learning Inference – AI Engine Core Versal (HW Acceleration)
Location: Halle 6 Raum 8 | only on-site
Time: Jun 22, 2022, 13:45 - 15:30 GMT+2
Speakers: Ralph Grundler (Sr. Director Technical Marketing @ Flex Logix), Russell Klein (Siemens EDA), and Prof Alok Gupta (AMD System Engineer)

With deep learning algorithmic advances outpacing hardware advances, how do you ensure that the algorithms of tomorrow are a good fit for the AI chips under development today? Most of these AI chips are being designed for the AI algorithms of today, and given the rate and magnitude of algorithm evolution, many of these designs may become obsolete even before their commercial release. The algorithms of tomorrow demand an overhaul of architecture, memory/data resources, and capabilities. For some implementations, the objective is to scrutinize and understand the data to identify trends (e.g., surveillance, data analysis, AI applications); for others, the intention is to take swift action based on the data (e.g., self-driving cars, smart Internet of Things, robotics/drones). For many of these applications, local processing near the data is preferred over the cloud due to privacy or latency concerns, or limitations in communication bandwidth. However, at the local processing end (inference engines), there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required so that the processing can be adapted to different applications and environments. The speaker will discuss how these industry challenges can be addressed by the Xilinx AI Engine architecture and its disruptive inference capability.