

Delivering innovative, reliable solutions for various ADAS applications through our platforms’ unparalleled adaptivity


Xilinx Powering Advanced Sensing Technologies

Many new functions of next-generation cars, like driver assistance, require the ability to perceive what’s happening around the vehicle in all driving conditions. Our flexible, re-programmable Automotive platforms are the ideal solution for handling these needs.

From distributed smart sensors and centralized multi-sensor fusion systems to highly integrated domain controllers, developers can scale their device selection to meet specific processing needs and cost targets. Our FPGAs, SoCs/MPSoCs, and Versal ACAPs offer parallel processing and integration, which provide a flexible solution to meet various safety and design requirements for automotive systems.

Our customer and partner collaborations are transforming the new era of driver-assisted vehicles, enabling cars to become more adaptive and intelligent.


Forward Camera

Forward Camera Evolution of Success

All driver-assisted situations – whether low-speed, on the freeway, or anywhere in between – require the highest level of safety and reliability from every component within the vehicle. Forward Camera Systems are seeing exponential growth in the automotive market, primarily due to their impact on vehicle safety. Our solutions are the platform of choice to enable not only today’s features, but to meet requirements for the next generation of innovation. Xilinx Zynq® SoC/MPSoC device families have remained well-aligned with Forward Camera processing requirements, which has allowed us to participate in multiple generations of Forward Camera production deployments.

The industry is transitioning from traditional Computer Vision (CV) algorithms to AI-based processing methods for Forward Camera perception. We are at the forefront of this transition and offer unparalleled capabilities through our unique programmable logic to host customers’ legacy CV, while at the same time enabling efficient AI engine processors with Versal ACAP – thus supplying the right engine for the right task. Scalable, application-optimized AI engine overlays for our programmable logic fabric are available from both Xilinx and our ecosystem partners.


GEN 2: Zynq®-7000

Camera: VGA/WVGA up to 2 Mpixel

Lane Departure Warning, Speed Alert, Collision Mitigation

Xilinx Value:

  • Optimal HW/SW Partitioning
  • Scalability
  • Differentiation

GEN 3: Zynq® UltraScale+™ MPSoC

Camera: Up to 4 Mpixel

Broader Protection:

  • e.g. Pedestrian/Cyclist Protection

Vehicle Convenience Control:

  • e.g. Traffic Jam Assist

Xilinx Value:

  • Heterogeneous Processors
  • Tightly Coupled Application SW and Custom HW Accelerators
  • Safety Island for FuSa Functions

Future: Versal ACAP

Camera: Up to 8 Mpixel

System Features:

  • Level 2/3 Automation
  • Urban and Highway Scenarios

Xilinx Value:

  • Higher Data Bandwidth Channels
  • High Performance / Low Power CNN Processing for Environment Cognition
  • Advancing FuSa

Surround View

Surround View Evolution of Success

Driver-assisted vehicles require surround view capabilities, specifically for low-speed driving situations such as parking assistance, object detection, and – in future generations – valet parking. Our platforms have powered advanced video processing in multiple generations of view enhancement and surround view systems.

We – along with our ecosystem partners – create silicon and IP solutions such as hardware-based video warping acceleration to enable efficient, low-latency distortion correction for fisheye lenses, perspective projection, and the stitching of multiple video frames. Combined with application software, this IP dynamically adjusts the position of a virtual flying camera to create a natural-looking image of the vehicle’s surroundings in a three-dimensional hemispheric view displayed in fine-detail resolution.
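The geometric core of such a warping stage can be sketched in a few lines of NumPy. This is an illustrative software model only, not Xilinx or partner IP: it assumes a simple equidistant fisheye projection (r = f·θ), and the function name and parameters are hypothetical. It computes, for every pixel of a rectilinear output view, the source coordinate in the fisheye image – exactly the per-pixel remap that hardware warping engines stream through at video rate.

```python
import numpy as np

def fisheye_remap(out_w, out_h, out_focal, fish_focal, fish_cx, fish_cy):
    """Map each rectilinear output pixel to its source coordinate in an
    equidistant-fisheye image (r = f * theta). Returns (map_x, map_y)."""
    # Rays through each output pixel of an ideal pinhole camera
    xs = (np.arange(out_w) - out_w / 2.0) / out_focal
    ys = (np.arange(out_h) - out_h / 2.0) / out_focal
    x, y = np.meshgrid(xs, ys)
    theta = np.arctan(np.hypot(x, y))   # angle of the ray from the optical axis
    phi = np.arctan2(y, x)              # azimuth of the ray around the axis
    r = fish_focal * theta              # equidistant fisheye: radius grows with theta
    map_x = fish_cx + r * np.cos(phi)
    map_y = fish_cy + r * np.sin(phi)
    return map_x, map_y
```

A real surround-view pipeline would follow this remap with bilinear sampling and blend the four corrected camera frames into the projected bowl view; moving the virtual flying camera simply means regenerating the remap tables.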

Xilinx and our ecosystem partners’ solutions are constantly evolving and improving, all while ensuring safety, security, and reliability are always present when powering next-generation ADAS systems.


GEN 1: Spartan®-6

Bird's Eye View

Display: VGA/HD

Camera: 4ch VGA / 1MPixel


  • 2D Surround (Bird’s Eye View)
  • Video Processing (Sensor Interfaces, Pixel Manipulation, JPEG Overlay, Display Drive)

GEN 2: Zynq®-7000

3D View

Display: VGA/HD

Camera: 4ch 1MPixel


  • 3D Surround (Bowl View)
  • Parking Assistance and Rear Backup Camera Functionality with Overlays
  • Video Processing and Image Analytics for Assistance Beyond “View Only”
    • e.g.  Trailer Guidance Assistance

GEN 3: Zynq® UltraScale+™ MPSoC

Obstacle Detection

Display: Full HD & Beyond

Camera: 4ch+ up to 2MPixel

More Adaptability:

  • Dynamic 3D Surround (Flying Camera)
  • Hi-Res Graphic Animation
  • Advanced Trailer Hitch
  • Sensor Fusion
  • Machine Vision Object Detection to Enable Vehicle Control and Automatic Emergency Braking (AEB) for Low-Speed Conditions
  • FuSa Compliant

Future: Versal ACAP

Valet Parking

Display: VGA to 4K Ultra HD

Camera: up to 8ch 4MPixel


  • Next Gen 3D Surround (Enhanced HMI)
  • Automated Vehicle: Valet Parking
  • Autonomous Control
  • Machine Learning/CNN-Based Object Classification and Perception
  • Extended Sensor Fusion

Powering Next-Generation RADAR

Multiple types of RADAR sensors – whether short-range, mid-range, or long-range – are required for a complete driver-assisted vehicle architecture. 4D RADAR demands extensive use of simultaneous processing pipelines, which can be realized in Xilinx programmable logic fabric. Our devices enabled first-generation deployments of automotive RADAR sensors and are now being adopted for the next generation of RADAR sensor processing and control.


4D Imaging RADAR

Higher resolution performance in all four dimensions (range, azimuth, elevation, and Doppler) is necessary to support the localization and mapping needs of assisted driving systems. Transitioning from legacy 2D RADAR to 4D Imaging RADAR requires processing multiple receive channels in parallel, which calls for hardware-accelerated solutions that offer independent, pipelined data paths.

The programmable logic and numerous DSP blocks in our SoCs/MPSoCs as well as Versal ACAPs support the parallel processing necessary for independent, yet simultaneous, pipelines to keep up with the very high data bandwidths of real-time sensor input. The programmability of this architecture enables end product differentiation with complete control of the IP, helping system designers keep up with evolving feature requirements.
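As an illustration of what these pipelines compute, the standard range/Doppler FFT stages of an FMCW radar can be sketched in NumPy. This is a software model only – in an actual sensor each channel's FFT chain maps to its own DSP/logic data path – and the array shapes and function name are illustrative assumptions, not a Xilinx reference design.

```python
import numpy as np

def range_doppler_map(adc):
    """Compute a range-Doppler magnitude map per receive channel.

    adc: complex baseband samples, shape (channels, chirps, samples_per_chirp).
    Each channel is processed independently, mirroring the parallel
    hardware pipelines described above.
    """
    # Range FFT along fast time (samples within one chirp):
    # beat frequency -> range bin
    rng = np.fft.fft(adc, axis=2)
    # Doppler FFT along slow time (across chirps), shifted so that
    # zero radial velocity sits in the center row
    dop = np.fft.fftshift(np.fft.fft(rng, axis=1), axes=1)
    return np.abs(dop)
```

In a 4D imaging radar, a further FFT (or beamforming step) across the antenna channels would resolve azimuth and elevation; windowing before each FFT, omitted here for brevity, controls sidelobe levels.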


Xilinx is the Global Leader in Providing Scalable Solutions for LiDAR

Driver-assisted systems require the use of various types and quantities of LiDAR, RADAR, and Camera systems, providing sensor redundancy to mitigate false positives. At the heart of these redundant systems is our flexible and scalable programmable technology.

Our approach to addressing the needs of the LiDAR market is unique. Given that our products are software and hardware re-programmable, we can successfully meet the ever-changing system requirements of the LiDAR market with our unparalleled adaptability.

Multiple companies are using an array of Xilinx devices to develop innovative, differentiated solutions for the LiDAR market. Here is a glimpse:


LiDAR Companies


Headquarters: USA

Type of LiDAR: Scanning, Solid-state, Multi-beam flash

Xilinx Product Family: Zynq®-7000


  • Ouster uses an all-semiconductor design, dramatically simplifying the sensor’s architecture to make it cheaper, smaller, and more durable without sacrificing performance
  • Can capture ambient light, allowing it to produce 2D images (like a camera) using only the LiDAR sensor

Headquarters: USA

Type of LiDAR: Scanning, or Hybrid Solid-state

Xilinx Product Family: Zynq-7000


  • Hybrid solid-state design to achieve a balance of visibility, Field of View (FoV), and frame rate while maintaining automotive-grade reliability
  • Strong believer in sensor fusion to achieve L4 and beyond

Headquarters: USA

Type of LiDAR: FMCW 3D imaging, Solid-state, Scanning

Xilinx Product Family: Zynq UltraScale+ RFSoC


  • Silicon photonics chip level integrated optical engine built from the ground up for 3D imaging, complemented by FMCW ranging
  • Enables high performance at low cost, power, and form factor

Headquarters: Canada

Type of LiDAR: Solid-state flash

Xilinx Product Family: Zynq® UltraScale+™ MPSoC


  • Phantom Intelligence digital processing technologies greatly improve the detection, reliability, and range of conventional analog LiDAR for more confidence in threat assessment
  • Unique IP licensing model for automotive LiDAR and sensor fusion

Headquarters: Israel

Type of LiDAR: Semiconductor-based Solid-state Scanning

Xilinx Product Family: Zynq UltraScale+ RFSoC


  • True solid-state with no moving parts; semiconductor-based
  • Ultrafast scanning with 1000 FPS raw data

Headquarters: Germany

Type of LiDAR: Solid-state, Scanning, MEMS, 905nm

Xilinx Product Family: Zynq UltraScale+ MPSoC


  • Proprietary silicon MEMS mirror technology specifically developed for LiDAR applications to provide high resolution, long detection range, and a wide field of view
  • Targeted at automotive-grade performance with the cost and size needed for the mass market


Headquarters: Australia

Type of LiDAR: Spectrum-Scan™

Xilinx Product Family: Zynq-7000


  • Spectrum-Scan is a software-defined LiDAR, giving the user full programmatic control over how the vehicle sees by changing the LiDAR resolution, FoV, and focus of attention on the fly
  • Spectrum-Scan LiDAR provides inherent protection against interference through the unique use of multiple light wavelengths. This creates multiple filters for rejecting interference from sunlight or other LiDAR

Headquarters: China

Type of LiDAR: Mechanical, Solid-state MEMS & Flash

Xilinx Product Family: Zynq UltraScale+ MPSoC


  • High resolution Solid-state LiDAR for automotive usages
  • Smart LiDAR system with the LiDAR HW and AI perception algorithm SW

Headquarters: China

Type of LiDAR: Solid-state Scanning

Xilinx Product Family: Artix®-7 and Zynq-7000


  • Successfully achieved full interference rejection; the technology has been a standard production feature since August 2018
  • Of the 62 companies in California with self-driving car test permits, over 1/3 are Hesai’s customers

Headquarters: China

Type of LiDAR: Solid-state

Xilinx Product Family: Zynq UltraScale+ MPSoC


  • Designed for real, autonomous-grade reliability
  • The most competitive performance thus far in MEMS LiDAR

Headquarters: China

Type of LiDAR: Solid-state, LiDAR Distance Sensor, Long Range Solid-state

Xilinx Product Family: Spartan®-6


Headquarters: China

Type of LiDAR: Navigation, Mapping

Xilinx Product Family: Spartan-6, Kintex®-7

In-Cabin Monitoring

Propelling AI within In-Cabin Occupant Monitoring

In-vehicle driver and occupant monitoring systems rely on AI inference to quickly and accurately identify vehicle occupants and recognize their emotions, gestures, interior preferences, and much more. These systems, especially when constrained by adverse thermal environments, rely on power-efficient solutions. They also require low latency to respond quickly to occupant gestures. Our Zynq® UltraScale+™ MPSoCs and Versal™ ACAPs are an ideal platform for AI acceleration and provide the additional flexibility needed to customize the in-cabin experience.

You can experience our technology first-hand in the new Mercedes-Benz GLE and CLA models.

MBUX Interior Assistant:

  • AI-based gesture input system, powered by Zynq UltraScale+ MPSoCs and Versal ACAPs
  • Data-flow based AI engine avoids storing data to reduce latency
  • Recognizes the occupants’ natural movements so the vehicle can predict driver and passenger requests
  • Distinguishes between driver and passenger gestures
  • Reacts to body language to automate comfort functions