Many new functions of next-generation cars, like driver assistance, demand an ever-greater ability to perceive what’s happening around the vehicle in all driving conditions. Xilinx is ideally positioned to handle these needs through our flexible and re-programmable Automotive platforms.
From distributed smart sensors and centralized multi-sensor fusion systems to highly integrated domain controllers, developers can scale their device selection to meet specific processing needs and cost targets. Our FPGAs and SoCs/MPSoCs offer parallel processing and high integration, providing a flexible solution that meets the varied safety and design requirements of automotive systems.
Our customer and partner collaborations are driving a new era of driver-assisted vehicles, enabling cars to become more adaptive and intelligent.
All driver-assisted situations – whether low-speed, on the freeway, or anywhere in between – require the highest level of safety and reliability from every component within the vehicle. Forward Camera Systems are seeing exponential growth in the automotive market, primarily due to their impact on vehicle safety. Our solutions are the platform of choice, enabling not only today’s features but also the requirements of the next generation of innovation. Xilinx Zynq® SoC/MPSoC device families have remained well-aligned with Forward Camera processing requirements, which has allowed us to participate in multiple generations of Forward Camera production deployments.
The industry is experiencing a transition from traditional Computer Vision (CV) algorithms to AI-based processing methods for Forward Camera perception. We are at the forefront of this transition and offer unparalleled capabilities through our unique programmable logic to host customers’ legacy CV, while at the same time enabling the efficient AI Engine processors of Versal ACAP – thus supplying the right engine for the right task. Scalable, application-optimized AI engine overlays for our programmable logic fabric are available from both Xilinx and our ecosystem partners.
Driver-assisted vehicles require surround view capabilities, specifically for low-speed driving situations such as parking assistance, object detection, and – in future generations – valet parking. Our platforms have powered advanced video processing in multiple generations of view enhancement and surround view systems.
We – along with our ecosystem partners – create silicon and IP solutions such as hardware-based video warping acceleration to enable efficient, low-latency distortion correction for fisheye lenses, perspective projection, and the stitching of multiple video frames. Combined with application software, this IP dynamically adjusts the position of a virtual flying camera to create a natural-looking image of the vehicle’s surroundings in a finely detailed three-dimensional hemispheric view.
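To illustrate the distortion-correction step described above, here is a minimal NumPy sketch of remapping an equidistant-model fisheye image to a rectilinear view. This is an assumption-laden illustration, not Xilinx IP: the function name, the equidistant lens model, and the nearest-neighbor sampling are all simplifications (a hardware warp engine would use a precomputed mesh and interpolated sampling).

```python
import numpy as np

def fisheye_to_rectilinear(src, f_fish, f_rect):
    """Remap an equidistant fisheye image to a rectilinear view.

    For each output pixel, compute its viewing angle theta from the
    optical axis (rectilinear model: radius = f_rect * tan(theta)),
    then sample the fisheye image at radius r = f_fish * theta
    (the equidistant fisheye projection model).
    """
    h, w = src.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    dx, dy = xs - cx, ys - cy
    r_rect = np.hypot(dx, dy)                  # radius in the output image
    theta = np.arctan2(r_rect, f_rect)         # angle from the optical axis
    r_fish = f_fish * theta                    # corresponding fisheye radius
    # Scale output-pixel offsets onto the fisheye image (1.0 at the center).
    scale = np.divide(r_fish, r_rect, out=np.ones_like(r_rect),
                      where=r_rect > 0)
    map_x = np.clip(np.rint(cx + dx * scale), 0, w - 1).astype(int)
    map_y = np.clip(np.rint(cy + dy * scale), 0, h - 1).astype(int)
    return src[map_y, map_x]                   # nearest-neighbor sampling
```

In a surround-view system this per-pixel mapping is typically baked into a lookup mesh so the hardware only performs the memory remap and blending at video rate.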
Xilinx and our ecosystem partners’ solutions are constantly evolving and improving, all while ensuring safety, security, and reliability are always present when powering next-generation ADAS systems.
Display: Full HD & Beyond
Camera: 4ch+ up to 2MPixel
Multiple types of RADAR sensors, whether short-range, mid-range, or long-range, are required for a complete driver-assisted vehicle architecture. 4D RADAR demands extensive use of simultaneous processing pipelines, which can be realized in Xilinx programmable logic fabric. Our devices enabled first-generation deployments of automotive RADAR sensors and are now being adopted for the next generation of RADAR sensor processing and control.
Higher resolution performance in all four dimensions (range, azimuth, elevation, and Doppler) is necessary to support localization and mapping needs for assisted driving systems. Transitioning from legacy 2D RADAR to 4D Imaging RADAR results in the need to process multiple data receive channels in parallel. This calls for hardware-accelerated solutions that offer independent pipelined data paths.
The programmable logic and numerous DSP blocks in our SoCs/MPSoCs support the parallel processing necessary for independent, yet simultaneous, pipelines to keep up with the very high data bandwidths of real-time sensor input. The programmability of this architecture enables end product differentiation with complete control of the IP, helping system designers keep up with evolving feature requirements.
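The parallel pipelines described above can be sketched in a few lines of NumPy. This is a hypothetical illustration of the standard range-Doppler processing structure, not a Xilinx implementation: each receive channel gets an independent range FFT over its fast-time samples followed by a Doppler FFT across chirps, which is exactly the kind of per-channel pipeline that maps onto parallel DSP blocks in programmable logic.

```python
import numpy as np

def range_doppler_map(adc_cube):
    """Compute per-channel range-Doppler maps from raw RADAR ADC data.

    adc_cube: complex array of shape (channels, chirps, samples).
    Each receive channel is an independent pipeline: a range FFT over
    the fast-time samples, then a Doppler FFT across chirps. In an FPGA
    these pipelines run simultaneously in fabric; here NumPy processes
    the channel axis in one vectorized pass. (Azimuth and elevation, the
    remaining two of the four dimensions, come from a further FFT or
    beamforming step across the channel axis.)
    """
    range_fft = np.fft.fft(adc_cube, axis=2)      # fast time -> range bins
    doppler_fft = np.fft.fftshift(                # slow time -> Doppler bins,
        np.fft.fft(range_fft, axis=1), axes=1)    # zero velocity centered
    return np.abs(doppler_fft)
```

Because every channel is processed by an identical, data-independent pipeline, the fabric can replicate the datapath once per channel and sustain full sensor bandwidth without time-multiplexing.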
Driver-assisted systems require the use of various types and quantities of LiDAR, RADAR, and Camera systems, providing sensor redundancy to mitigate false positives. At the heart of these redundant systems is our flexible and scalable programmable technology.
Our approach to addressing the needs of the LiDAR market is unique. Given that our products are software and hardware re-programmable, we can successfully meet the ever-changing system requirements of the LiDAR market with our unparalleled adaptability.
Multiple companies are using an array of Xilinx devices to develop innovative, differentiated solutions for the LiDAR market. Here is a glimpse:
Type of LiDAR: Scanning, Solid-state, Multi-beam flash
Xilinx Product Family: Zynq-7000®
Type of LiDAR: Solid-state flash
Xilinx Product Family: Zynq® UltraScale+™ MPSoC
Type of LiDAR: Semiconductor-based Solid-state Scanning
Xilinx Product Family: Zynq UltraScale+ RFSoC
Type of LiDAR: Spectrum-Scan™
Xilinx Product Family: Zynq-7000
Type of LiDAR: Mechanical, Solid-state MEMS & Flash
Xilinx Product Family: Zynq UltraScale+ MPSoC
Type of LiDAR: Solid-state Scanning
Xilinx Product Family: Artix®-7 and Zynq-7000
Type of LiDAR: Solid-state
Xilinx Product Family: Zynq UltraScale+ MPSoC
Type of LiDAR: Solid-state, LiDAR Distance Sensor, Long Range Solid-state
Xilinx Product Family: Spartan®-6
Type of LiDAR: Navigation, Mapping
Xilinx Product Family: Spartan-6, Kintex®-7
In-vehicle driver and occupant monitoring systems rely on AI inference to quickly and accurately determine vehicle occupants’ identities, emotions, gestures, interior preferences, and much more. These systems, especially when constrained by adverse thermal environments, rely on power-efficient solutions. They also require low latency to provide fast responses to occupant gestures. Our MPSoC devices are an ideal platform for AI acceleration and provide the additional flexibility needed to customize the in-cabin experience.
MBUX Interior Assistant: