The Vitis™ AI development environment is Xilinx's platform for AI inference on Xilinx hardware, including both edge devices and Alveo™ cards. It consists of optimized IP, tools, libraries, models, and example designs, and is built with high efficiency and ease of use in mind, unleashing the full potential of AI acceleration on Xilinx FPGAs and ACAPs.
The AI Model Zoo offers all users rich, off-the-shelf deep learning models from the most popular frameworks: PyTorch, TensorFlow, TensorFlow 2, and Caffe. It provides optimized and retrainable AI models with which you can achieve faster deployment, performance acceleration, and productization on all Xilinx platforms.
With world-leading model compression technology, we can reduce model complexity by 5x to 50x with minimal accuracy impact. Deep Compression takes the performance of your AI inference to the next level.
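The AI Optimizer implements this compression through its own pruning flow; the sketch below is only a generic illustration of the underlying idea, using PyTorch's built-in magnitude-pruning utilities (not the Vitis AI Optimizer API) to remove low-magnitude convolution weights before fine-tuning.

```python
# Generic magnitude-pruning sketch (illustration only; not the Vitis AI Optimizer API).
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18

model = resnet18(weights=None)  # in practice, start from a trained model

# Remove 50% of the smallest-magnitude weights in every Conv2d layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# The pruned model would then be fine-tuned to recover any lost accuracy.
```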
By converting the 32-bit floating-point weights and activations to fixed-point formats such as INT8, the AI Quantizer can reduce computing complexity without losing prediction accuracy. The fixed-point network model requires less memory bandwidth, thus providing faster speed and higher power efficiency than the floating-point model.
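The core idea can be seen in a few lines of arithmetic. The sketch below is a simplified, symmetric per-tensor scheme, not the Vitis AI Quantizer itself: it maps float32 values onto INT8 with a single scale factor, whereas the real quantizer calibrates such scales from sample data and emits a fixed-point model for the DPU.

```python
# Simplified symmetric per-tensor INT8 quantization (conceptual sketch only).
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float32 values to INT8 with one scale per tensor."""
    scale = np.abs(x).max() / 127.0                       # largest magnitude maps to 127
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(64, 3, 3, 3).astype(np.float32)  # stand-in for trained weights
q_weights, scale = quantize_int8(weights)
print("max reconstruction error:",
      np.abs(dequantize(q_weights, scale) - weights).max())
```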
The AI Compiler maps the AI model to a highly efficient instruction set and dataflow. It also performs sophisticated optimizations such as layer fusion and instruction scheduling, and reuses on-chip memory as much as possible.
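Layer fusion is the easiest of these optimizations to picture. The sketch below shows the classic example of folding a BatchNorm layer into the preceding convolution, which a compiler can perform automatically as one of many graph rewrites; the helper is illustrative PyTorch, not the AI Compiler's internal representation.

```python
# Conceptual layer-fusion sketch: fold BatchNorm statistics into a Conv2d layer
# so that one layer (and one memory round-trip) disappears at inference time.
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    w = conv.weight.detach()
    b = conv.bias.detach() if conv.bias is not None else torch.zeros(conv.out_channels)
    gamma, beta = bn.weight.detach(), bn.bias.detach()
    mean, var, eps = bn.running_mean, bn.running_var, bn.eps
    std = torch.sqrt(var + eps)
    # BN(conv(x)) = (gamma/std) * (W*x + b - mean) + beta
    fused.weight.data = w * (gamma / std).reshape(-1, 1, 1, 1)
    fused.bias.data = (b - mean) / std * gamma + beta
    return fused
```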
The AI Profiler allows programmers to perform an in-depth analysis of the efficiency and utilization of their AI inference implementation.
The Vitis AI Library is a set of high-level libraries and APIs built for efficient AI inference with the Deep-Learning Processor Unit (DPU). It is built on the Vitis AI Runtime with unified APIs and provides easy-to-use interfaces for deploying AI models on Xilinx platforms.
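At the runtime level, deploying a compiled model typically looks like the sketch below. It assumes the VART Python bindings and a DPU-compiled "resnet50.xmodel"; the file name, tensor dtypes, and pre/post-processing are placeholders that depend on the actual model, and the higher-level Vitis AI Library wraps this pattern behind task-specific classes.

```python
# Minimal sketch of DPU inference with the VART (Vitis AI Runtime) Python API.
# "resnet50.xmodel" is a placeholder for a model compiled for your DPU target.
import numpy as np
import xir
import vart

graph = xir.Graph.deserialize("resnet50.xmodel")
dpu_subgraphs = [s for s in graph.get_root_subgraph().toposort_child_subgraph()
                 if s.has_attr("device") and s.get_attr("device").upper() == "DPU"]
runner = vart.Runner.create_runner(dpu_subgraphs[0], "run")

# Allocate host buffers matching the model's input/output tensor shapes.
in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.int8)]   # fill with preprocessed data
output_data = [np.zeros(tuple(out_tensor.dims), dtype=np.int8)]

# Submit the job to the DPU and wait for completion.
job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
# output_data[0] now holds the raw (quantized) network outputs for post-processing.
```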
With Vitis AI, it is now possible to achieve real-time processing with 3D perception AI algorithms on embedded platforms. Co-optimization of hardware and software delivers leading performance for the state-of-the-art PointPillars model on the Xilinx ZU+ MPSoC.
Latency is critical to decision-making for autonomous vehicles running at high speed and encountering obstacles. With an innovative domain-specific accelerator and software optimization, Vitis AI empowers autonomous vehicles to process deep learning algorithms with ultra-low latency and high performance.
With strong scalability and adaptability across low-end to high-end ADAS products, Vitis AI delivers industry-leading performance for popular AI algorithms such as object detection, lane detection, and segmentation in front-ADAS, in-cabin, and surround-view systems.
Cities are increasingly employing intelligence-based systems at the edge and in the cloud. The massive amount of data generated every day requires a powerful end-to-end AI analytics system to quickly detect and process objects, traffic, and facial behavior, adding valuable insight to each frame from edge to cloud.
Vitis AI offers powerful tools and IP to uncover and identify hidden patterns in medical image data, helping to fight disease and improve health.