Installation

Downloading the Vitis AI Library

The Vitis™ AI Library package can be downloaded for free from the Vitis AI repository on GitHub.

Note: To become familiar with the product, use an Alveo™ card or an evaluation board that supports the Vitis AI Library. See the AI Developer Hub for more details about supported evaluation boards, and the Alveo Accelerator Cards product page for more details about Alveo cards.

This release supports the following evaluation boards:

  • Xilinx® ZCU102
  • Xilinx ZCU104
  • Xilinx VCK190

This release supports the following Alveo cards:

  • Alveo U50
  • Alveo U50LV
  • Alveo U200
  • Alveo U250
  • Alveo U280

Setting Up the Host

For Edge

Use the following steps to set up the host for Edge device development.
  1. Download the sdk-2020.2.0.0.sh from here.
  2. Install the cross-compilation system environment.
    ./sdk-2020.2.0.0.sh
  3. Follow the prompts to install.
    Note: The ~/petalinux_sdk path is recommended for the installation. Regardless of the path you choose for the installation, make sure the path has read-write permissions. In this section, it is installed in ~/petalinux_sdk.
  4. When the installation is complete, follow the prompts and execute the following command:
    source ~/petalinux_sdk/environment-setup-aarch64-xilinx-linux
    If you close the current terminal, you must re-run the above command in the new terminal.
  5. Download the vitis_ai_2020.2-r1.3.x.tar.gz from here and install it to the PetaLinux system.
    tar -xzvf vitis_ai_2020.2-r1.3.x.tar.gz -C ~/petalinux_sdk/sysroots/aarch64-xilinx-linux
  6. Clone the Vitis AI repository:
    cd ~
    git clone --recurse-submodules https://github.com/Xilinx/Vitis-AI
  7. To compile a library sample in the Vitis AI Library, taking classification as an example, execute the following commands:
    cd ~/Vitis-AI/demo/Vitis-AI-Library/samples/classification
    bash -x build.sh

    The executable program is now produced.

  8. To modify the library source code, view and edit the files under ~/Vitis-AI/tools/Vitis-AI-Library.

    Before compiling the Vitis AI libraries, confirm the compiled output path. The default output path is $HOME/build.

    To change the default output path, modify the build_dir_default in cmake.sh. For example, you can change from build_dir_default=$HOME/build/build.${target_info}/${project_name} to build_dir_default=/workspace/build/build.${target_info}/${project_name}.

    Note: If you modify build_dir_default, it is suggested that you change only the $HOME portion of the path.
  9. Build the libraries all at once by executing the following command.
    cd ~/Vitis-AI/tools/Vitis-AI-Library
    ./cmake.sh --clean

    After compiling, you can find the generated AI libraries under build_dir_default. If you want to change the compilation rules, check and change the cmake.sh in the library’s directory.

    Note: All the source code, samples, demos, and header files can be found under ~/Vitis-AI/tools/Vitis-AI-Library.
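
The environment setup in step 4 only applies to the current terminal. As a convenience, the command can be appended to your shell profile so new terminals pick it up automatically. The following is a sketch that assumes the ~/petalinux_sdk installation path used in this section:

```shell
# Append the SDK environment setup to ~/.bashrc (only once) so every new
# terminal sources the cross-compilation environment automatically.
# Assumes the ~/petalinux_sdk installation path used above.
profile="$HOME/.bashrc"
line='source ~/petalinux_sdk/environment-setup-aarch64-xilinx-linux'
grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
```

If you later install the SDK to a different path, update or remove this line in ~/.bashrc.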

For Cloud (U50/U50LV/U280)

Set up the host on the Cloud by running the docker image.
  1. Clone the Vitis AI repository.
    git clone --recurse-submodules https://github.com/Xilinx/Vitis-AI
    cd Vitis-AI
  2. Run Docker container according to the instructions in the docker installation guide.
    ./docker_run.sh -X xilinx/vitis-ai-cpu:<x.y.z>
    Note: A workspace folder is created by the docker runtime system and is mounted in /workspace of the docker runtime system.
  3. Place the programs, data, and other files to be developed in the workspace folder. After the docker system starts, they can be found under /workspace of the docker system.

    Do not put the files in any other path of the docker system. They will be lost after you exit the docker system.

  4. Select the model for your platform. You can find the latest model download links in each model's yaml file under Vitis-AI/models/AI-Model-Zoo.
    • If the /usr/share/vitis_ai_library/models folder does not exist, create it first.
      sudo mkdir -p /usr/share/vitis_ai_library/models
    • For DPUCAHX8H of U50, take resnet_v1_50_tf as an example.
      wget https://www.xilinx.com/bin/public/openDownload?filename=resnet_v1_50_tf-u50-r1.3.0.tar.gz -O resnet_v1_50_tf-u50-r1.3.0.tar.gz
      tar -xzvf resnet_v1_50_tf-u50-r1.3.0.tar.gz
      sudo cp -r resnet_v1_50_tf /usr/share/vitis_ai_library/models
    • For DPUCAHX8L of U50LV, take resnet_v1_50_tf as an example.
      wget https://www.xilinx.com/bin/public/openDownload?filename=resnet_v1_50_tf-u50-u50lv-u280-v3me-r1.3.0.tar.gz -O resnet_v1_50_tf-u50-u50lv-u280-v3me-r1.3.0.tar.gz
      tar -xzvf resnet_v1_50_tf-u50-u50lv-u280-v3me-r1.3.0.tar.gz
      sudo cp -r resnet_v1_50_tf /usr/share/vitis_ai_library/models
  5. Download the cloud xclbin package from here. Untar it, select the Alveo card, and install it. Take U50 as an example.
    tar -xzvf alveo_xclbin-1.3.0.tar.gz
    cd alveo_xclbin-1.3.0/U50/6E300M
    sudo cp dpu.xclbin hbm_address_assignment.txt /usr/lib
    For DPUCAHX8L, take the U50LV as an example.
    tar -xzvf alveo_xclbin-1.3.0.tar.gz
    cd alveo_xclbin-1.3.0/U50lv-V3ME/1E250M
    sudo cp dpu.xclbin /opt/xilinx/overlaybins/
    export XLNX_VART_FIRMWARE=/opt/xilinx/overlaybins/dpu.xclbin
  6. If more than one card is installed on the server and you want to run the program on specific cards, set the XLNX_ENABLE_DEVICES environment variable. Its usage is as follows:
    • export XLNX_ENABLE_DEVICES=0 (use only device 0 for the DPU)
    • export XLNX_ENABLE_DEVICES=0,1,2 (use devices 0, 1, and 2 for the DPU)
    • If you do not set this environment variable, all devices are used for the DPU by default.
  7. To compile a library sample in the Vitis AI Library, taking classification as an example, execute the following commands:
    cd /workspace/demo/Vitis-AI-Library/samples/classification
    bash -x build.sh

    The executable program is now produced.

  8. To modify the library source code, view and edit the files under /workspace/tools/Vitis-AI-Library.

    Before compiling the AI libraries, confirm the compiled output path. The default output path is: $HOME/build.

    If you want to change the default output path, modify the build_dir_default in cmake.sh. For example, change build_dir_default=$HOME/build/build.${target_info}/${project_name} to build_dir_default=/workspace/build/build.${target_info}/${project_name}.

    Note: If you modify build_dir_default, it is suggested that you change only the $HOME portion of the path.

    Execute the following command to build the libraries all at once:

    cd /workspace/tools/Vitis-AI-Library
    ./cmake.sh --clean

    After compiling, you can find the generated AI libraries under build_dir_default. If you want to change the compilation rules, check and change the cmake.sh in the library’s directory.
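
The build_dir_default change described in step 8 can also be made with a one-line sed substitution. The following is an illustrative sketch only: it applies the expression to a sample line so the effect is visible; to make the real change, run the same sed expression against cmake.sh (after backing it up).

```shell
# Sketch: rewrite build_dir_default so build output goes under /workspace/build
# instead of $HOME/build. Demonstrated on a sample line; apply the same sed
# expression to cmake.sh to make the actual change.
sample='build_dir_default=$HOME/build/build.${target_info}/${project_name}'
echo "$sample" | sed 's|\$HOME/build|/workspace/build|'
# prints: build_dir_default=/workspace/build/build.${target_info}/${project_name}
```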

Scaling Down the Frequency of the DPU

Due to the power limitation of the cards, not all CNN models can run at the highest frequency on every Alveo card. Sometimes a frequency scaling-down operation is necessary.

The DPU core clock is generated from an internal DCM module driven by the platform Clock_1 with the default value of 100 MHz, and the core clock is always linearly proportional to Clock_1. For example, in U50LV-10E275M overlay, the 275 MHz core clock is driven by the 100 MHz clock source. So, to set the core clock of this overlay to 220 MHz, set the frequency of Clock_1 to (220/275)*100 = 80 MHz.
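
The scaling rule above is plain proportional arithmetic, which can be checked from the shell:

```shell
# Compute the Clock_1 frequency (MHz) needed for a desired DPU core clock,
# using the linear relation described above: Clock_1 = target / overlay_max * 100.
# Values are the U50LV-10E275M example from this section.
target_mhz=220        # desired DPU core clock
overlay_max_mhz=275   # maximum core clock of the overlay
clock1_mhz=$(( target_mhz * 100 / overlay_max_mhz ))
echo "$clock1_mhz"    # prints 80
```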

You can use the XRT xbutil tool to scale down the running frequency of the DPU overlay before you run the VART/Library examples. Before the frequency scaling-down operation, the overlay must first be programmed into the FPGA. Refer to the following example commands to program the FPGA and scale down the frequency. These commands set Clock_1 to 80 MHz and can be run on the host or in the docker.

/opt/xilinx/xrt/bin/xbutil program -p /usr/lib/dpu.xclbin
/opt/xilinx/xrt/bin/xbutil clock -d0 -g 80

-d0 specifies the Alveo card device number. For more information about the xbutil tool, see the XRT documentation.

For Cloud (U200/U250)

Set up the host on the cloud by running the docker image.
  1. Clone the Vitis AI repository.
    $ git clone --recurse-submodules https://github.com/Xilinx/Vitis-AI
    $ cd Vitis-AI
  2. Run Docker container according to the instructions in the docker installation guide.
    $ ./docker_run.sh -X xilinx/vitis-ai-cpu:<x.y.z>
    Note: A workspace folder is created by the docker runtime system, and is mounted in /workspace of the docker runtime system.
  3. Activate the conda environment.
    $ conda activate vitis-ai-caffe
  4. To modify the library source code, view and edit the files under /workspace/Vitis-AI-Library. Before compiling the AI libraries, confirm the compiled output path. The default output path is $HOME/build. If you want to change the default output path, modify the build_dir_default in cmake.sh. For example, change build_dir_default=$HOME/build/build.${target_info}/${project_name} to build_dir_default=/workspace/build/build.${target_info}/${project_name}.
    Note: If you modify build_dir_default, it is suggested that you change only the $HOME portion of the path.
  5. Execute the following command to build all DPUCADX8G supported examples in the AI Library.
    $ cd /workspace/Vitis-AI-Library
    $ ./cmake.sh --clean --type=release --cmake-options=-DCMAKE_PREFIX_PATH=$CONDA_PREFIX --cmake-options=-DENABLE_DPUCADX8G_RUNNER=ON

After successful building, you can find the generated AI libraries and executables under build_dir_default.

Note: If you want to change the compilation rules, check and change the cmake.sh in the library's directory.

AI Library File Locations

The following table shows the AI Library file location after the installation is complete.

Table 1. AI Library File Location List
Files                          Location
Source code of the libraries   /workspace/tools/Vitis-AI-Library
Samples                        /workspace/demo/Vitis-AI-Library/samples
Apps                           /workspace/demo/Vitis-AI-Library/apps
Test                           /workspace/tools/Vitis-AI-Library/[model]/test
The following symbols/abbreviations are used.
  • /workspace/ is the path where the AI Library compressed package is extracted in the docker system.
  • “Samples” are used for rapid application construction and evaluation, and are intended for users.
  • “Apps” provide more practical examples for user development, and are also intended for users.
  • “Test” contains a test example for each model library, intended for library developers.

Setting Up the Target

There are three steps to set up the target: first, install the board image; second, install the AI model package; and third, install the Vitis AI Library package.

To improve the user experience, the Vitis AI Runtime packages, Vitis-AI-Library samples, and models are built into the board image. Therefore, you do not have to install Vitis AI Runtime packages and model package on the board separately. However, you can still install the model or Vitis AI Runtime on your own image or on the official image by following these steps.

Note: The version of the board image should be 2020.2 or above.

Step 1: Installing a Board Image

For ZCU102, the system image can be downloaded from here; for ZCU104, it can be downloaded from here. One suggested software application for flashing the SD card is Etcher, a cross-platform tool for flashing OS images to SD cards, available for Windows, Linux, and Mac systems. The following example uses Windows.
  1. Download Etcher from: https://etcher.io/ and save the file as shown in the following figure.

  2. Install Etcher, as shown in the following figure.

  3. Eject any external storage devices such as USB flash drives and backup hard disks. This makes it easier to identify the SD card. Then, insert the SD card into the slot on your computer, or into the reader.
  4. Run the Etcher program by double clicking on the Etcher icon shown in the following figure, or select it from the Start menu.

    Etcher launches, as shown in the following figure.



  5. Select the image file by clicking Select Image. You can select a .zip or .gz compressed file.
  6. Etcher tries to detect the SD drive. Verify the drive designation and the image size.
  7. Click Flash!.

  8. Insert the SD card with the image into the destination board.
  9. Plug in the power and boot the board, using the serial port to operate the system.
  10. Set up the IP information of the board using the serial port.

    You can now operate on the board using SSH.

Step 2: Installing AI Model Package

The Vitis AI Runtime packages, Vitis-AI-Library samples and models are built into the board image. Therefore, you do not have to install Vitis AI Runtime packages and model package on the board separately. However, you can still install the model on your own image or on the official image by following these steps:

  1. For each model, there is a yaml file that describes all the details about the model. In the yaml file, you can find the model's download links for different platforms. Choose the corresponding model and download it.
  2. Copy the downloaded file to the board using scp with the following command.
    scp <model>.tar.gz root@IP_OF_BOARD:~/
  3. Log in to the board (using ssh or serial port) and install the model package.
  4. If the /usr/share/vitis_ai_library/models folder does not exist, create it first.
    mkdir -p /usr/share/vitis_ai_library/models
  5. Install the model on the target side.
    tar -xzvf <model>.tar.gz
    cp -r <model> /usr/share/vitis_ai_library/models
    By default, the models are located in the /usr/share/vitis_ai_library/models directory on the target side.

Step 3: Installing AI Library Package

The Vitis AI Runtime packages, Vitis-AI-Library samples and models are built into the board image. Therefore, you do not have to install Vitis AI Runtime packages and model package on the board separately. However, you can still install the Vitis AI Runtime on your own image or on the official image by following these steps:

  1. Download the vitis-ai-runtime-1.3.x.tar.gz from here. Untar it and copy the following files to the board using scp.
    tar -xzvf vitis-ai-runtime-1.3.x.tar.gz
    scp -r vitis-ai-runtime-1.3.x/aarch64/centos root@IP_OF_BOARD:~/
    Note: You can treat the RPM package as a normal archive and extract its contents on the host side if you only need some of the libraries. Only the model libraries can be separated independently; the others are common libraries. The operation command is as follows:
    rpm2cpio libvitis_ai_library-1.3.0-r<x>.aarch64.rpm | cpio -idmv
  2. Log in to the board using ssh.

    You can also use the serial port to log in.

  3. Run the zynqmp_dpu_optimize.sh script.
    cd ~/dpu_sw_optimize/zynqmp/
    ./zynqmp_dpu_optimize.sh
  4. Install the Vitis AI Library.
    cd ~/centos
    bash setup.sh
    You can also execute the following command to install the library one by one.
    cd ~/centos
    rpm -ivh --force libunilog-1.3.0-r<x>.aarch64.rpm
    rpm -ivh --force libxir-1.3.0-r<x>.aarch64.rpm
    rpm -ivh --force libtarget-factory-1.3.0-r<x>.aarch64.rpm
    rpm -ivh --force libvart-1.3.0-r<x>.aarch64.rpm
    rpm -ivh --force libvitis_ai_library-1.3.0-r<x>.aarch64.rpm

After the installation is complete, the directories are as follows.

  • Library files are stored in /usr/lib
  • The header files are stored in /usr/include/vitis/ai

Running Vitis AI Library Examples

Before running the Vitis AI Library examples on Edge or on Cloud, download vitis_ai_library_r1.3.x_images.tar.gz and vitis_ai_library_r1.3.x_video.tar.gz. The images and videos used in the following examples can be found in these packages.

For Edge

The Vitis AI Runtime packages, Vitis AI Library samples and models are built into the board image. You can run the examples directly. If you have a new program, compile it on the host side and copy the executable program to the target.

  1. Copy vitis_ai_library_r1.3.x_images.tar.gz and vitis_ai_library_r1.3.x_video.tar.gz from host to the target using scp with the following command:
    [Host]$scp vitis_ai_library_r1.3.x_images.tar.gz root@IP_OF_BOARD:~/
    [Host]$scp vitis_ai_library_r1.3.x_video.tar.gz root@IP_OF_BOARD:~/
  2. Untar the image and video packages on the target.
    cd ~
    tar -xzvf vitis_ai_library_r1.3*_images.tar.gz -C Vitis-AI/demo/Vitis-AI-Library
    tar -xzvf vitis_ai_library_r1.3*_video.tar.gz -C Vitis-AI/demo/Vitis-AI-Library
  3. Enter the extracted example directory on the target board and then compile the example. Take facedetect as an example.
    cd ~/Vitis-AI/demo/Vitis-AI-Library/samples/facedetect
  4. Run the example.
    ./test_jpeg_facedetect densebox_320_320 sample_facedetect.jpg
  5. View the running results.

    There are two ways to view the results. One is the printed information; the other is to download and view the output image sample_facedetect_result.jpg, as shown in the following figure:



  6. To run the video example, run the following command:
    ./test_video_facedetect densebox_320_320 video_input.webm -t 8

    where video_input.webm is the name of the input video file and -t specifies <num_of_threads>. You must prepare the video file yourself.

    Note:
    • The official system image only supports video file input in webm or raw format. If you want to use a video file in another format as the input, you must install the relevant packages, such as the ffmpeg package, on the system.
    • Due to the limitation of video playback and display in the base platform system, video can only be displayed at the frame rate of the display standard, which does not reflect the real processing performance. However, you can check the actual video processing performance, especially with multithreading, with the following commands:
      env DISPLAY=:0.0 DEBUG_DEMO=1 ./test_video_facedetect \
      densebox_320_320 'multifilesrc location=~/video_input.webm \
      ! decodebin  !  videoconvert ! appsink sync=false' -t 2
  7. To test the program with a USB camera as input, run the following command:
    ./test_video_facedetect densebox_320_320 0 -t 8

    0: The first USB camera device node. If you have multiple USB cameras, the value might be 1, 2, 3, and so on. -t: <num_of_threads>

    IMPORTANT: Because all the video examples require a Linux window system to work properly, enable X11 forwarding with the following command when logging in to the board from an SSH terminal (this example assumes that the host machine IP address is 192.168.0.10):
    export DISPLAY=192.168.0.10:0.0
  8. To test the performance of the model, run the following command:
    ./test_performance_facedetect densebox_320_320 test_performance_facedetect.list -t 8 -s 60 

    -t: <num_of_threads>

    -s: <num_of_seconds>

    For more parameter information, use the -h option.

  9. To run the demo, refer to Application Demos.

For Cloud (U50/U50LV/U280)

If you have downloaded Vitis-AI, enter the Vitis-AI directory and then start Docker.

  1. Enter the directory of the sample and then compile it. Take facedetect as an example.
    cd /workspace/demo/Vitis-AI-Library/samples/facedetect
    bash -x build.sh
  2. Run the sample.
    ./test_jpeg_facedetect densebox_320_320 sample_facedetect.jpg
  3. If you want to run the program in batch mode, in which the DPU processes multiple images at once to improve processing performance, you must compile the entire Vitis AI Library as described in the Setting Up the Host section. The batch program is then generated under build_dir_default. Enter build_dir_default and, taking facedetect as an example, execute the following command.
    ./test_facedetect_batch densebox_320_320 <img1_url> [<img2_url> ...]
  4. To run the video example, run the following command:
    ./test_video_facedetect densebox_320_320 <video_input.mp4> -t 8

    video_input.mp4: The name of the input video file. You must prepare the video file yourself.

    -t: <num_of_threads>

  5. To test the performance of the model, run the following command:
    ./test_performance_facedetect densebox_320_320 test_performance_facedetect.list -t 8 -s 60 
    • -t: <num_of_threads>
    • -s: <num_of_seconds>

    For more parameter information, use the -h option.

    Note: The performance test program is automatically run in batch mode.

For Cloud (U200/U250)

  1. Load and run the Docker Container.
    $ ./docker_run.sh -X xilinx/vitis-ai-cpu:<x.y.z>
  2. Download and untar the vai_lib_u200_u250_models.tar.gz model package.
    $ cd /workspace/Vitis-AI-Library
    $ wget -O vai_lib_u200_u250_models.tar.gz https://www.xilinx.com/bin/public/openDownload?filename=vai_lib_u200_u250_models.tar.gz
    $ sudo tar -xvf vai_lib_u200_u250_models.tar.gz --absolute-names
    
    Note: All models are extracted to the /usr/share/vitis_ai_library/models directory. Currently supported networks are classification, facedetect, facelandmark, reid, and yolov3.
  3. To download a minimal validation set for Imagenet2012 using Collective Knowledge (CK), refer to the Alveo examples.
  4. Set up the environment.
    $ source /workspace/alveo/overlaybins/setup.sh
    $ export LD_LIBRARY_PATH=$HOME/.local/${target_info}/lib/:$LD_LIBRARY_PATH
  5. Make sure to compile the entire Vitis AI Library according to the For Cloud (U50/U50LV/U280) section. Run the classification image test example.
    $HOME/build/build.${target_info}/${project_name}/test_classification <model_dir> <img_path>
    
    For example:
    $ ~/build/build.Ubuntu.18.04.x86_64.Release/Vitis-AI-Library/classification/test_classification inception_v1 <img_path>
  6. Run the classification accuracy test example.
    $HOME/build/build.${target_info}/${project_name}/test_classification_accuracy <model_dir> <img_dir_path> <output_file>
    For example:
    $ ~/build/build.Ubuntu.18.04.x86_64.Release/Vitis-AI-Library/classification/test_classification_accuracy inception_v1 <img_dir_path> <output_file>

Support

You can visit the Vitis AI Library community forum on the Xilinx website for topic discussions, knowledge sharing, FAQs, and requests for technical support.