Natural Language Processing – SmartVision

The natural language processing (NLP) SmartVision application continuously detects keywords spoken by the user and demonstrates keyword-based dynamic switching between multiple vision tasks, as well as keyword-driven changes to the display properties.

NLP-SmartVision Accelerated Application Block Diagram

Features:

  • Live audio capture from a USB microphone
  • Video capture from the AR1335 camera sensor
  • Keyword Spotting (KWS) and keyword-based dynamic switching of vision tasks (Face detect / Object detect / Plate detect); see the sketch after this list
  • HDMI or DisplayPort output
  • Complete application, including the HW design
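As a rough illustration of the keyword-driven switching listed above, the sketch below maps a spotted keyword either to a vision task or to a display command. This is a minimal Python sketch; the keyword names, task names, and the dispatch() helper are illustrative assumptions and are not taken from the released application sources.

```python
# Minimal sketch of keyword-based switching between vision tasks and
# display properties. All names below are illustrative; the actual
# application defines its own keyword set and task pipeline.

VISION_TASKS = {
    "face": "face_detect",
    "object": "object_detect",
    "plate": "plate_detect",
}

DISPLAY_COMMANDS = {"left", "right", "up", "down", "on", "off"}

def dispatch(keyword: str, state: dict) -> dict:
    """Update the application state based on a spotted keyword."""
    if keyword in VISION_TASKS:
        state["active_task"] = VISION_TASKS[keyword]   # switch the vision task
    elif keyword in DISPLAY_COMMANDS:
        state["display"] = keyword                     # change a display property
    return state

# Example: a KWS thread would feed spotted keywords into the dispatcher.
state = {"active_task": "face_detect", "display": "on"}
for spotted in ["object", "left", "plate"]:            # stand-in for live audio results
    state = dispatch(spotted, state)
print(state)   # {'active_task': 'plate_detect', 'display': 'left'}
```

Keeping the mapping in plain dictionaries makes the sketch easy to extend with additional tasks or display commands.
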
Frequently Asked Questions

Do I need FPGA design experience to use this application?
No, the application does not require any experience in FPGA design.

How much does the application cost?
This application is available free of charge from Xilinx.

Can I use a camera sensor other than the AR1335?
Xilinx's NLP-SmartVision application has been designed and tested primarily with OnSemi's AR1335 image sensor. You will have to update the hardware design and the application to add a different MIPI sensor.

Which USB microphones are supported?
The application should work with most USB microphones.
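A quick way to confirm that the target board detects a USB microphone is to list the available audio capture devices. The snippet below is a minimal sketch that assumes the python-sounddevice package is installed on the target; it is not part of the released application.

```python
# List audio capture devices so a USB microphone can be identified
# before launching the application (illustrative helper only).
import sounddevice as sd

for index, device in enumerate(sd.query_devices()):
    if device["max_input_channels"] > 0:          # capture-capable devices only
        print(f"{index}: {device['name']} ({device['max_input_channels']} ch)")
```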

Can I add my own keywords?
Out of the box, NLP-SmartVision recognizes only the ten pre-defined keywords. You can train the model with custom keywords and modify the application using the released app sources.
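As a rough sketch of what retraining for custom keywords could look like, the example below builds a small convolutional keyword-spotting classifier over MFCC features. The keyword list, feature shape, and layer sizes are illustrative assumptions; a retrained model would still have to be integrated into the application through the released app sources.

```python
# Minimal sketch of a keyword-spotting classifier for custom keywords.
# Assumes one-second audio clips converted to MFCC features of shape
# (49, 10); the keywords and layer sizes below are illustrative only.
import tensorflow as tf

CUSTOM_KEYWORDS = ["start", "pause", "zoom", "reset"]   # hypothetical keywords

def build_kws_model(num_keywords: int, input_shape=(49, 10, 1)) -> tf.keras.Model:
    """Small CNN over MFCC features, one softmax output per keyword."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_keywords, activation="softmax"),
    ])

model = build_kws_model(len(CUSTOM_KEYWORDS))
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_mfccs, train_labels, validation_data=(val_mfccs, val_labels))
```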

Featured Documents
Accelerate Your AI-Enabled Edge Solution with Adaptive Computing

Introducing Adaptive System-on-Modules

Learn all about adaptive SOMs, including examples of why and how they can be deployed in next-generation edge applications, and how smart vision providers benefit from the performance, flexibility, and rapid development that can only be achieved by an adaptive SOM.