The natural language processing (NLP) SmartVision application continuously listens for keywords spoken by the user and demonstrates keyword-driven dynamic switching between multiple vision tasks, as well as keyword-driven changes to display properties.
No, the application does not require any experience in FPGA design.
The application is available free of charge from Xilinx.
Xilinx's NLP-SmartVision application has been primarily designed and tested with OnSemi's AR1335 image sensor. If you add a different MIPI sensor, you will need to update both the hardware design and the application.
The application should work with most USB microphones.
NLP-SmartVision works only with its 10 pre-defined keywords. However, you can train the model on custom keywords and modify the application using the released app sources.
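To illustrate the general pattern of keyword-driven switching described above, here is a minimal Python sketch. Note that the task names, the "mirror" keyword, and the dispatch logic are all illustrative assumptions for this sketch; they are not taken from the actual NLP-SmartVision sources.

```python
# Hypothetical sketch of keyword-driven control: a detected keyword either
# switches the active vision task or toggles a display property.
# All names below are assumptions, not the real application's identifiers.

VISION_TASKS = ("facedetect", "objectdetect", "lanedetect")  # illustrative task names

class AppState:
    """Holds the currently active vision task and display settings."""
    def __init__(self):
        self.task = VISION_TASKS[0]   # currently active vision task
        self.mirrored = False         # example display property

def handle_keyword(state, keyword):
    """Apply the action associated with a detected keyword, if any."""
    if keyword in VISION_TASKS:
        state.task = keyword                  # switch the active vision task
    elif keyword == "mirror":
        state.mirrored = not state.mirrored   # toggle a display property
    # unrecognized keywords are ignored
    return state

state = AppState()
handle_keyword(state, "objectdetect")   # spoken keyword switches the task
handle_keyword(state, "mirror")         # spoken keyword changes the display
print(state.task, state.mirrored)
```

Extending the application with custom keywords would amount to retraining the keyword-spotting model and adding new entries to a dispatch table like this one.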
Learn all about adaptive SOMs, including examples of why and how they can be deployed in next-generation edge applications, and how smart vision providers benefit from the performance, flexibility, and rapid development that only an adaptive SOM can deliver.