The natural language processing (NLP) SmartVision application continuously detects keywords spoken by the user and uses them to switch dynamically between multiple vision tasks and/or to change display properties.
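The keyword-driven behavior described above can be sketched as a simple dispatcher that maps recognized keywords either to a vision task or to a display adjustment. This is a minimal illustrative sketch, not the actual NLP-SmartVision implementation; the keyword names, task names, and `Dispatcher` class are all assumptions for the example.

```python
# Hypothetical sketch of keyword-based task switching.
# Keyword and task names below are illustrative assumptions,
# not the application's actual keyword set.

VISION_TASKS = {
    "face": "face_detection",
    "object": "object_detection",
    "plate": "license_plate_detection",
}

DISPLAY_COMMANDS = {"up", "down", "left", "right"}


class Dispatcher:
    """Routes recognized keywords to a vision task or a display change."""

    def __init__(self):
        self.active_task = "face_detection"  # assumed default task

    def on_keyword(self, keyword):
        if keyword in VISION_TASKS:
            # Task keyword: switch the active vision pipeline.
            self.active_task = VISION_TASKS[keyword]
        elif keyword in DISPLAY_COMMANDS:
            # Display keyword: adjust the overlay, task stays unchanged.
            return f"adjust_display:{keyword}"
        return self.active_task
```

In this sketch, task keywords change which pipeline runs, while display keywords only alter presentation, mirroring the two behaviors the application exposes.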
No, the application does not require any experience in FPGA design.
This application is free of charge from Xilinx.
Xilinx's NLP-SmartVision application has been primarily designed and tested with OnSemi's AR1335 image sensor. You will have to update the design and application if you add a different MIPI sensor.
The application should work with most USB microphones.
NLP-SmartVision works only with the 10 pre-defined keywords. You can train the model on custom keywords and modify the application using the released app sources.