YoloV7 is a machine learning model that predicts bounding boxes and classes of objects in an image.
This model is based on the implementation of Yolo-v7 found here. This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found here.
Sign up for early access to run these models on a hosted Qualcomm® device.
Install the package via pip:
pip install "qai_hub_models[yolov7]"
Once installed, run the following simple CLI demo:
python -m qai_hub_models.models.yolov7.demo
More details on the CLI tool can be found with the --help
option. See
demo.py for sample usage of the model, including pre/post-processing
scripts. Please refer to our general instructions on using
models for more usage details. A hedged Python sketch of programmatic usage follows below.
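The snippet below is a minimal sketch of loading the pretrained model through the package's Python API, assuming the standard `Model.from_pretrained()` entry point used across qai_hub_models. The 640x640 input resolution and the dummy-tensor forward pass are illustrative assumptions; check demo.py for the actual pre/post-processing pipeline and output handling.

```python
# Hedged sketch: load YoloV7 via the qai_hub_models Python API and run a
# forward pass on a dummy image. Input size and output handling are
# assumptions; see demo.py for the real pre/post-processing.
import torch

from qai_hub_models.models.yolov7 import Model

model = Model.from_pretrained()  # downloads pretrained weights if needed
model.eval()

# Dummy RGB batch in NCHW layout (assumed 640x640 resolution).
image = torch.rand(1, 3, 640, 640)

with torch.no_grad():
    outputs = model(image)  # raw detections; post-process as in demo.py
```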
This repository contains export scripts that produce a model optimized for on-device deployment. This can be run as follows:
python -m qai_hub_models.models.yolov7.export
Additional options are documented with the --help
option. Note that the above
script requires access to Qualcomm® AI Hub. Deployment instructions for Qualcomm® AI Hub can be found here.
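For context, the export script automates a workflow along the lines of the sketch below using the qai_hub client: trace the PyTorch model and submit it to AI Hub for compilation against a hosted device. This is an illustrative assumption, not the script's actual implementation; the device name, input shape, and a configured AI Hub account (API token set via `qai-hub configure`) are all assumed.

```python
# Hedged sketch of the compile workflow the export script automates.
# Assumes a configured Qualcomm AI Hub account; the device name and
# input shape below are illustrative only.
import qai_hub as hub
import torch

from qai_hub_models.models.yolov7 import Model

torch_model = Model.from_pretrained()
torch_model.eval()

# Trace the model so it can be uploaded to Qualcomm AI Hub.
example_input = torch.rand(1, 3, 640, 640)
traced_model = torch.jit.trace(torch_model, example_input)

# Submit a compile job targeting a hosted device.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=hub.Device("Samsung Galaxy S23"),
    input_specs=dict(image=(1, 3, 640, 640)),
)
target_model = compile_job.get_target_model()  # compiled, device-ready asset
```

The resulting target model is what the export script packages for on-device deployment; profiling and inference jobs can then be run against it on hosted devices.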
- The license for the original implementation of Yolo-v7 can be found here.
- The license for the compiled assets for on-device deployment can be found here.
- YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
- Source Model Implementation
- Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI.
- For questions or feedback, please reach out to us.