I know that the Live Inference with Streamlit Application using Ultralytics YOLO11 supports PyTorch models (`.pt` extension), but I would like it to also allow users to import other model formats, such as `.onnx`, OpenVINO, etc. Can I open a feature request on GitHub to add this support? Thank you very much.
Hello! Thanks for the excellent suggestion.
While our Streamlit solution is a simplified example primarily focused on .pt models for ease of use, the underlying YOLO class can load a wide variety of formats, including ONNX and OpenVINO.
This functionality is powered by our AutoBackend module, which automatically selects the correct inference backend based on the model's file extension. You can easily adapt the Streamlit script to load your exported model by passing the correct path, for instance: `model = YOLO('path/to/your_model.onnx')`.
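As a minimal sketch of how the script could be adapted, the helper below checks whether a user-supplied path looks like something `YOLO(...)` (via AutoBackend) could load. The helper name `is_loadable_model`, the suffix set, and the `_openvino_model` directory convention shown here are illustrative assumptions, not part of the official Streamlit solution; consult the AutoBackend reference for the authoritative format list.

```python
from pathlib import Path

# Illustrative subset of suffixes AutoBackend dispatches on by file
# extension; see the AutoBackend docs for the full, authoritative list.
SUPPORTED_SUFFIXES = {".pt", ".onnx", ".engine", ".tflite"}

# OpenVINO exports are directories, conventionally named *_openvino_model.
OPENVINO_DIR_SUFFIX = "_openvino_model"


def is_loadable_model(path: str) -> bool:
    """Hypothetical helper: does this path look like a model that
    YOLO(...) could load through AutoBackend?"""
    p = Path(path)
    if p.suffix.lower() in SUPPORTED_SUFFIXES:
        return True
    return p.name.endswith(OPENVINO_DIR_SUFFIX)


# In the Streamlit script you would then do something like:
#   from ultralytics import YOLO
#   if is_loadable_model(model_path):
#       model = YOLO(model_path)  # AutoBackend picks the right backend
```

The key point is that no per-format branching is needed in your own code: once the path is valid, a single `YOLO(model_path)` call handles `.pt`, `.onnx`, OpenVINO directories, and the other exported formats alike.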
You can see a full list of supported export formats in the AutoBackend reference documentation.
I hope this helps you get started!
@pderrenger Excuse me, but I am still confused about two things:
- Does the Live Inference with Streamlit Application using Ultralytics YOLO11 support any other model formats besides `.pt` at the moment?
- How can I adapt the Streamlit script to load my exported model by passing the correct path? Can you give me a clearer example of this?
You can open a feature request on GitHub.