You can try installing onnx manually:

pip install onnx onnxruntime-gpu

then run the export.
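For reference, a minimal sketch of that flow in Python, assuming a trained checkpoint at best.pt (the path is a placeholder):

# Export a trained Ultralytics model to ONNX after installing
# onnx and onnxruntime-gpu manually.
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder path to the trained weights
model.export(format="onnx")  # writes best.onnx alongside the checkpoint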
@Toxite But when I started the export process, I got the following message:

requirements: Ultralytics requirement "onnx>=1.12.0,<1.18.0" not found, attempting AutoUpdate...

Then, Ultralytics tried to install version 1.17 of the onnx package.
(Note: When I installed the onnx package manually, the default version installed was 1.18.)
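If you want to preempt the AutoUpdate step, you can pin the version yourself using the requirement string from the log:

pip install "onnx>=1.12.0,<1.18.0" onnxruntime-gpu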
Does it continue after failing?
Can you post the full logs? These small snippets lack context.
@Toxite The process still continued with the following messages:

ONNX: starting export with onnx 1.18.0 opset 19...
ONNX: slimming with onnxslim 0.1.64...
ONNX: export success 701.3s, saved as 'best.onnx' (225.9 MB)
TensorRT: starting export with TensorRT 10.13.2.6...
Then it's fine.
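For context, that ONNX-then-TensorRT sequence is what Ultralytics runs end to end when you request an engine export; a minimal sketch of the equivalent CLI, assuming best.pt is the checkpoint from the logs:

yolo export model=best.pt format=engine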
@Toxite Regarding the Live Inference with Streamlit Application using Ultralytics YOLO11 through the CLI command yolo solutions inference model="path/to/model.pt", can I pass two models simultaneously to the command?
No, you can't.
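As a workaround, you can load both models yourself in Python and run them on the same source; a minimal sketch with placeholder model and source paths:

# Run two models separately on the same input and collect both result sets.
from ultralytics import YOLO

model_a = YOLO("model_a.pt")  # placeholder paths for your two checkpoints
model_b = YOLO("model_b.pt")

results_a = model_a.predict("image.jpg")  # placeholder source
results_b = model_b.predict("image.jpg")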
@Toxite I have two concerns which need assistance: …

… the data argument. This would have already been shown as a warning in the logs. You should read the logs closely.

@Toxite … (I passed the .yaml file to the data argument), but the accuracy metric was extremely low.
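For reference, a minimal sketch of validating the exported model with an explicit data argument, assuming a detection model and a placeholder dataset path:

# Validate the exported ONNX model against the dataset yaml used for training.
from ultralytics import YOLO

model = YOLO("best.onnx")  # the exported model from the logs above
metrics = model.val(data="path/to/data.yaml")  # placeholder dataset yaml
print(metrics.box.map50)  # mAP@0.5 for a detection model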