Greetings, I have been struggling for the past couple of days to understand and resolve the following error when trying to convert a stock YOLO11 PyTorch model to a native TensorRT engine on the Jetson Orin AGX. This is the error I receive, regardless of which module source I use:
```
yolo export model="yolo11s.pt" format=engine half=True dynamic=True device=0
Ultralytics 8.3.115 Python-3.10.12 torch-2.7.0 CUDA:0 (Orin, 62841MiB)
YOLO11s summary (fused): 100 layers, 9,443,760 parameters, 0 gradients, 21.5 GFLOPs
PyTorch: starting from 'yolo11s.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (18.4 MB)
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.50...
/opt/rh/gcc-toolset-14/root/usr/include/c++/14/bits/stl_vector.h:1130: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) [with _Tp = unsigned int; _Alloc = std::allocator<unsigned int>; reference = unsigned int&; size_type = long unsigned int]: Assertion '__n < this->size()' failed.
Aborted (core dumped)
```
The replacement PyTorch packages were installed with:
```
pip3 install --force --no-cache-dir torch torchvision torchaudio --index-url https://pypi.jetson-ai-lab.dev/jp/cu126
```
Checking versions and GPU availability on the Orin AGX 64GB then produces:
```
python3 -c "import torch, torchvision, torchaudio; print(torch.__version__, torchvision.__version__, torchaudio.__version__); print(f'GPU available? {torch.cuda.is_available()}')"
2.7.0 0.22.0 2.7.0
GPU available? True
```
I had originally installed onnxruntime_gpu per the instruction guide doc:
- `pip install https://github.com/ultralytics/assets/releases/download/v0.0.0/onnxruntime_gpu-1.20.0-cp310-cp310-linux_aarch64.whl`

and then tried to resolve the error by force-installing these onnxruntime_gpu versions instead:
- `pip3 install --force onnxruntime_gpu-1.17.0-cp310-cp310-linux_aarch64.whl`
- `pip3 install --force onnxruntime_gpu-1.22.0-cp310-cp310-linux_aarch64.whl`

With every one of them I get the exact same error (provider check below).
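In case it helps with diagnosis, this is the check I would run after each install to confirm which onnxruntime build is actually active and whether its GPU execution providers are visible (I have not pasted its output here, since I'm not sure it's relevant to the crash):

```
python3 -c "import onnxruntime as ort; print(ort.__version__); print(ort.get_available_providers())"
```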
If I remove the module altogether, the export conversion completes, albeit without the slimming through onnxslim (the auto-install was attempted but fails); a rough sketch of that fallback is below. Could anyone recommend a fix for this bug? Should I re-install an older version of JetPack from scratch? I previously had problems with older versions of PyTorch and Ultralytics, which were resolved by using the correct JetPack versions provided by NVIDIA, according to previous NVIDIA community user support messages.
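For completeness, the fallback export I'd plan to use in the meantime looks roughly like this; I'm assuming `simplify=False` skips the onnxslim pass as described in the export docs, so please correct me if that's not the right flag:

```
yolo export model="yolo11s.pt" format=engine half=True dynamic=True simplify=False device=0
```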
thanks in advance for anyone’s time & advice!
-ted