Hi all,
I noticed that https://docs.ultralytics.com/guides/nvidia-jetson/#install-ultralytics-package lists torch and torchvision builds only for JP6.0. However, JP6.1 has already been released, see https://forums.developer.nvidia.com/t/jetpack-6-1-release-announcement/308185 . Can anybody help build up-to-date torch and torchvision wheels for JP6.1?
Cheers
BurhanQ · October 21, 2024, 12:49pm · #2
I would recommend checking the Jetson Dockerfile in the Ultralytics repo for a straightforward setup. AFAIK, this should work for any JetPack version, but to be honest, I've never set up a Jetson device personally. You'll likely need to modify this line in the Dockerfile:
- FROM nvcr.io/nvidia/l4t-jetpack:r36.3.0
+ FROM nvcr.io/nvidia/l4t-jetpack:r36.4.0
Since r36.4.0 has JetPack 6.1 according to this NVIDIA page. Otherwise, see the NVIDIA Jetson Download Center for the latest PyTorch version 2.5.0, but it doesn't look like they have a wheel for torchvision (yet).
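If you are working in a checkout of the repo, a one-liner such as the following applies that change (the docker/Dockerfile-jetson-jetpack6 path matches the build command used later in this thread, but treat it as an assumption about your layout):
# Swap the JetPack 6.0 base tag for the JetPack 6.1 one in the Jetson Dockerfile
sed -i 's|l4t-jetpack:r36.3.0|l4t-jetpack:r36.4.0|' docker/Dockerfile-jetson-jetpack6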
@BurhanQ
When I build from the Dockerfile:
➜ docker git:(main) sudo docker build -f Dockerfile-jetson-jetpack6 -t pytorch-jetson-jetpack6 .
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 32.77kB
Step 1/13 : FROM nvcr.io/nvidia/l4t-jetpack:r36.3.0
---> b859f440ba7f
Step 2/13 : ENV PYTHONUNBUFFERED=1 PYTHONDONTWRITEBYTECODE=1 PIP_NO_CACHE_DIR=1 PIP_BREAK_SYSTEM_PACKAGES=1
---> Using cache
---> 64666bd99085
Step 3/13 : ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf /root/.config/Ultralytics/
Downloading [==================================================>] 773.2kB/773.2kB
Downloading [==================================================>] 23.28MB/23.28MB
---> Using cache
---> 35b818fd7135
Step 4/13 : RUN apt-get update && apt-get install -y --no-install-recommends git python3-pip libopenmpi-dev libopenblas-base libomp-dev && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 22ac1844de4a
Step 5/13 : WORKDIR /ultralytics
---> Using cache
---> 97f964c29a89
Step 6/13 : COPY . .
---> Using cache
---> 8592eebb7d1b
Step 7/13 : RUN sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config
---> Running in 5eac5d2a3dc9
sed: can't read .git/config: No such file or directory
The command '/bin/sh -c sed -i '/^\[http "https:\/\/github\.com\/"\]/,+1d' .git/config' returned a non-zero code: 2
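(A guess rather than a confirmed diagnosis: the sed step edits .git/config from the files copied in by COPY . ., so the build context has to be the repository root. Building from inside the docker/ directory leaves .git out of the context and produces exactly this error; a build from the root would look like the sketch below.)
# Run from the ultralytics repo root so .git/ is included in the build context
sudo docker build -f docker/Dockerfile-jetson-jetpack6 -t pytorch-jetson-jetpack6 .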
I tried the latest PyTorch 2.5.0 wheel from the NVIDIA Jetson Download Center. However, when I ran it, I got:
➜ python
Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 361, in <module>
from torch._C import * # noqa: F403
ImportError: libcusparseLt.so.0: cannot open shared object file: No such file or directory
>>>
Thank you @BurhanQ.
I needed to overwrite a lot of things, including both the PyTorch from the NVIDIA Jetson Download Center and TorchVision … a lot more actually. Now I have it working, and I tried YOLO11 as well. Thank you!
BurhanQ · October 22, 2024, 12:20am · #5
@jiapei100 Glad to hear you got it working! It would be great if you could share the changes you made here or, even better, open a PR to include them in the GitHub repo for others.
Glad to hear everything is working now, @jiapei100. As @BurhanQ mentioned, if you want to update to JetPack 6.1 with Docker support, please make those changes for now. I am currently working on it and will update here once I make it officially available.
FYI: since JetPack 6.1 comes with TensorRT 10.3, you get access to the latest TensorRT as well.
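As an illustration (a minimal sketch; the model name and image URL are just examples, not from this thread), exporting a model to a TensorRT engine on the Jetson and running it could look like:
# Export YOLO11n to a TensorRT engine, then run inference with it on the GPU
yolo export model=yolo11n.pt format=engine device=0
yolo predict model=yolo11n.engine source='https://ultralytics.com/images/bus.jpg'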
Also, for PyTorch and Torchvision, you can look into this website. You will find the latest .whl files there. It is maintained by Dustin Franklin from NVIDIA, who is well known for dusty-nv/jetson-containers. So I may update the new JetPack 6.1 image with these new PyTorch and Torchvision versions as well.
Thanks for your patience.
vinura · November 25, 2024, 9:21am · #7
I came across the same problem with a Jetson AGX on JetPack 6.1.
I used torch from Dustin's site:
wget http://jetson.webredirect.org/jp6/cu126/+f/5cf/9ed17e35cb752/torch-2.5.0-cp310-cp310-linux_aarch64.whl#sha256=5cf9ed17e35cb7523812aeda9e7d6353c437048c5a6df1dc6617650333049092
pip install torch-2.5.0-cp310-cp310-linux_aarch64.whl
then installed the respective torchvision from Dustin's site:
wget http://jetson.webredirect.org/jp6/cu126/+f/5f9/67f920de3953f/torchvision-0.20.0-cp310-cp310-linux_aarch64.whl#sha256=5f967f920de3953f2a39d95154b1feffd5ccc06b4589e51540dc070021a9adb9
pip install torchvision-0.20.0-cp310-cp310-linux_aarch64.whl
then installed onnxruntime-gpu 1.20.0 from Dustin's site:
wget http://jetson.webredirect.org/jp6/cu126/+f/0c4/18beb3326027d/onnxruntime_gpu-1.20.0-cp310-cp310-linux_aarch64.whl#sha256=0c418beb3326027d83acc283372ae42ebe9df12f71c3a8c2e9743a4e323443a4
pip install onnxruntime_gpu-1.20.0-cp310-cp310-linux_aarch64.whl
Downgraded numpy as usual.
And lastly, installed libcusparselt0 from the NVIDIA site.
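For reference, those last two steps could look like this (a sketch only: the cuda-keyring URL appears in the Dockerfile quoted further down, while the libcusparselt0 / libcusparselt-dev package names are an assumption about NVIDIA's Ubuntu 22.04 arm64 CUDA repo):
# Pin numpy to the version mentioned later in this thread
pip install numpy==1.23.5
# Add NVIDIA's CUDA apt repo for Ubuntu 22.04 arm64, then install the cuSPARSELt runtime
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update && sudo apt-get install -y libcusparselt0 libcusparselt-dev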
Hello @jiapei100, we have updated the Docker image for JetPack 6.1, which includes TensorRT 10.3 support:
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Builds ultralytics/ultralytics:jetson-jetpack6 image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Supports JetPack6.1 for YOLO11 on Jetson AGX Orin, Orin NX and Orin Nano Series
# Start FROM https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-jetpack
FROM nvcr.io/nvidia/l4t-jetpack:r36.4.0
# Set environment variables
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_NO_CACHE_DIR=1 \
PIP_BREAK_SYSTEM_PACKAGES=1
# Downloads to user config dir
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.ttf \
https://github.com/ultralytics/assets/releases/download/v0.0.0/Arial.Unicode.ttf \
/root/.config/Ultralytics/
# Install dependencies
ADD https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb .
# ... (Dockerfile truncated here; see the full Dockerfile-jetson-jetpack6 in the Ultralytics repo)
This will do the job!
t=ultralytics/ultralytics:latest-jetson-jetpack6
sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t
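Once inside the container, a quick sanity check might look like this (a sketch; the versions printed depend on the image build):
# Confirm torch/torchvision import and that CUDA is visible inside the container
python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"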
Thanks for waiting!
CC: @BurhanQ
Downgraded to which version of numpy? I am facing an error with the C API in torch.
Hi @Venkat1405,
It should be numpy 1.23.5.
We are already handling that here when you do pip install ultralytics[export] on the Jetson:
export = [
"onnx>=1.12.0", # ONNX export
"coremltools>=7.0; platform_system != 'Windows' and python_version <= '3.11'", # CoreML supported on macOS and Linux
"scikit-learn>=1.3.2; platform_system != 'Windows' and python_version <= '3.11'", # CoreML k-means quantization
"openvino>=2024.0.0", # OpenVINO export
"tensorflow>=2.0.0", # TF bug https://github.com/ultralytics/ultralytics/issues/5161
"tensorflowjs>=3.9.0", # TF.js export, automatically installs tensorflow
"tensorstore>=0.1.63; platform_machine == 'aarch64' and python_version >= '3.9'", # for TF Raspberry Pi exports
"keras", # not installed automatically by tensorflow>=2.16
"flatbuffers>=23.5.26,<100; platform_machine == 'aarch64'", # update old 'flatbuffers' included inside tensorflow package
"numpy==1.23.5; platform_machine == 'aarch64'", # fix error: `np.bool` was a deprecated alias for the builtin `bool` when using TensorRT models on NVIDIA Jetson
"h5py!=3.11.0; platform_machine == 'aarch64'", # fix h5py build issues due to missing aarch64 wheels in 3.11 release
]
solutions = [
"shapely>=2.0.0", # shapely for point and polygon data matching
"streamlit", # for live inference on web browser i.e `yolo streamlit-predict`
]
logging = [
"comet", # https://docs.ultralytics.com/integrations/comet/
"tensorboard>=2.13.0",
"dvclive>=2.12.0",