I have an Orin Nano Devkit running JetPack 6.2 with all of the relevant Python libraries installed successfully. I was able to convert a PyTorch network into a GPU-enabled TensorRT model (*.engine) and run basic object detection with an Ultralytics CLI predict command.
For my robotics project with feedback control, I need to implement YOLO11 object detection in C++. Reduced latency is very important. Do Ultralytics and the community have a C++ example for YOLO11 running on the Orin Nano with JetPack 6.2?
There are many community examples here in the repo on how to run inference using an Ultralytics model in other languages; however, I don't see one for C++ and TensorRT. That said, if you're familiar with C++, it likely wouldn't be a huge amount of work to translate the Python TensorRT inference code into the TensorRT C++ API, since they share most (if not all) of the same objects and methods.
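For reference, a minimal sketch of what that translation might look like is shown below, written against the TensorRT 10.x C++ API that ships with JetPack 6.2. The engine path, the assumption of static FP32 input/output bindings, and the tensor ordering (input first, detection output last) are placeholders to adjust for your own export; image preprocessing and box decoding/NMS are omitted.

```cpp
// Minimal sketch: load a serialized .engine file and run one inference with the
// TensorRT 10.x C++ API (as shipped with JetPack 6.2).
// Engine path, FP32 bindings, and tensor ordering are assumptions.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

// Simple logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    Logger logger;

    // 1. Read the serialized engine produced by `yolo export format=engine`.
    std::ifstream file("yolo11n.engine", std::ios::binary | std::ios::ate);
    if (!file) { std::cerr << "engine file not found\n"; return 1; }
    const size_t size = file.tellg();
    file.seekg(0);
    std::vector<char> engineData(size);
    file.read(engineData.data(), size);

    // 2. Deserialize the engine and create an execution context.
    std::unique_ptr<nvinfer1::IRuntime> runtime(nvinfer1::createInferRuntime(logger));
    std::unique_ptr<nvinfer1::ICudaEngine> engine(
        runtime->deserializeCudaEngine(engineData.data(), size));
    std::unique_ptr<nvinfer1::IExecutionContext> context(engine->createExecutionContext());

    // 3. Allocate device buffers for every I/O tensor and register their addresses.
    //    Assumes a static-shape engine with FP32 bindings.
    std::vector<void*> buffers;
    std::vector<int64_t> counts;
    for (int i = 0; i < engine->getNbIOTensors(); ++i) {
        const char* name = engine->getIOTensorName(i);
        nvinfer1::Dims dims = engine->getTensorShape(name);
        int64_t count = 1;
        for (int d = 0; d < dims.nbDims; ++d) count *= dims.d[d];
        void* ptr = nullptr;
        cudaMalloc(&ptr, count * sizeof(float));
        context->setTensorAddress(name, ptr);
        buffers.push_back(ptr);
        counts.push_back(count);
    }

    // 4. Copy a preprocessed image into the input buffer (letterboxed, normalized,
    //    CHW float). Preprocessing itself is omitted; a zero image is used here.
    std::vector<float> input(counts.front(), 0.0f);
    cudaMemcpy(buffers.front(), input.data(), input.size() * sizeof(float),
               cudaMemcpyHostToDevice);

    // 5. Run inference and copy the raw output back for post-processing (decode + NMS).
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV3(stream);
    cudaStreamSynchronize(stream);

    std::vector<float> output(counts.back());
    cudaMemcpy(output.data(), buffers.back(), output.size() * sizeof(float),
               cudaMemcpyDeviceToHost);
    std::cout << "inference done, output elements: " << output.size() << std::endl;

    for (void* p : buffers) cudaFree(p);
    cudaStreamDestroy(stream);
    return 0;
}
```

On the Jetson this should build with something along the lines of `g++ main.cpp -o yolo_trt -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lnvinfer -lcudart` (paths may differ depending on your install).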
Additionally, before working on the C++ translation, you should consider quantizing your model to improve inference speed. Using half=True should provide a very reasonable speed-up with virtually no loss in detection performance. Using int8=True and providing calibration data will give an even greater speed-up, but can come with slightly reduced detection performance. You can see benchmark results for the various quantizations in the docs, on the TensorRT integration page (TensorRT - Ultralytics YOLO Docs).
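As an illustration, the exports might look like the following from the CLI (the model checkpoint and calibration dataset YAML here are placeholders):

```bash
# FP16 engine: typically a solid speed-up with virtually no accuracy loss
yolo export model=yolo11n.pt format=engine half=True device=0

# INT8 engine: needs calibration data, fastest option but may cost some mAP
yolo export model=yolo11n.pt format=engine int8=True data=coco8.yaml device=0
```

Either command produces a .engine file that the C++ sketch above can deserialize directly.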