I successfully ran the YOLO12n and YOLO11n-pose models on CPU using the ONNX Runtime library, with the integration implemented in the R3Forth programming language.
videos:
Awesome demos—ONNX Runtime + R3Forth is a cool combo! For stability and CPU speed, I’d recommend using YOLO11 over YOLO12; you can export YOLO11n and YOLO11n‑pose to ONNX directly:
yolo export model=yolo11n.pt format=onnx dynamic=True
yolo export model=yolo11n-pose.pt format=onnx dynamic=True
If you prefer Python, simplify=True can help with interoperability:
from ultralytics import YOLO
YOLO('yolo11n.pt').export(format='onnx', dynamic=True, simplify=True)
If others want to reproduce this, the steps in our ONNX export guide for YOLO11 cover CLI/Python usage and ONNX Runtime tips. If you can share your R3Forth bindings or a gist, we’d love to point the community to it.
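If it helps to sanity-check the export before wiring it into R3Forth, a minimal Python check with ONNX Runtime looks roughly like this (the file name and the 640x640 dummy input are assumptions on my side):

import numpy as np
import onnxruntime as ort

# Load the exported model on CPU and inspect its I/O signature.
session = ort.InferenceSession("yolo11n.onnx", providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]
print(inp.name, inp.shape)             # with dynamic=True the spatial dims are symbolic
print(session.get_outputs()[0].shape)  # detection head: (batch, 84, anchors)

# Run a dummy forward pass to confirm the graph executes end to end.
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
out = session.run(None, {inp.name: dummy})[0]
print(out.shape)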
Thank you for the feedback. I thought YOLO12 would be more advanced or faster, but I’ll test it myself to see if there’s any difference compared to YOLO11.
The camera example can be tested by downloading the language distribution from:
I haven’t included the ONNX files due to their size, but you can download them from here:
The input and output tensors are encoded and decoded without any external libraries, so working out exactly how the data is laid out took quite a bit of time, but I believe it makes execution faster.
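For anyone reproducing the tensor handling in another language, the usual Ultralytics conventions for the detection model look roughly like the Python sketch below. This is only a reference, not my R3Forth code; the file names, the 640 input size, the gray padding value 114, and the 0.25 threshold are the common defaults and are assumptions here:

import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolo11n.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def preprocess(bgr, size=640):
    # Letterbox: scale to fit, pad the remainder with gray (114) to a square canvas.
    h, w = bgr.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)
    canvas[:nh, :nw] = cv2.resize(bgr, (nw, nh))
    # BGR -> RGB, HWC -> CHW, scale to [0, 1], add the batch dimension.
    x = canvas[:, :, ::-1].transpose(2, 0, 1)
    x = np.ascontiguousarray(x, dtype=np.float32) / 255.0
    return x[None], scale

x, scale = preprocess(cv2.imread("frame.jpg"))
out = session.run(None, {input_name: x})[0]   # (1, 84, 8400) for the detection model

preds = out[0].T                              # (8400, 84): cx, cy, w, h + 80 class scores
boxes, cls_scores = preds[:, :4], preds[:, 4:]
conf = cls_scores.max(axis=1)
class_ids = cls_scores.argmax(axis=1)
keep = conf > 0.25
# Boxes are in letterboxed pixels; divide by `scale` to map them back to the source
# image, then apply NMS (e.g. cv2.dnn.NMSBoxes) to drop overlapping boxes.
# The pose model differs only in the output layout: (1, 56, 8400) with
# 4 box values + 1 person score + 17 keypoints * (x, y, visibility).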