Quantization issue with YOLO26n → RKNN on RADXA5B: outputs scores=0.0, class_id=0.0

Hello everyone!

I’m working on object detection using the YOLO26 model on the RADXA5B platform. The key requirement is high detection speed — I aim to achieve real‑time performance. I chose the YOLO26n model because the documentation states that it’s optimized for small objects.

However, I’m encountering an issue: after model quantization, all scores and class_id values come back as 0.0. Here’s what I’m doing: first, I export my trained YOLO26n model (trained on the required classes) to ONNX format, which gives me an output shape of [1, 300, 6]. Then I use rknn-toolkit2 to convert the model to RKNN format. When I instead try to export directly via the Ultralytics documentation using `model.export(format='rknn', imgsz=640, end2end=True)`, I don’t get the desired output shape [1, 300, 6]; I get [1, 8, 8400].
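For reference, the end2end export returns detections that are already decoded and NMS-filtered. Here is a minimal sketch of parsing that [1, 300, 6] tensor, assuming each row is [x1, y1, x2, y2, score, class_id] and low-score rows are padding (the data below is synthetic, not real model output):

```python
import numpy as np

def parse_end2end(output, conf_thres=0.25):
    """Parse a [1, 300, 6] end2end detection tensor.

    Assumes each row is [x1, y1, x2, y2, score, class_id];
    rows below conf_thres are treated as padding and dropped.
    """
    dets = output[0]                  # -> [300, 6]
    keep = dets[:, 4] > conf_thres    # filter low-score / padded rows
    boxes = dets[keep, :4]
    scores = dets[keep, 4]
    class_ids = dets[keep, 5].astype(int)
    return boxes, scores, class_ids

# Synthetic example: two real detections, the rest zero padding
out = np.zeros((1, 300, 6), dtype=np.float32)
out[0, 0] = [10, 20, 110, 220, 0.9, 1]
out[0, 1] = [30, 40, 130, 240, 0.6, 3]
boxes, scores, class_ids = parse_end2end(out)
print(len(boxes), class_ids.tolist())  # 2 [1, 3]
```

If quantization were working, the score column (index 4) of a correct output would contain nonzero values like these.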

At this stage (before quantization), I test the model on RADXA5B, and detection works correctly — bounding boxes appear around objects as expected.

But when I run the following quantization code:

```python
# preprocess config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            quant_img_RGB2BGR=False,
            target_platform='rk3588',
            quantized_algorithm='normal',
            quantized_method='layer',
            optimization_level=3,
            quantized_dtype='asymmetric_quantized-8'  # 'w8a8'
            )
print('done')

# Load ONNX model
print('--> Loading model onnx')
ret = rknn.load_onnx(model=ONNX_MODEL)
if ret != 0:
    print('load model failed!')
    exit(ret)

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=QUANTIZE_ON,
                 dataset=DATASET)
if ret != 0:
    print('Build model failed')
    exit(ret)
print('Done')
```

The code runs, but scores and class_id are all 0.0, and the coordinates are present but incorrect.

Here’s an example of the output:

```
[array([[[467.60233, 547.6712, 518.84644, 589.30707, 0.0, 0.0],
         [493.22437, 547.6712, 534.8602, 589.30707, 0.0, 0.0],
         [493.22437, 547.6712, 534.8602, 589.30707, 0.0, 0.0]]])]
```

I’ve tried various configurations during quantization, but in some cases I even get a segmentation fault.

For quantization, I’ve preprocessed the images to match the model’s input size (640×640) with data type np.uint8.
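For context, that calibration preprocessing is just a resize to the model input followed by a cast to uint8 (no normalization here, since mean/std are applied inside the RKNN graph via `rknn.config`). A rough numpy-only sketch of that step, with a nearest-neighbour resize standing in for whatever resize call is actually used:

```python
import numpy as np

def preprocess_calib(img, size=640):
    """Resize an HWC uint8 RGB image to (size, size) with nearest-neighbour
    sampling and keep it uint8, matching the mean=[0,0,0]/std=[255,255,255]
    config (normalization happens inside the RKNN graph, not here)."""
    h, w = img.shape[:2]
    ys = (np.arange(size) * h // size).clip(0, h - 1)  # source row indices
    xs = (np.arange(size) * w // size).clip(0, w - 1)  # source col indices
    resized = img[ys[:, None], xs[None, :]]            # fancy-index resize
    return resized.astype(np.uint8)

img = np.random.randint(0, 256, (480, 720, 3), dtype=np.uint8)
calib = preprocess_calib(img)
print(calib.shape, calib.dtype)  # (640, 640, 3) uint8
```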

Could you please help me figure out what’s going wrong?

YOLO26 with (1, 300, 6) output isn’t supported by RKNN. You need to use the one with NMS (`end2end=False`).

Or you can use this branch:

You can read this issue:
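For what it’s worth, the non-end2end [1, 8, 8400] output still needs decoding plus NMS on the host. A rough sketch of just the decoding, assuming the 8 channels are 4 box values followed by 4 per-class scores (which would match a 4-class model; verify the layout against your actual export before relying on it):

```python
import numpy as np

def decode_raw(output, conf_thres=0.25):
    """Decode a raw [1, 4+nc, 8400] detection head: channels 0-3 hold box
    values, the remaining nc channels hold per-class scores. Returns
    candidate boxes before NMS. The channel layout is an assumption."""
    pred = output[0]                # -> [4+nc, 8400]
    boxes = pred[:4].T              # [8400, 4]
    cls_scores = pred[4:]           # [nc, 8400]
    class_ids = cls_scores.argmax(axis=0)
    scores = cls_scores.max(axis=0)
    keep = scores > conf_thres      # confidence filter before NMS
    return boxes[keep], scores[keep], class_ids[keep]

# Synthetic check with nc=4, so 4 + 4 = 8 channels as in the export above
raw = np.zeros((1, 8, 8400), dtype=np.float32)
raw[0, :4, 7] = [320, 320, 64, 64]  # one candidate box
raw[0, 6, 7] = 0.8                  # channel 6 -> class 2 score
boxes, scores, class_ids = decode_raw(raw)
print(boxes.shape, class_ids.tolist())  # (1, 4) [2]
```

Real post-processing would still need NMS afterwards (and a sigmoid if the scores are exported as logits); this only illustrates the tensor layout.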

The YOLO26 model with output [1, 300, 5] exports to RKNN format, but only after first exporting to ONNX format. It also works on the RADXA 5B: image size 640x640, inference time 0.05 seconds, which is not bad. I want to quantize the model, but I get scores = 0. I’m currently studying the topics that you sent.

FWIW, I recall that when the Ultralytics team investigated RKNN exports early on, we were not able to quantize the models successfully and saw the same result as you. It’s been a while, and I was not doing the work directly, but I paired with someone who was, and I don’t know that this was ever resolved.