Benchmark failed in Colab, and I don't know what I'm missing.


from ultralytics.utils.benchmarks import benchmark

for i, model_path in enumerate(models_list):
    try:
        results = benchmark(
            model=model_path,
            data=data_yaml,
            imgsz=640,
            half=False,
            device="cpu",
        )
    except Exception as e:
        # log the failure instead of swallowing it silently
        print(f"Benchmark failed for {model_path}: {e}")
        continue

Result:

|    | Format                | Status :white_question_mark: | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) | FPS  |
|----|-----------------------|------------------------------|-----------|---------------------|------------------------|------|
| 1  | PyTorch               | :white_check_mark:           | 5.2       | 0.7548              | 165.56                 | 6.04 |
| 2  | TorchScript           | :white_check_mark:           | 10.4      | 0.7388              | 253.64                 | 3.94 |
| 3  | ONNX                  | :white_check_mark:           | 10.1      | 0.7388              | 234.08                 | 4.27 |
| 4  | OpenVINO              | :white_check_mark:           | 10.2      | 0.7403              | 178.82                 | 5.59 |
| 5  | TensorRT              | :cross_mark:                 | 0.0       | -                   | -                      | -    |
| 6  | CoreML                | :cross_mark_button:          | 5.1       | -                   | -                      | -    |
| 7  | TensorFlow SavedModel | :cross_mark:                 | 0.0       | -                   | -                      | -    |
| 8  | TensorFlow GraphDef   | :cross_mark:                 | 0.0       | -                   | -                      | -    |
| 9  | TensorFlow Lite       | :cross_mark:                 | 0.0       | -                   | -                      | -    |
| 10 | TensorFlow Edge TPU   | :cross_mark:                 | 0.0       | -                   | -                      | -    |
| 11 | TensorFlow.js         | :cross_mark:                 | 0.0       | -                   | -                      | -    |
| 12 | PaddlePaddle          | :white_check_mark:           | 20.4      | 0.7403              | 339.48                 | 2.95 |
| 13 | MNN                   | :white_check_mark:           | 10.0      | 0.739               | 185.79                 | 5.38 |
| 14 | NCNN                  | :cross_mark:                 | 0.0       | -                   | -                      | -    |
| 15 | IMX                   | :cross_mark:                 | 0.0       | -                   | -                      | -    |
| 16 | RKNN                  | :cross_mark:                 | 0.0       | -                   | -                      | -    |

Error information:

ERROR :cross_mark: TensorFlow SavedModel: export failure 2.0s: module 'onnx.helper' has no attribute 'float32_to_bfloat16'
ERROR :cross_mark: Benchmark failure for TensorFlow SavedModel: module 'onnx.helper' has no attribute 'float32_to_bfloat16'

ERROR :cross_mark: TensorFlow SavedModel: export failure 2.7s: module 'onnx.helper' has no attribute 'float32_to_bfloat16'
ERROR :cross_mark: Benchmark failure for TensorFlow Lite: module 'onnx.helper' has no attribute 'float32_to_bfloat16'

The NCNN format also doesn't work, but it didn't throw an error…
Thank you for your help :heart_hands:

Install onnx==1.19 before importing Ultralytics

thank you

That onnx.helper error is from an old onnx being used (and in Colab it often “sticks” if it was imported once). The fix only works if you install before importing ultralytics, then restart the runtime:

pip uninstall -y onnx
pip install -U onnx==1.19.0 ultralytics

After that, do Runtime → Restart runtime, then rerun your notebook from the top.
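
A quick sanity check after the restart, in a fresh cell before anything imports ultralytics, is to confirm the new onnx is actually the one being picked up (a minimal sketch):

import onnx

# Should print 1.19.0 (or whatever version you pinned); if it still shows the
# old version, the runtime was not restarted or an older install is shadowing it.
print(onnx.__version__)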

Also note that some red rows in benchmark() are expected depending on the environment: CoreML inference is macOS-only, and TensorRT requires TensorRT installed (not always available on standard Colab images). If you want to benchmark only what you care about, use the format argument mentioned in the Benchmark mode docs (e.g. format="onnx").
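
For example, something like this should restrict the run to a single export format (a sketch only; the model and data names are placeholders, and it assumes a recent ultralytics that supports the format argument from the Benchmark mode docs):

from ultralytics.utils.benchmarks import benchmark

# Benchmark only the ONNX export on CPU instead of looping over every format
benchmark(
    model="yolo11n.pt",   # placeholder: use your own model path
    data="coco8.yaml",    # placeholder: use your own data YAML
    imgsz=640,
    half=False,
    device="cpu",
    format="onnx",
)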