New Release: Ultralytics v8.3.236

:rocket: Ultralytics v8.3.236 Release – Axelera Metis AIPU Support, Better Docs, Smoother Deployments

Hi everyone!

Ultralytics v8.3.236 is out :tada: This release is all about deployment and usability:

  • First‑class Axelera Metis AIPU export + inference for YOLO detection models
  • Stronger docs, navigation, and integrations (Neptune, prediction tutorial)
  • More robust example scripts and CI / version checks

If you deploy YOLO at the edge, track experiments, or rely on our examples/docs, this release should feel like a solid quality-of-life upgrade.

:backhand_index_pointing_right: You can view the full release details in the Ultralytics v8.3.236 GitHub release notes.


:glowing_star: Summary

Ultralytics 8.3.236 introduces:

  • New axelera export format to run YOLO detection models directly on Axelera Metis AIPUs, with a matching runtime backend
  • A fully rewritten Axelera integration guide that is now practical and end‑to‑end
  • A new Neptune integration page for experiment tracking with YOLO11
  • Updated prediction tutorial video tied to current YOLO11 APIs
  • Improved docs navigation, language switching, and embedded chat experience
  • Hardened ONNX / TFLite / RT-DETR examples and CI coverage for new toolchains

:fire: New Features

1. Axelera Metis AIPU Export & Inference

YOLO detection models (YOLOv8 and YOLO11) can now be exported and run on Axelera Metis AIPUs using a native Ultralytics workflow.

Export to Axelera format from the CLI:

yolo export model=yolo11n.pt format=axelera

or from Python:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(format="axelera")

This creates an *_axelera_model/ directory (for example yolo11n_axelera_model/) containing:

  • Compiled .axm model
  • metadata.yaml with classes, image size, and other metadata
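
For a quick sanity check of an export, you can load the metadata directly from Python. A minimal sketch; the "names" and "imgsz" keys are assumptions based on other Ultralytics export formats, so verify them against your own metadata.yaml:

from pathlib import Path

import yaml  # PyYAML

# Assumed keys: "names" and "imgsz" mirror other Ultralytics exports; the
# exact schema for the Axelera format is not spelled out in these notes.
meta = yaml.safe_load(Path("yolo11n_axelera_model/metadata.yaml").read_text())
print(meta.get("names"), meta.get("imgsz"))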

Run inference like any other YOLO model, from the CLI:

yolo predict model=yolo11n_axelera_model source=path/to/video.mp4

or from Python:

from ultralytics import YOLO

model = YOLO("yolo11n_axelera_model")
results = model("path/to/image.jpg")

Under the hood:

  • ONNX is exported and then compiled with the Axelera Voyager SDK
  • INT8 calibration uses a dedicated dataloader and custom preprocessing
  • The pipeline auto‑configures int8=True, opset=17, and a default data=coco128.yaml if none is provided
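
If you prefer to set these explicitly instead of relying on auto-configuration, the arguments from the docs' export table (imgsz, int8, data, fraction, device) can be passed directly. A minimal sketch with illustrative values:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
# Set explicitly what the pipeline would otherwise auto-configure.
model.export(
    format="axelera",
    int8=True,            # INT8 quantization (auto-enabled for this format)
    data="coco128.yaml",  # calibration dataset (the default when none is given)
    imgsz=640,
    fraction=0.25,        # assumption: fraction of the dataset used for calibration
)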

This integration is currently focused on object detection models and marked as experimental, but it closes the loop from:

Train → Export to axelera → Deploy on Metis AIPU


:books: Documentation & Integration Improvements

Axelera Integration Docs (Now Fully Usable)

The Axelera docs were upgraded from “coming soon” to a complete, working guide. The updated page:

  • Is linked in the codebase as the Axelera Metis AIPU integration
  • Clearly documents requirements:
    • Linux (Ubuntu 22.04 / 24.04)
    • Axelera hardware + drivers
    • Python 3.10
  • Walks through driver and SDK install with apt repo setup
  • Provides concrete examples for:
    • model.export(format="axelera") and yolo export ...
    • Inference using the *_axelera_model directory in both CLI and Python
  • Adds an export arguments table (imgsz, int8, data, fraction, device)
  • Gives calibration guidance (recommending 100–400 images, warning if <100)
  • Calls out known issues & limitations (PyTorch versions, first-run imports, power constraints)
  • Includes real‑world applications and recommended workflows for deploying YOLO with Metis

Neptune Integration Guide :brain:

A new Neptune integration page explains how to connect YOLO11 training runs to Neptune for experiment tracking:

  • Setup of API tokens via environment variables
  • Correct project= slug format such as workspace/project-name
  • CLI and Python examples for running training with automatic logging
  • Description of what gets logged (losses, mAP, images, mosaics, validation samples, weights, and artifacts)
  • Notes around Neptune’s SaaS lifecycle so users can plan their tracking strategy
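
A minimal sketch of what a tracked run can look like, assuming Neptune's standard NEPTUNE_API_TOKEN environment variable and an enabled Neptune callback in your Ultralytics settings (the guide covers the exact setup):

import os

from ultralytics import YOLO

# Assumption: token normally set in your shell rather than in code.
os.environ["NEPTUNE_API_TOKEN"] = "your-api-token"

model = YOLO("yolo11n.pt")
# project= takes the workspace/project-name slug format described above.
model.train(data="coco8.yaml", epochs=3, project="workspace/project-name")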

You can see this work in the Neptune integrations PR by @glenn-jocher.

Updated YOLO11 Prediction Tutorial Video :movie_camera:

The predict mode documentation now embeds a new YouTube tutorial titled “How to Extract Results from Ultralytics YOLO11 Tasks for Custom Projects :rocket:”. This ensures the video examples match the current YOLO11 results API and patterns.

The embed and docs update were contributed in the video docs PR by @RizwanMunawar.


:compass: Docs, Navigation & UX

Several changes are aimed at making the docs smoother to browse and easier to localize:

  • Search disabled in MkDocs while a better solution is explored, with search.json removed during build to avoid serving a broken search experience
  • Language switcher fixed and refactored so language changes:
    • Preserve the current path, query parameters, and hash
    • Avoid 404s from sitemap handling via a small JS wrapper
  • Lists normalized and alphabetized across Datasets, Guides, Solutions, and Integrations to reduce clutter and improve scannability
  • Chat widget upgraded to chat.min.js v0.1.6 for a smoother embedded Ultralytics LLM experience in the docs

These improvements landed across several documentation and UX PRs.


:test_tube: Tooling, Examples & CI

Axelera CI & Version Gating

To keep the new Axelera path stable:

  • CI matrix now includes PyTorch 2.8 / torchvision 0.23 on Ubuntu
  • New tests (test_export_axelera) verify that:
    • Export executes successfully
    • The target *_axelera_model directory is created
    • Artifacts are cleaned up after tests
  • New helpers introduced:
    • check_apt_requirements() to detect missing apt packages via dpkg -l and install them when needed (used for Axelera dependencies)
    • IS_PYTHON_3_10 constant to gate Axelera export
    • TORCH_2_8 flag for version checks
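
For illustration, here is a rough sketch of what these helpers could look like. The constant and function names come from the release notes, but the bodies below are assumptions, not the actual ultralytics implementation:

import subprocess
import sys

import torch

# Names from the release notes; bodies are illustrative only.
IS_PYTHON_3_10 = sys.version_info[:2] == (3, 10)
TORCH_2_8 = torch.__version__.startswith("2.8")  # the real flag may use a >= check


def check_apt_requirements(packages):
    """Install any apt packages that `dpkg -l` does not report as installed (sketch)."""
    installed = subprocess.run(["dpkg", "-l"], capture_output=True, text=True).stdout
    missing = [p for p in packages if p not in installed]
    if missing:
        subprocess.run(["sudo", "apt-get", "install", "-y", *missing], check=True)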

The main Axelera export and backend functionality was delivered in the Axelera export PR by @ambitious-octopus.

Hardening Example Scripts

Examples were updated to be more robust across machines and environments:

  • ONNXRuntime examples now auto-detect available providers using ort.get_available_providers() and prefer CUDAExecutionProvider when available, falling back to CPU otherwise (see the sketch after this list)
  • NMS outputs in ONNX/OpenCV examples are normalized with np.array(...).flatten() and explicit int casting to avoid index shape bugs
  • TFLite example now correctly respects the --img argument instead of using a hardcoded path
  • The urllib3 dependency in RT-DETR ONNX examples was bumped to 2.6.0 to keep the HTTP stack current
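
For reference, a minimal sketch of that provider-selection pattern (the model path is a placeholder):

import onnxruntime as ort

# Prefer CUDA when present, otherwise fall back to CPU.
available = ort.get_available_providers()
providers = (
    ["CUDAExecutionProvider", "CPUExecutionProvider"]
    if "CUDAExecutionProvider" in available
    else ["CPUExecutionProvider"]
)
session = ort.InferenceSession("yolo11n.onnx", providers=providers)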

The dependency bump is captured in the urllib3 update PR by @dependabot.


:bullseye: Purpose & Impact

This release is aimed at:

  • Unlocking high-performance edge deployment on Axelera Metis AIPUs by letting you stay within the familiar Ultralytics workflow from training to export to inference
  • Reducing friction in environment setup with automatic checks for system dependencies and explicit Python/Torch version gates
  • Keeping documentation aligned with real-world usage, so the guides describe what actually works today, not what is planned
  • Improving docs browsing and internationalization so you can switch languages or navigate around without broken search or lost context
  • Making examples safer to copy-paste across CPU-only and GPU-enabled systems with fewer “it crashed on my machine” surprises

:memo: What’s Changed (PR Overview)

For a PR-level view of the changes included in v8.3.236, see the full changelog diff from v8.3.235 to v8.3.236.


:rocket: How to Upgrade & Try It

Upgrade to the latest Ultralytics package:

pip install -U ultralytics

Export a YOLO11 detection model to Axelera Metis AIPU format:

yolo export model=yolo11n.pt format=axelera imgsz=640 int8=True

Then run inference on the exported model:

yolo predict model=yolo11n_axelera_model source=path/to/source

Or via Python:

from ultralytics import YOLO

model = YOLO("yolo11n_axelera_model")
results = model("path/to/image.jpg")
for r in results:
    # Per-image Results object: box coordinates (xyxy), confidences, class indices
    print(r.boxes.xyxy, r.boxes.conf, r.boxes.cls)

:speech_balloon: Feedback & Discussion

We would love to hear how this release works for you, especially if you:

  • Are deploying YOLO on Axelera Metis AIPUs
  • Use Neptune or other experiment tracking tools
  • Rely on our examples and docs for onboarding new projects or teammates

Please share your feedback, issues, and suggestions here in Discourse or in the Ultralytics GitHub issues and discussions. The improvements in this release come directly from community usage and reports, so every bit of feedback helps. :folded_hands:

On behalf of the Ultralytics team and the wider YOLO community, thanks for continuing to push the boundaries of what YOLO can do!
