Ultralytics v8.3.236 Release – Axelera Metis AIPU Support, Better Docs, Smoother Deployments
Hi everyone!
Ultralytics v8.3.236 is out! This release is all about deployment and usability:
- First‑class Axelera Metis AIPU export + inference for YOLO detection models
- Stronger docs, navigation, and integrations (Neptune, prediction tutorial)
- More robust example scripts and CI / version checks
If you deploy YOLO at the edge, track experiments, or rely on our examples/docs, this release should feel like a solid quality-of-life upgrade.
You can view the full release details in the Ultralytics v8.3.236 GitHub release notes.
Summary
Ultralytics 8.3.236 introduces:
- New `axelera` export format to run YOLO detection models directly on Axelera Metis AIPUs, with a matching runtime backend
- A fully rewritten Axelera integration guide that is now practical and end‑to‑end
- A new Neptune integration page for experiment tracking with YOLO11
- Updated prediction tutorial video tied to current YOLO11 APIs
- Improved docs navigation, language switching, and embedded chat experience
- Hardened ONNX / TFLite / RT-DETR examples and CI coverage for new toolchains
New Features
1. Axelera Metis AIPU Export & Inference
YOLO detection models (YOLOv8 and YOLO11) can now be exported and run on Axelera Metis AIPUs using a native Ultralytics workflow.
Export to Axelera format from the CLI:

```bash
yolo export model=yolo11n.pt format=axelera
```

Or in Python:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(format="axelera")
```
This creates an *_axelera_model/ directory (for example yolo11n_axelera_model/) containing:
- Compiled `.axm` model
- `metadata.yaml` with classes, image size, and other metadata (a reading sketch follows below)
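If you need this metadata programmatically (for example, class names for custom post-processing), you can load the file directly. A minimal sketch, assuming PyYAML is available; the exact keys (`names`, `imgsz`) are written by the export pipeline, so check your own file to confirm them:

```python
from pathlib import Path

import yaml  # PyYAML

# Keys below are illustrative; inspect your own metadata.yaml to confirm
# what the export pipeline actually wrote.
meta = yaml.safe_load(Path("yolo11n_axelera_model/metadata.yaml").read_text())
print(meta.get("names"))  # class names
print(meta.get("imgsz"))  # export image size
```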
Run inference like any other YOLO model:

```bash
yolo predict model=yolo11n_axelera_model source=path/to/video.mp4
```

```python
from ultralytics import YOLO

model = YOLO("yolo11n_axelera_model")
results = model("path/to/image.jpg")
```
Under the hood:
- ONNX is exported and then compiled with the Axelera Voyager SDK
- INT8 calibration uses a dedicated dataloader and custom preprocessing
- The pipeline auto‑configures `int8=True`, `opset=17`, and a default `data=coco128.yaml` if none is provided (these can also be passed explicitly, as sketched below)
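If you would rather pin these values than rely on the auto-configuration, the standard export arguments accept them directly. A minimal sketch with illustrative values:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
# Mirrors the auto-configured behavior described above: INT8 quantization
# calibrated against COCO128; imgsz=640 is illustrative.
model.export(format="axelera", int8=True, data="coco128.yaml", imgsz=640)
```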
This integration is currently focused on object detection models and marked as experimental, but it closes the full loop:

Train → Export to `axelera` → Deploy on Metis AIPU
Documentation & Integration Improvements
Axelera Integration Docs (Now Fully Usable)
The Axelera docs were upgraded from “coming soon” to a complete, working guide. The updated page:
- Is linked in the codebase as the Axelera Metis AIPU integration
- Clearly documents requirements:
  - Linux (Ubuntu 22.04 / 24.04)
  - Axelera hardware + drivers
  - Python 3.10
- Walks through driver and SDK install with `apt` repo setup
- Provides concrete examples for:
  - `model.export(format="axelera")` and `yolo export ...`
  - Inference using the `*_axelera_model` directory in both CLI and Python
- Adds an export arguments table (`imgsz`, `int8`, `data`, `fraction`, `device`)
- Gives calibration guidance (recommending 100–400 images, warning if <100); a hedged export example follows this list
- Calls out known issues & limitations (PyTorch versions, first-run imports, power constraints)
- Includes real‑world applications and recommended workflows for deploying YOLO with Metis
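To tie the arguments table and the calibration guidance together, a hedged example; `my_calibration_set.yaml` is a placeholder dataset, and the `fraction` value is chosen purely to illustrate the 100–400 image recommendation:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
# With a ~1000-image calibration dataset, fraction=0.2 keeps ~200 images,
# comfortably inside the recommended 100-400 window.
model.export(
    format="axelera",
    imgsz=640,
    int8=True,
    data="my_calibration_set.yaml",  # placeholder; use your own dataset YAML
    fraction=0.2,
)
```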
Neptune Integration Guide 
A new Neptune integration page explains how to connect YOLO11 training runs to Neptune for experiment tracking:
- Setup of API tokens via environment variables
- Correct `project=` slug format such as `workspace/project-name`
- CLI and Python examples for running training with automatic logging (a minimal Python sketch follows this list)
- Description of what gets logged (losses, mAP, images, mosaics, validation samples, weights, and artifacts)
- Notes around Neptune’s SaaS lifecycle so users can plan their tracking strategy
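As a quick orientation before reading the full guide, a minimal Python sketch; the token value and the `workspace/project-name` slug are placeholders, and the token would normally be set in your shell rather than in code:

```python
import os

from ultralytics import YOLO

# Placeholder token; in practice, export NEPTUNE_API_TOKEN in your shell.
os.environ["NEPTUNE_API_TOKEN"] = "<your-neptune-api-token>"

model = YOLO("yolo11n.pt")
# project= takes the Neptune slug format described above; losses, metrics,
# images, and weights are then logged automatically during training.
model.train(data="coco8.yaml", epochs=3, project="workspace/project-name")
```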
You can see this work in the Neptune integrations PR by @glenn-jocher.
Updated YOLO11 Prediction Tutorial Video 
The predict mode documentation now embeds a new YouTube tutorial titled “How to Extract Results from Ultralytics YOLO11 Tasks for Custom Projects”. This ensures the video examples match the current YOLO11 results API and patterns.
The embed and docs update were contributed in the video docs PR by @RizwanMunawar.
Docs, Navigation & UX
Several changes are aimed at making the docs smoother to browse and easier to localize:
- Search disabled in MkDocs while a better solution is explored, with `search.json` removed during build to avoid broken experiences
- Language switcher fixed and refactored so language changes:
  - Preserve the current path, query parameters, and hash
  - Avoid 404s from sitemap handling via a small JS wrapper
- Lists normalized and alphabetized across Datasets, Guides, Solutions, and Integrations to reduce clutter and improve scannability
- Chat widget upgraded to `chat.min.js` v0.1.6 for a smoother embedded Ultralytics LLM experience in the docs
These improvements landed via several PRs, including:
- Docs section sorting in the alphabetized docs PR by @glenn-jocher
- Search JSON removal in the search cleanup PR by @glenn-jocher
- Zensical language switch 404 fixes in the 404 fix PR by @glenn-jocher
- Language switcher refactor in the link handling refactor PR by @glenn-jocher
- Chat JS version bumps in the chat.js v0.1.4 PR by @glenn-jocher and chat.min.js v0.1.6 PR by @glenn-jocher
Tooling, Examples & CI
Axelera CI & Version Gating
To keep the new Axelera path stable:
- CI matrix now includes PyTorch 2.8 / torchvision 0.23 on Ubuntu
- New tests (`test_export_axelera`) verify that:
  - Export executes successfully
  - The target `*_axelera_model` directory is created
  - Artifacts are cleaned up after tests
- New helpers introduced (a hypothetical sketch of the gating pattern follows this list):
  - `check_apt_requirements()` to detect missing `apt` packages via `dpkg -l` and install them when needed (used for Axelera dependencies)
  - `IS_PYTHON_3_10` constant to gate Axelera export
  - `TORCH_2_8` flag for version checks
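The gating itself follows a simple pattern. A hypothetical re-creation for illustration only; the real constants are defined inside the Ultralytics package, and this sketch just shows the shape of the checks:

```python
import sys

import torch

# Hypothetical stand-ins for the constants described above.
IS_PYTHON_3_10 = sys.version_info[:2] == (3, 10)
TORCH_2_8 = torch.__version__.startswith("2.8")

if not IS_PYTHON_3_10:
    raise RuntimeError("Axelera export is gated to Python 3.10")
```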
The main Axelera export and backend functionality was delivered in the Axelera export PR by @ambitious-octopus.
Hardening Example Scripts
Examples were updated to be more robust across machines and environments:
- ONNXRuntime examples now auto-detect available providers using `ort.get_available_providers()` and prefer `CUDAExecutionProvider` when available, otherwise falling back to CPU
- NMS outputs in ONNX/OpenCV examples are normalized with `np.array(...).flatten()` and explicit `int` casting to avoid index shape bugs (both patterns are sketched after this list)
- TFLite example now correctly respects the `--img` argument instead of using a hardcoded path
- The `urllib3` dependency in RT-DETR ONNX examples was bumped to `2.6.0` to keep the HTTP stack current
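The two hardening patterns look roughly like this. A minimal sketch, assuming a local `yolo11n.onnx` file plus `onnxruntime`, `opencv-python`, and `numpy` installed; the boxes and scores are dummy data:

```python
import cv2
import numpy as np
import onnxruntime as ort

# Prefer CUDA when the runtime reports it, otherwise fall back to CPU.
available = ort.get_available_providers()
providers = ["CUDAExecutionProvider"] if "CUDAExecutionProvider" in available else []
session = ort.InferenceSession("yolo11n.onnx", providers=providers + ["CPUExecutionProvider"])

# NMSBoxes may return an empty tuple or differently shaped arrays across
# OpenCV versions; flattening plus int casting normalizes the indices.
boxes = [[10.0, 10.0, 50.0, 50.0], [12.0, 12.0, 48.0, 48.0]]  # dummy xywh boxes
scores = [0.9, 0.8]
indices = cv2.dnn.NMSBoxes(boxes, scores, score_threshold=0.25, nms_threshold=0.45)
for i in np.array(indices).flatten():
    print(boxes[int(i)])
```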
The dependency bump is captured in the urllib3 update PR by @dependabot.
Purpose & Impact
This release is aimed at:
- Unlocking high-performance edge deployment on Axelera Metis AIPUs by letting you stay within the familiar Ultralytics workflow from training to export to inference
- Reducing friction in environment setup with automatic checks for system dependencies and explicit Python/Torch version gates
- Keeping documentation aligned with real-world usage, so the guides describe what actually works today, not what is planned
- Improving docs browsing and internationalization so you can switch languages or navigate around without broken search or lost context
- Making examples safer to copy-paste across CPU-only and GPU-enabled systems with fewer “it crashed on my machine” surprises
What’s Changed (PR Overview)
Here is a quick PR-level view of changes included in v8.3.236:
- New Neptune integrations docs by @glenn-jocher in the Neptune integrations PR
- Chat script update to `v0.1.4` by @glenn-jocher in the chat.js v0.1.4 PR
- `urllib3` bump from `2.5.0` to `2.6.0` in RT-DETR ONNXRuntime examples by @dependabot in the urllib3 bump PR
- New YOLO11 prediction tutorial video added to docs by @RizwanMunawar in the video embed PR
- Chat embed updated to `chat.min.js` v0.1.6 by @glenn-jocher in the chat.min.js update PR
- Docs sections sorted alphabetically by @glenn-jocher in the alphabetized docs PR
- Legacy search JSON removed from builds by @glenn-jocher in the search cleanup PR
- Language switch 404s fixed for Zensical sites by @glenn-jocher in the 404 fix PR
- Language switcher link handling refactored by @glenn-jocher in the language switch refactor PR
- Axelera export support for YOLO on Metis AIPU by @ambitious-octopus in the Axelera export PR
You can view the full changelog diff from v8.3.235 to v8.3.236 for more details.
How to Upgrade & Try It
Upgrade to the latest Ultralytics package:

```bash
pip install -U ultralytics
```

Export a YOLO11 detection model to Axelera Metis AIPU format:

```bash
yolo export model=yolo11n.pt format=axelera imgsz=640 int8=True
```

Then run inference on the exported model:

```bash
yolo predict model=yolo11n_axelera_model source=path/to/source
```

Or via Python:

```python
from ultralytics import YOLO

model = YOLO("yolo11n_axelera_model")
results = model("path/to/image.jpg")
for r in results:
    print(r.boxes.xyxy, r.boxes.conf, r.boxes.cls)
```
Feedback & Discussion
We would love to hear how this release works for you, especially if you:
- Are deploying YOLO on Axelera Metis AIPUs
- Use Neptune or other experiment tracking tools
- Rely on our examples and docs for onboarding new projects or teammates
Please share your feedback, issues, and suggestions here in Discourse or in the Ultralytics GitHub issues and discussions. The improvements in this release come directly from community usage and reports, so every bit of feedback helps.
On behalf of the Ultralytics team and the wider YOLO community, thanks for continuing to push the boundaries of what YOLO can do!