New Release: Ultralytics v8.4.34

Ultralytics v8.4.34 is out :rocket:

Quick summary: Ultralytics v8.4.34 is a tuning and stability-focused release with one standout new capability: multi-dataset hyperparameter tuning in a single run :brain:. This update also brings more reliable training resume behavior, thread-safe ONNX export, several core runtime fixes, and broad documentation refreshes across Ultralytics YOLO and the Ultralytics Platform.

If you’re training across mixed domains, exporting in parallel workflows, or deploying on edge devices, this release is definitely worth a look :backhand_index_pointing_down:

:glowing_star: Highlights

:brain: Multi-dataset hyperparameter tuning

The biggest addition in v8.4.34 comes from PR #24067 by @Laughing-q, which adds support for passing multiple datasets to model.tune().

What’s new:

  • data can now be a single dataset or a list of datasets
  • Each tuning iteration trains across all provided datasets
  • Results are combined and fitness is averaged across datasets
  • Tuning decisions now reflect overall performance instead of overfitting to just one dataset

This is especially useful for teams training one model across different domains like:

  • color + grayscale data
  • multiple cameras or environments
  • blended internal/external datasets

A minimal example looks like this:

from ultralytics import YOLO

model = YOLO("yolo26n.pt")
model.tune(data=["coco8.yaml", "coco8-grayscale.yaml"], epochs=5, iterations=10)
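To make the averaging behavior concrete, here is an illustrative pure-Python sketch of the idea (not the actual Ultralytics internals): each candidate hyperparameter set is scored on every dataset, and the per-dataset fitness values are averaged, so tuning favors candidates that perform well overall rather than on a single domain.

```python
# Illustrative sketch only, not Ultralytics source code: the
# combined_fitness helper below is hypothetical and simply shows
# how averaging fitness across datasets changes tuning decisions.

def combined_fitness(per_dataset_fitness: list[float]) -> float:
    """Average fitness across datasets; higher is better."""
    return sum(per_dataset_fitness) / len(per_dataset_fitness)

# A candidate that is strong on one dataset but weak on another...
specialist = combined_fitness([0.90, 0.40])   # 0.65
# ...loses to a candidate that is solid on both.
generalist = combined_fitness([0.72, 0.70])   # ~0.71
print(specialist < generalist)  # True
```

With a single dataset the specialist would have won; averaging across both datasets is what steers tuning away from overfitting to one domain.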

:white_check_mark: Stability and reliability improvements

:shield: Safer training resume on small datasets

PR #24085 by @Y-T-G fixes a resume-related loss spike issue by keeping AdamW’s exp_avg_sq state in FP32. This helps reduce instability when continuing training from checkpoints, especially on smaller datasets.
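A small numerical sketch shows why the optimizer-state precision matters. This is illustrative only (it does not reproduce the AdamW update itself): late in training, squared gradients can be tiny, and accumulating them in FP16 can underflow to zero while FP32 preserves them, which distorts the effective step size when resuming.

```python
import numpy as np

# Illustrative only: accumulate a tiny squared-gradient value, as an
# exp_avg_sq-like buffer might, in FP32 vs FP16. The value 1e-8 is
# below FP16's smallest representable subnormal (~6e-8), so the FP16
# accumulator never moves, while FP32 tracks it correctly.

grad_sq = 1e-8  # a small squared-gradient value
steps = 1000

acc_fp32 = np.float32(0.0)
acc_fp16 = np.float16(0.0)
for _ in range(steps):
    acc_fp32 = np.float32(acc_fp32 + grad_sq)
    acc_fp16 = np.float16(acc_fp16 + np.float16(grad_sq))

print(acc_fp32)  # close to 1e-5
print(acc_fp16)  # 0.0 — the updates underflowed
```

A zeroed second-moment estimate makes the optimizer's adaptive denominator collapse, which is one way a resumed run can see a sudden loss spike.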

:locked: Thread-safe ONNX export

PR #24092 by @glenn-jocher adds export locking to prevent collisions in PyTorch’s global ONNX exporter state when exports run concurrently. A regression test was also included for parallel export safety.

:gear: Core runtime robustness

Several fixes improve reliability across training and inference workflows.

:books: Docs and ecosystem updates

This release also includes a broad docs refresh with stronger YOLO26 coverage, updated edge guidance, and Platform improvements.

  • Edge and deployment docs
  • SAM documentation refresh
  • Ultralytics Platform improvements
  • Additional maintenance and docs updates

:bullseye: Why this release matters

This update improves three important parts of the workflow:

  • Better tuning quality: multi-dataset tuning helps produce models that generalize better across varied data sources
  • More dependable pipelines: resuming training and exporting to ONNX are now safer in real-world production setups
  • Better deployment guidance: refreshed YOLO26 and Jetson docs make edge decisions easier and more current

For new projects, YOLO26 remains the recommended model family, with smaller, faster, and more accurate models than YOLO11 across all supported tasks.

:raising_hands: New contributors

A big welcome to these first-time contributors :tada:

:link: Try it out

You can upgrade with:

pip install -U ultralytics

Then explore the full release on the v8.4.34 release page, or browse every change in the full changelog from v8.4.33 to v8.4.34.

If you give v8.4.34 a try, we’d love to hear how it performs for your training, export, and deployment workflows :speech_balloon: