New Release: Ultralytics v8.4.35

Ultralytics v8.4.35 is out :tada:

Quick summary: Ultralytics v8.4.35 is a stability-first release focused on smarter training recovery, safer dataset caching, cleaner runtime behavior, and better compatibility across training and inference workflows. If you run long training jobs, debug custom datasets, or deploy on edge/OpenVINO setups, this is a very worthwhile upgrade :rocket:

You can explore the full release on GitHub Releases and review every code change in the full changelog.


:glowing_star: Highlights

:repeat_button: Smarter NaN recovery during training

The biggest change in v8.4.35 comes from PR #24154 by @glenn-jocher.

  • Training now recovers from last_good.pt instead of retrying a potentially corrupted last.pt
  • Checkpoint saving is skipped when EMA weights contain NaN or Inf
  • This helps prevent bad checkpoints from propagating through retries

Why it matters: fewer failed long runs, less wasted GPU time, and more trustworthy resume behavior :shield:

:card_index_dividers: Safer and clearer detection dataset caching

Also in PR #24154 by @glenn-jocher:

  • Empty detection caches are no longer written
  • Corrupt label failures now report clearer final errors
  • Detection dataset checks now accept a dataset directory directly, not just a YAML file
  • NDJSON-to-YOLO cached conversions are rebuilt when expected split folders are missing

Why it matters: debugging data issues is now faster and much less frustrating :test_tube:

:gear: More robust OpenVINO inference

PR #24156 by @glenn-jocher improves OpenVINO runtime behavior by:

  • Making device fallback logic safer
  • Forcing FP32 inference on Linux ARM64 CPU for better compatibility and stability

Why it matters: more predictable deployment behavior, especially on edge and ARM systems :factory:


Improvements

:chart_increasing: Better experiment resume behavior

PR #24110 by @artest08 updates W&B resume behavior so resumed training continues the original run instead of creating a disconnected new one.

:brain: Better training config handling

PR #23640 by @artest08 prevents duplicate pretrained weight loading in Model.train() for YAML-based model configs.

:bell_with_slash: Cleaner logs

PR #24154 by @glenn-jocher throttles repeated "Platform: Model not found" warnings to reduce log noise during runs.

:wrench: Broader compatibility updates

This release also rolls up a number of smaller compatibility updates; the full changelog has the complete list.

Docs and ecosystem updates :books:

This release also includes several documentation improvements to better reflect the current Ultralytics YOLO ecosystem, with Ultralytics YOLO26 continuing as the recommended model line for new projects.


Bug fixes

A few additional smaller fixes also landed in this release; every change is listed in the full changelog.


New contributor :raising_hands:

A big welcome to @ausk, who made their first contribution in PR #24145! Thanks for helping improve the project :blue_heart:


Why upgrade?

v8.4.35 is a low-risk, high-value update that improves reliability without changing your overall workflow:

  • More trustworthy recovery when training encounters NaNs
  • Better dataset validation and clearer cache behavior
  • Cleaner logs during long runs
  • More stable OpenVINO inference on ARM and fallback scenarios
  • Smoother experiment tracking resumes

If you’re training frequently, validating custom datasets, or deploying production pipelines, this release should make day-to-day work noticeably smoother :white_check_mark:


Try it out

Update with:

pip install -U ultralytics

Then let us know how v8.4.35 works in your pipelines, especially around resume behavior, dataset validation, and OpenVINO deployment. Feedback, edge cases, and regression reports are always appreciated :folded_hands: