New Release: Ultralytics v8.3.212

Ultralytics v8.3.212 — Robust training, cleaner optimizer, faster CI :rocket:

Quick summary: v8.3.212 focuses on making training runs more resilient and predictable by hardening the Trainer against edge cases and simplifying the optimizer step for modern PyTorch. YOLO11 remains the recommended default for all use cases.

:glowing_star: Summary

  • Stronger Trainer stability and I/O resilience to keep long runs on track.
  • Simplified and unified optimizer step with AMP GradScaler for PyTorch 2.x.
  • Faster CI through uv-based installs (CI-only; no user-facing behavior change).

:sparkles: New Features

  • Safer metrics reading: read_results_csv() now returns {} on failures instead of raising, so tools and scripts can proceed gracefully.
  • Safer saving: the Trainer ensures directories exist before writing weights and logs (e.g., best.pt, last.pt, results.csv).

Details are in the PR Improve Trainer robustness to save_dir deletion by Glenn Jocher.
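The effect of the directory guard can be sketched with a tiny helper (illustrative only; safe_save is a hypothetical name, not the Trainer's actual API, and pickle stands in for torch.save):

```python
from pathlib import Path
import pickle
import shutil
import tempfile

def safe_save(obj, path):
    """Recreate missing parent directories before writing, so a deleted
    save_dir does not crash a long run (illustrative sketch)."""
    path = Path(path)
    path.parent.mkdir(parents=True, exist_ok=True)  # tolerate mid-run deletion
    with open(path, "wb") as f:
        pickle.dump(obj, f)
    return path

# Simulate save_dir being removed mid-run, then save anyway:
root = Path(tempfile.mkdtemp())
run_dir = root / "runs" / "train"
run_dir.mkdir(parents=True)
shutil.rmtree(run_dir)  # save_dir vanishes (flaky filesystem, manual cleanup, ...)
saved = safe_save({"epoch": 10}, run_dir / "weights" / "last.pt")
```

The same idea applies to results.csv and log files: recreate the directory on every write rather than assuming it still exists.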

:wrench: Improvements

  • Unified optimizer behavior: always use the AMP GradScaler; unscale gradients, clip them with clip_grad_norm_(max_norm=10.0), call scaler.step() and scaler.update(), then zero gradients. See the PR Revert optimizer_step() nan error changes by Glenn Jocher.
  • Trainer stability updates:
    • Removed the non-finite loss guard to avoid silently skipping batches; GradScaler already skips the optimizer step when gradients are non-finite.
    • Timed stopping remains unchanged and synchronized across DDP ranks.
    • Resilient file handling if save_dir is deleted mid-run or on flaky filesystems.
  • CI-only speedup: docker tests now install with uv for faster, deterministic runs via the PR Use uv for docker.yml pip install tests by Glenn Jocher.

:lady_beetle: Bug Fixes

  • No specific user-reported bugs addressed; this release targets robustness and predictability for training across edge cases.

:bullseye: Why it matters

  • More resilient training runs: avoids stalls or inconsistent behavior from transient NaNs/Infs by relying on GradScaler, and keeps timed stopping consistent in DDP.
  • Robust file I/O: automatically recreates missing directories and avoids crashes when reading locked or missing results.csv.
  • Cleaner, modern codebase for PyTorch 2.x: fewer legacy branches and a simpler optimizer step.

:light_bulb: Tip: reading metrics safely

If you consume training metrics programmatically, read_results_csv() now returns {} on failures:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=1)
metrics = model.trainer.read_results_csv()  # returns {} on failure instead of raising

:test_tube: Try the new release

  • Upgrade: pip install -U ultralytics
  • Train with the latest recommended model (YOLO11) and enjoy more predictable runs.
  • Share feedback or issues so we can continue improving the experience.

:package: What’s Changed (v8.3.212)

Explore the full diff in the v8.3.212 changelog, and grab the build from the v8.3.212 release page.

We appreciate your continued feedback and contributions—this release is a direct result of community input and testing. Happy training! :fire: