Ultralytics v8.3.198 — Smarter tuning, sturdier training, simpler exports 
Summary
Ultralytics v8.3.198 delivers a major upgrade to hyperparameter tuning with BLX-α crossover, unified metric plotting across tasks, safer training defaults, and multiple robustness fixes (NMS, DDP loss, Intel GPU checks). You’ll also find a cleaner export API and flexible torch.compile modes to fit your workflow. If you’re training or deploying YOLO today, this release is a win.
Highlights
- Smarter tuning: BLX-α crossover + adaptive mutation for faster convergence and stronger results.
- Unified metrics: Consistent `plot_results` across detect/segment/pose/classify.
- Safer, sturdier: NMS correctness, DDP unwrap before loss, environment-safe Intel GPU checks.
- Simpler exports: Export functions now return just the output path string.
- Flexible compile: `compile` accepts True/False or mode strings across train/val/predict.
Tip: YOLO11 is the latest stable and recommended Ultralytics model. If you’re considering YOLO12 or YOLO13, we recommend sticking with YOLO11 for best accuracy-speed-stability.
New Features
- Hyperparameter Tuner (priority)
- Introduces BLX-α gene crossover for exploration across top parents, improving search quality, as detailed in PR #22038. This also adds adaptive mutation sigma, safer bounds, `close_mosaic` in the search space, and uses only mAP@0.5:0.95 for fitness to align with common benchmarks. Runtime hygiene cleans GPU memory between iterations and improves resume/CSV/Mongo handling.
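To make the idea concrete, here is a minimal sketch of BLX-α crossover on a hyperparameter genome. The parameter names and values are hypothetical illustrations, not the Tuner's actual search space or implementation; the point is that each child gene is sampled from an interval that extends α beyond the two parents' gene range, allowing exploration outside the parents' values.

```python
import random

def blx_alpha(parent_a, parent_b, alpha=0.5):
    """BLX-α crossover: for each gene, sample uniformly from
    [lo - alpha*span, hi + alpha*span], where lo/hi are the
    parents' gene values and span = hi - lo."""
    child = {}
    for key in parent_a:
        lo = min(parent_a[key], parent_b[key])
        hi = max(parent_a[key], parent_b[key])
        span = hi - lo
        child[key] = random.uniform(lo - alpha * span, hi + alpha * span)
    return child

# Hypothetical genomes from two top-performing parents
parent_a = {"lr0": 0.01, "momentum": 0.90}
parent_b = {"lr0": 0.02, "momentum": 0.95}
child = blx_alpha(parent_a, parent_b, alpha=0.5)
```

With α = 0.5, the child's `lr0` can land anywhere in [0.005, 0.025], half a span beyond either parent, which is what drives the broader exploration compared with plain interpolation.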
Improvements
- Exporter API cleanup
- Export functions now return just the output path string (TF SavedModel also returns the Keras model) for lighter, cleaner pipelines, implemented in PR #22009.
- torch.compile flexibility
- `compile` now supports True/False or mode strings `"default" | "reduce-overhead" | "max-autotune"` across train/val/predict, thanks to PR #21999.
- Unified results plotting
- `plot_results` auto-detects metrics/losses and works for all tasks with logic centralized in `BaseTrainer`, as unified in PR #22026.
- Training robustness
- DDP/compiled models are unwrapped before loss calculation to avoid wrapper-related issues, addressed in PR #22016.
- NMS correctness
- Fixed early-exit and sorting in TorchNMS to reduce false positives and improve stability, delivered in PR #22014.
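For context, greedy NMS depends on candidates being visited in descending score order before overlaps are suppressed, which is the sorting behavior the fix concerns. Below is a minimal pure-Python sketch of the standard algorithm, not the actual `TorchNMS` code.

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thres=0.5):
    """Greedy NMS: visit boxes in descending score order, keep the
    current best, and suppress lower-scoring boxes that overlap it
    above the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [j for j in order if iou(boxes[best], boxes[j]) < iou_thres]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```

If sorting is skipped or an early exit fires before all overlaps are checked, a lower-scoring duplicate of an already-kept box can survive, which is exactly how false positives creep in.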
Bug Fixes and Stability
- Segmentation
- Fixed mask plotting when the number of objects equals the number of images in a batch (PR #22031) and a mask shape mismatch when validating with `mask_ratio=1` (PR #22037).
- Environment resilience
- Intel GPU discovery now catches all exceptions (PR #22034), and the labels cache is reset when loaded with an incompatible NumPy version (PR #22025).
- Docs & configs
- Clearer configs and tracker YAMLs, improved compile args docs, corrected detection boxes column order (track_id position), and a lighter Quickstart (Seaborn no longer required), improved across PRs #22011, #22028, and #22035.
- The Construction-PPE dataset docs now include a “Business Value” section to help justify real-world ROI, added in PR #22029.


Quick Examples
- Tuning with the improved Tuner:

```python
from ultralytics import YOLO

model = YOLO("yolo11s.yaml")
model.tune(
    device=0,
    data="coco128.yaml",
    optimizer="AdamW",
    epochs=100,
    batch=8,
    compile=False,
    plots=False,
    val=False,
    save=False,
    workers=16,
    project="tune-yolo11s-scratch-coco128-100e",
    iterations=1000,
)
```
- Using compile modes:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3, compile="reduce-overhead")  # or "default", "max-autotune", True/False
```
- Export now returns a file path:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
onnx_path = model.export(format="onnx")  # 'onnx_path' is a string
```
Upgrade
- Update to the latest release with:
```bash
pip install -U ultralytics
```
- Full release notes are available in the v8.3.198 entry on the Ultralytics Releases page.
What’s Changed (PRs)
- Clean up `Exporter` and remove unnecessary `None` placeholder in PR #22009 by Laughing-q.
- Allow settable compile mode for `torch.compile` in PR #21999 by Laughing-q.
- Improve config YAMLs in PR #22011 by glenn-jocher.
- Unwrap DDP model before loss calculation in PR #22016 by Y-T-G.
- Fix `TorchNMS.nms()` early exit logic in PR #22014 by Y-T-G.
- Revise detection boxes argument documentation in PR #22028 by daniel-mooney.
- Fix mask plotting when number of objects equals number of images in a batch in PR #22031 by Y-T-G.
- Remove Seaborn from manual installation dependencies in Quickstart docs via PR #22035 by onuralpszr.
- Catch all exceptions during Intel GPU discovery in PR #22034 by Y-T-G.
- Add Business Value section to Construction-PPE dataset docs in PR #22029 by UltralyticsAbi.
- Reset labels cache when loading with incompatible NumPy version in PR #22025 by Y-T-G.
- Faster length retrieval for `torch.Tensor` using `shape[0]` in PR #22021 by Laughing-q.
- Clean up and unify `plot_results` across detect/segment/pose/obb/classify in PR #22026 by Laughing-q.
- Fix mask shape mismatch when validating with `mask_ratio=1` in PR #22037 by Laughing-q.
- Improve Tuner with BLX-α gene crossover in PR #22038 by glenn-jocher.
New Contributors
- We’re excited to welcome daniel-mooney, who contributed via PR #22028.
- A warm welcome to UltralyticsAbi, who contributed via PR #22029.
Try it and Share Feedback
- You can explore the complete diff in the full changelog for v8.3.198, and if you spot anything, please start a conversation in Ultralytics Discussions or open an issue on the repository.
- Huge thanks to our community and contributors for making each release better—your feedback helps us refine YOLO for everyone.
