New Release: Ultralytics v8.3.192

Ultralytics v8.3.192: Distributed Hyperparameter Tuning, Faster Pipelines, and Smarter ETAs :rocket:

A quick heads-up for busy readers: v8.3.192 introduces distributed hyperparameter tuning with optional MongoDB Atlas coordination, steadier progress ETAs, faster GPU data transfers, and more reliable CSV parsing. The documentation now also explains why legacy YOLOv5 models are not compatible with the Ultralytics library. YOLO11 remains our latest stable and recommended model for all use cases. :raising_hands:

Highlights :sparkles:

  • Scalable HPO across machines with optional MongoDB Atlas
  • More stable progress bars with improved ETA
  • Non-blocking GPU transfers across tasks for better throughput
  • Safer, more accurate CSV parsing for plots and metrics
  • Clear guidance for users coming from ultralytics/yolov5

New Features

  • Distributed Hyperparameter Tuning
    • Coordinate tuning across machines using MongoDB Atlas to share runs and results.
    • New arguments: mongodb_uri, mongodb_db (default ultralytics), and mongodb_collection (default tuner_results).
    • Shared-state logic reads best runs from MongoDB, writes new runs back, and syncs to CSV for plotting and resume.
    • Early stopping triggers when the shared collection reaches the target iterations.
    • Stronger mutation logic, safer bounds, and more robust resume behavior.
    • Updated docs and examples to help you get started quickly, including the Hyperparameter Tuning guide.

Improvements

  • Progress ETA (tqdm) :stopwatch:
    • More stable remaining-time estimates reduce jitter in progress bars for a smoother training experience.
  • Non-blocking GPU transfers :high_voltage:
    • Applied .to(device, non_blocking=True) across train/val for Classification, Detection, Pose, Segmentation, YOLO-World (text embeddings), and YOLOE (text/visual prompts) to reduce data-transfer bottlenecks and improve GPU utilization.
  • CSV parsing reliability :chart_increasing:
    • Polars now infers schema from entire files by setting infer_schema_length=None, minimizing dtype mix-ups across dataset conversion, training results, and plotting. This also benefits exports generated from Ultralytics HUB.

Docs Update

  • Legacy YOLOv5 compatibility warning :warning:
    • Models trained in ultralytics/yolov5 are not compatible with the Ultralytics library. The documentation now clarifies the differences and migration considerations, including the anchor-free YOLOv5u variant maintained in Ultralytics.

Quick Start: Distributed Tuning on YOLO11

from ultralytics import YOLO

# Load a YOLO11 model
model = YOLO("yolo11n.pt")

# Tune hyperparameters; every machine pointed at the same MongoDB
# collection shares trial results and stops when the shared
# collection reaches the target iteration count
model.tune(
    data="coco8.yaml",
    epochs=10,
    iterations=300,
    mongodb_uri="mongodb+srv://user:pass@cluster.mongodb.net/",
    mongodb_db="ultralytics",
    mongodb_collection="tuner_results",
    plots=False,
    save=False,
    val=False,
)

Upgrade with pip install -U ultralytics and start tuning with YOLO11 for best results.

Call for Feedback :speech_balloon:

Please upgrade, give distributed tuning a spin on YOLO11, and let us know how it works for your workflows. Share results, suggestions, and issues so we can keep refining the experience together. Huge thanks to the YOLO community and the Ultralytics team for making this release possible!