Ultralytics v8.3.192: Distributed Hyperparameter Tuning, Faster Pipelines, and Smarter ETAs
A quick heads-up for busy readers: v8.3.192 introduces distributed hyperparameter tuning with optional MongoDB Atlas coordination, steadier progress ETAs, faster GPU data transfers, and more reliable CSV parsing. Documentation also now clearly explains why legacy YOLOv5 models are not compatible with the Ultralytics library. YOLO11 remains our latest stable and recommended model for all use cases.
Highlights 
- Scalable HPO across machines with optional MongoDB Atlas
- More stable progress bars with improved ETA
- Non-blocking GPU transfers across tasks for better throughput
- Safer, more accurate CSV parsing for plots and metrics
- Clear guidance for users coming from ultralytics/yolov5
New Features
- Distributed Hyperparameter Tuning
- Coordinate tuning across machines using MongoDB Atlas to share runs and results.
- New arguments: `mongodb_uri`, `mongodb_db` (default `ultralytics`), and `mongodb_collection` (default `tuner_results`).
- Shared-state logic reads best runs from MongoDB, writes new runs back, and syncs to CSV for plotting and resume.
- Early stopping triggers when the shared collection reaches the target `iterations`.
- Stronger mutation logic, safer bounds, and more robust resume behavior.
- Updated docs and examples to help you get started quickly, including the Hyperparameter Tuning guide.
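The coordination loop behind this feature is simple to picture: each worker reads the best runs from the shared collection before mutating hyperparameters, and stops once the collection holds the target number of iterations. A minimal standard-library sketch of that selection and early-stop logic (a plain list stands in for a real MongoDB collection, and the field names are illustrative, not the library's actual schema):

```python
def best_runs(collection, n=5):
    """Return the top-n runs by fitness from the shared collection."""
    return sorted(collection, key=lambda run: run["fitness"], reverse=True)[:n]


def should_stop(collection, iterations):
    """Early-stop once the shared collection reaches the target iteration count."""
    return len(collection) >= iterations


# Simulated shared state: each worker appends one document per tuning run.
shared = [
    {"iteration": 1, "fitness": 0.41, "hyp": {"lr0": 0.010}},
    {"iteration": 2, "fitness": 0.47, "hyp": {"lr0": 0.008}},
    {"iteration": 3, "fitness": 0.44, "hyp": {"lr0": 0.012}},
]

top = best_runs(shared, n=2)
print(top[0]["hyp"])                      # best hyperparameters seed the next mutation
print(should_stop(shared, iterations=300))  # keep tuning until the target is reached
```

In the real feature, MongoDB Atlas plays the role of `shared`, so every machine mutates from the same global best runs rather than only its own history.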
Improvements
- Progress ETA (tqdm)
- More stable remaining-time estimates reduce jitter in progress bars for a smoother training experience.
- Non-blocking GPU transfers
- Applied `.to(device, non_blocking=True)` across train/val for Classification, Detection, Pose, Segmentation, YOLO-World (text embeddings), and YOLOE (text/visual prompts) to reduce data-transfer bottlenecks and improve GPU utilization.
- CSV parsing reliability
- Polars now infers schema from entire files via `infer_schema_length=None`, minimizing dtype mix-ups across dataset conversion, training results, and plotting. This also benefits exports generated from Ultralytics HUB.
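The steadier ETAs come from damping the rate estimate rather than reporting it raw: tqdm-style progress bars blend the latest items-per-second sample with a running value via an exponential moving average. A minimal sketch of that smoothing (the `alpha` value is illustrative, not the exact constant tqdm uses):

```python
def ema_rate(rate, avg=None, alpha=0.3):
    """Exponentially smoothed rate for remaining-time estimates.

    alpha near 1 tracks the instantaneous rate (jittery ETA);
    alpha near 0 approaches the long-run average (sluggish ETA).
    """
    return rate if avg is None else alpha * rate + (1 - alpha) * avg


avg = None
for rate in [100.0, 40.0, 160.0, 90.0]:  # noisy items/sec samples
    avg = ema_rate(rate, avg)

print(round(avg, 2))  # smoothed rate that feeds the ETA calculation
```

Dividing the remaining item count by this smoothed rate gives an ETA that no longer jumps around with every noisy batch timing.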
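For the non-blocking transfers, the underlying PyTorch pattern is to request an asynchronous host-to-device copy so it can overlap with GPU compute; the overlap only actually happens when the source tensor sits in pinned host memory (e.g. a `DataLoader` built with `pin_memory=True`). A minimal sketch, assuming PyTorch is installed (on a CPU-only machine the flag is simply a no-op):

```python
import torch


def batch_to_device(batch, device):
    # non_blocking=True makes the H2D copy asynchronous, letting it
    # overlap with compute when the source tensor is in pinned
    # (page-locked) memory; otherwise it silently falls back to a
    # synchronous copy.
    return {k: v.to(device, non_blocking=True) for k, v in batch.items()}


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch = {"img": torch.zeros(2, 3, 8, 8), "cls": torch.tensor([0, 1])}
batch = batch_to_device(batch, device)
print(batch["img"].device.type)
```

The release applies this pattern across the train/val loaders listed above, so data movement stops serializing with model execution.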
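The CSV fix matters because inferring a column's dtype from only the first rows can mis-type values that appear later, e.g. a metrics column that happens to hold whole numbers early in training. A standard-library sketch of that failure mode (Polars itself is not used here; `infer_schema_length=None` is what tells Polars to scan the whole file instead of a prefix):

```python
import csv
import io

# 100 integer-looking loss values followed by one genuine float.
data = "epoch,loss\n" + "\n".join(f"{i},1" for i in range(100)) + "\n100,0.5\n"


def infer_dtype(rows, column, n=None):
    """Infer int vs float from the first n rows (n=None scans all rows)."""
    sample = rows if n is None else rows[:n]
    try:
        [int(row[column]) for row in sample]
        return "int"
    except ValueError:
        return "float"


rows = list(csv.DictReader(io.StringIO(data)))
print(infer_dtype(rows, "loss", n=50))    # prefix scan: every sample looks like an int
print(infer_dtype(rows, "loss", n=None))  # full scan catches the "0.5" at the end
```

Scanning the whole file costs a little parse time but removes an entire class of dtype errors from results plotting and dataset conversion.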
Docs Update
- Legacy YOLOv5 compatibility warning
- Models trained in ultralytics/yolov5 are not compatible with the Ultralytics library. The documentation now clarifies the differences and migration considerations, including the anchor-free YOLOv5u variant maintained in Ultralytics.
Quick Start: Distributed Tuning on YOLO11
```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.tune(
    data="coco8.yaml",
    epochs=10,
    iterations=300,
    mongodb_uri="mongodb+srv://user:pass@cluster.mongodb.net/",
    mongodb_db="ultralytics",
    mongodb_collection="tuner_results",
    plots=False,
    save=False,
    val=False,
)
```
Upgrade with `pip install -U ultralytics` and start tuning with YOLO11 for best results.
What's Changed
- Improved CSV reading reliability by having Polars infer schema from the entire file (`infer_schema_length=None`), contributed in PR #21909 by @onuralpszr. Review the change in the detailed discussion found under improve CSV reading performance.
- Non-blocking GPU transfers added for additional train/val loaders in PR #21912 by @glenn-jocher. You can explore the implementation in the conversation titled add non_blocking=True across loaders.
- Documentation now warns that legacy YOLOv5 models are not compatible with Ultralytics, added in PR #21915 by @Y-T-G. The change is outlined in add YOLOv5 compatibility warning.
- Distributed Hyperparameter Tuning with MongoDB Atlas integration shipped in PR #21882 by @glenn-jocher. Dive into the full feature details in Distributed HPO with MongoDB Atlas.
Resources
- To set up and scale tuning across machines, follow the steps in the Hyperparameter Tuning guide.
- If you are migrating from ultralytics/yolov5, review the YOLOv5 compatibility note to avoid loading errors and plan a smooth transition.
- For a deeper dive, read the v8.3.192 release notes, and explore every commit in the compare view from v8.3.191 to v8.3.192.
Call for Feedback 
Please upgrade, give distributed tuning a spin on YOLO11, and let us know how it works for your workflows. Share results, suggestions, and issues so we can keep refining the experience together. Huge thanks to the YOLO community and the Ultralytics team for making this release possible!