Ultralytics v8.4.30 Released 
Quick summary: Ultralytics v8.4.30 is a focused stability release that makes training resume much more reliable after interruptions, crashes, or restarts. There are no new model architectures in this release, but if you train frequently, especially on long-running jobs, this update should make your workflows more dependable.
Why this release matters
This release improves how Ultralytics YOLO restores training state from checkpoints when using resume. The goal is simple: fewer failed restarts and more predictable recovery behavior.
That makes v8.4.30 especially useful for:
- Long-running training jobs on GPU servers
- Cloud and preemptible environments
- Automated pipelines and scheduled retraining
- Teams that depend on checkpoint-based recovery
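Conceptually, checkpoint-based recovery means the run's configuration is persisted alongside the weights, and resuming rebuilds the runtime arguments from that record. A minimal sketch of the idea, using hypothetical names and a JSON file as a stand-in for the real checkpoint (not the Ultralytics internals):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sketch: a checkpoint records the training arguments alongside
# the (omitted here) model weights, and resuming rebuilds the run
# configuration from that record.
def save_checkpoint(path: Path, train_args: dict) -> None:
    """Persist the training arguments next to the model weights."""
    path.write_text(json.dumps({"train_args": train_args}))

def resume_args(path: Path) -> dict:
    """Rebuild the runtime arguments from the saved checkpoint."""
    ckpt = json.loads(path.read_text())
    return ckpt["train_args"]

with tempfile.TemporaryDirectory() as tmp:
    last = Path(tmp) / "last.json"
    save_checkpoint(last, {"epochs": 100, "imgsz": 640, "data": "coco8.yaml"})
    restored = resume_args(last)
    print(restored["imgsz"])  # 640
```

The reliability improvements in this release are about making exactly this restore step happen earlier and fail less often.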
New Features
While this is not a feature-heavy release, it includes an important training reliability upgrade:
- Improved resume flow so checkpoint arguments are restored earlier and more consistently
- Safer dataset fallback when a saved checkpoint dataset path is no longer valid
- Preserved override support for practical resume-time settings like `imgsz`, `batch`, `device`, `workers`, `cache`, `freeze`, `val`, and `plots`
- Clear warning behavior retained for custom augmentations, which still need to be passed again manually when resuming
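The override behavior above can be pictured as a simple merge: the checkpoint's saved arguments are the source of truth, and only a whitelist of practical runtime settings may be overridden at resume time. A minimal sketch with illustrative names (not the actual trainer code):

```python
# Hypothetical sketch of resume-time override handling: checkpoint arguments
# are restored first, then a small whitelist of practical settings may be
# overridden by the caller. Names are illustrative, not Ultralytics internals.
RESUME_OVERRIDES = {"imgsz", "batch", "device", "workers", "cache", "freeze", "val", "plots"}

def merge_resume_args(ckpt_args: dict, user_args: dict) -> dict:
    """Start from the checkpoint's saved args, then apply allowed overrides."""
    merged = dict(ckpt_args)  # checkpoint config is the source of truth
    for key, value in user_args.items():
        if key in RESUME_OVERRIDES:
            merged[key] = value  # practical runtime setting: allow override
        # Other keys (e.g. custom augmentations) are ignored here; they must
        # be passed again explicitly, matching the warning behavior described.
    return merged

saved = {"epochs": 300, "imgsz": 640, "batch": 16, "data": "coco8.yaml"}
requested = {"batch": 8, "epochs": 50}  # epochs is not a resume-time override
print(merge_resume_args(saved, requested))
# {'epochs': 300, 'imgsz': 640, 'batch': 8, 'data': 'coco8.yaml'}
```

Here the `batch` override is honored while the `epochs` change is dropped, since only the listed runtime settings survive a resume.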
Improvements
More robust checkpoint restoration 
The main change in this release comes from PR #24027 by @glenn-jocher, which refactors resume handling in `trainer.py`.
Highlights include:
- Resume now loads checkpoint configuration from `last.pt` earlier in the flow
- Runtime arguments are rebuilt more reliably from the checkpoint
- If a dataset path stored in the checkpoint is invalid, the current `data` argument is used as a fallback instead of failing unexpectedly
This should significantly reduce resume-related issues in interrupted training runs.
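The dataset fallback described above amounts to a small existence check: prefer the path recorded in the checkpoint, and only if it is stale or missing use the `data` argument from the current run. A minimal sketch with hypothetical names (not the actual `trainer.py` implementation):

```python
from pathlib import Path

# Hypothetical sketch of the dataset fallback: prefer the dataset path
# recorded in the checkpoint, but if that path no longer exists, fall back
# to the `data` argument supplied for the current run instead of failing.
def resolve_data(ckpt_data, current_data):
    if ckpt_data and Path(ckpt_data).exists():
        return ckpt_data  # saved path is still valid: keep training on it
    return current_data   # stale or missing path: use the current argument

# A path that almost certainly does not exist triggers the fallback.
print(resolve_data("/nonexistent/old/coco8.yaml", "coco8.yaml"))  # coco8.yaml
```

This is the kind of check that matters when a job is restarted on a different machine or container, where the original dataset path may no longer resolve.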
Bug Fixes
- Fixed training resume behavior to better restore saved training arguments
- Reduced failures caused by stale or missing dataset paths in checkpoints
- Improved continuity for restarted jobs in automated or remote environments
Version Update
- Package version bumped from `8.4.29` to `8.4.30`
Impact for users
If you are training Ultralytics YOLO models regularly, this release should help make restart behavior smoother and more predictable.
In particular:
- Researchers get fewer disruptions during long experiments
- Production teams get more reliable recovery in training pipelines
- Platform users benefit from better continuity across cloud and local workflows, including projects managed through the Ultralytics Platform
For new projects, we continue to recommend Ultralytics YOLO26, our latest stable model family. YOLO11 remains fully supported, but YOLO26 is the best default choice for new training and deployment workflows.
Try it out
Upgrade with `pip install -U ultralytics`.
Then resume training as usual and verify the improved recovery behavior in your environment.
Links
You can explore the full details in the v8.4.30 release page, review all changes in the full changelog from v8.4.29 to v8.4.30, and see the main fix in PR #24027 by @glenn-jocher.
Feedback
Please give v8.4.30 a try and let us know how it works for your training workflows. If this update improves your resume experience, or if you spot any edge cases, we'd love to hear from the community.