I’m using a custom training callback with the model.train() API and would like to access the same training logs/metrics that are printed or logged during training (e.g., loss values, metrics per epoch/iteration).
At the moment, the callback system seems to only expose event hooks (e.g., on_train_epoch_end, on_fit_epoch_end, etc.), but not the actual training logs or metrics dictionary that Ultralytics internally generates and prints.
What I’m trying to do
I want to log or process training statistics (e.g., losses, per-epoch metrics, mAP scores) inside a custom callback, for example to:
- Send metrics to an external logger
- Store custom per-epoch analytics
- Trigger logic based on loss trends
Current limitation
Inside the callback, I don’t see a clean way to access the same values that appear in the training output, such as the per-epoch losses and validation metrics. It appears that the hooks only receive the trainer object, with no ready-made logs/metrics dictionary exposed.
You can get the same numbers that show up in the console from inside a callback — Ultralytics already does this in the built-in logger integrations by reading fields off the trainer object and composing a metrics dict.
At epoch end, the “current log dict” is effectively:
- train losses: `trainer.label_loss_items(trainer.tloss, prefix="train")`
- val metrics (mAP, etc.): `trainer.metrics`
- learning rates: `trainer.lr`
This is exactly what the DVCLive callback does in on_fit_epoch_end, and the ClearML/W&B callbacks follow the same pattern (see the clearml.py reference and wb.py reference).
```python
from ultralytics import YOLO

def on_fit_epoch_end(trainer):
    # after validation, so trainer.metrics is populated
    metrics = {
        **trainer.label_loss_items(trainer.tloss, prefix="train"),
        **trainer.metrics,
        **trainer.lr,
    }
    # send metrics wherever you want
    print(metrics)

model = YOLO("yolo11n.pt")
model.add_callback("on_fit_epoch_end", on_fit_epoch_end)
model.train(data="coco8.yaml", epochs=3)
```
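For the "send metrics to an external logger / store per-epoch analytics" part of your question, you can just forward that same dict from the callback. Here's a minimal sketch that keeps an in-memory history and appends one JSON line per epoch; the file name, the history list, and the default=float coercion are my own choices, not anything Ultralytics prescribes:

```python
import json

from ultralytics import YOLO

history = []  # per-epoch analytics kept in memory

def log_metrics(trainer):
    metrics = {
        "epoch": trainer.epoch,
        **trainer.label_loss_items(trainer.tloss, prefix="train"),
        **trainer.metrics,
        **trainer.lr,
    }
    history.append(metrics)
    # append one JSON line per epoch; swap this for any external logger call
    with open("train_metrics.jsonl", "a") as f:
        f.write(json.dumps(metrics, default=float) + "\n")

model = YOLO("yolo11n.pt")
model.add_callback("on_fit_epoch_end", log_metrics)
model.train(data="coco8.yaml", epochs=3)
```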
If you need per-iteration values, use a batch hook like on_train_batch_end and read whatever the trainer exposes at that point (it’s more “internal” and may change across versions), but for stable logging I’d recommend sticking to on_train_epoch_end / on_fit_epoch_end.
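For completeness, a minimal per-batch sketch; note that trainer.tloss is an internal running-mean loss tensor rather than a documented public attribute, so the exact fields used here are an assumption and may change between versions:

```python
from ultralytics import YOLO

def on_train_batch_end(trainer):
    # Running mean of the train losses for the current epoch
    # (internal attribute; not a stable public API).
    losses = trainer.label_loss_items(trainer.tloss, prefix="train")
    print(losses)

model = YOLO("yolo11n.pt")
model.add_callback("on_train_batch_end", on_train_batch_end)
model.train(data="coco8.yaml", epochs=1)
```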
Also worth confirming you’re on the latest package since callback event coverage has improved over time: pip install -U ultralytics.