How to monitor YOLO classification training progress in real time (TensorBoard not working)

Hi,
I’m training a YOLO classification model (yolo11n-cls) like this:

from ultralytics import YOLO

model = YOLO("yolo11n-cls.pt")  # pretrained classification checkpoint

results = model.train(
    data=base_dir,        # Dataset root directory
    epochs=150,           # Number of epochs
    imgsz=imgsz,          # Image size
    patience=30,          # Early stopping patience
    batch=batch,          # Batch size
    device=device,        # GPU device
    project=PROJECT,      # Project folder
    name=NAME,            # Run name
    cos_lr=True           # Cosine learning-rate schedule
)

I know I can check results.csv or the generated spreadsheet after training, but I’d like to monitor the metrics while the training is still running — accuracy, loss, etc.

TensorBoard doesn’t seem to work for classification training (I get 'tensorboard' is not a valid YOLO argument, and no .tfevents files are generated during training, even though I specified !yolo settings tensorboard=True).

  • Is there a way to see the progress in real time?

  • If I stop training early, will the results.csv (or other logs) still be complete, or will I only get best.pt?

  • What’s the recommended approach to decide whether to interrupt training or let it run to the end?


Hello, thanks for reaching out with your questions. It’s great to hear you’re diving into training classification models.

To address the issue with TensorBoard, please ensure you are using the latest version of the ultralytics package by running pip install -U ultralytics. TensorBoard is fully supported for all tasks, including classification.

The error message you’re seeing suggests you might be passing tensorboard as an argument to the .train() method, which is not a valid parameter. The correct approach is to enable it globally before running your script using the command yolo settings tensorboard=True. Once enabled, you should see a message at the start of your training prompting you to launch TensorBoard from your terminal. Our guide on gaining visual insights with YOLO11’s integration with TensorBoard provides detailed steps for setup and usage.
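For reference, the whole sequence from a terminal looks like this (the run directory below is an example; yours depends on your project and name arguments):

```shell
# Enable the TensorBoard callback once; the setting persists in settings.json
yolo settings tensorboard=True

# Start training as usual, then point TensorBoard at the run directory
# that the training logs print (example path shown here)
tensorboard --logdir runs/classify/train
```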

Regarding your other questions:

  1. If you interrupt training, the results.csv file will be saved with the complete data for all epochs that have finished running.
  2. Your use of the patience argument is the recommended approach for deciding when to stop training. It automatically ends the process when the validation metrics no longer improve, preventing overfitting. By monitoring these metrics in real-time with a tool like TensorBoard, you can visually identify when the validation accuracy curve begins to plateau or decline, confirming that the model has reached its optimal state.
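To make the plateau logic concrete, here is a minimal sketch of the rule that a patience-style early stop implements. This is an illustration of the idea, not Ultralytics' actual implementation; the function name and signature are my own:

```python
def should_stop(metric_history, patience=30):
    """Return True once the best value is more than `patience` epochs old.

    `metric_history` holds one fitness value per completed epoch
    (higher is better, e.g. top-1 accuracy). The rule mirrors the idea
    behind the patience argument: stop when there has been no
    improvement for `patience` consecutive epochs.
    """
    if not metric_history:
        return False
    # Index of the epoch with the best metric so far
    best_epoch = max(range(len(metric_history)),
                     key=metric_history.__getitem__)
    # Stop when the best result is at least `patience` epochs behind us
    return (len(metric_history) - 1 - best_epoch) >= patience
```

With patience=30, training continues as long as the best validation metric is fewer than 30 epochs old, which is why a run can keep going well past its last visible improvement.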

I hope this helps clarify things! Let us know if you have any other questions.

At a minimum, there should be a printout in the terminal that shows the metrics for each epoch during training.

Additionally, you can use several other logging platforms if you prefer. Check the docs here: Model Training with Ultralytics YOLO - Ultralytics YOLO Docs, and take a look at the integrations page for more details on each: Ultralytics Integrations - Ultralytics YOLO Docs

I tested it and TensorBoard logging does work with classification training. The training logs also show how to open the TensorBoard page before the training starts:

TensorBoard: Start with 'tensorboard --logdir /ultralytics/runs/classify/train', view at http://localhost:6006/

You don’t prefix that command with yolo; that’s why you got the error. Run it exactly as it’s shown in the logs.

Also, after running !yolo settings tensorboard=True in a notebook, you need to restart the kernel for the change to take effect. It won’t apply if you run it after Ultralytics has already been imported.