Hi guys, I have encountered a problem: my val_batch_pred is showing correct results while val_batch_labels is not.
This looks like the ground truth annotations are not correct. Have you verified that they use the correct YOLO format? See Object Detection Datasets Overview - Ultralytics YOLO Docs.
Yes, they seem to be correct. Here is my training code:
import torch
from ultralytics import YOLO

def main():
    torch.cuda.set_device(0)
    model = YOLO("yolov8n.yaml").load("yolov8n.pt")
    # device=0 selects the first CUDA GPU; 'gpu' is not a valid device value
    results = model.train(data='config.yaml', epochs=250, imgsz=640, device=0, batch=64, dropout=0.2, weight_decay=0.002, lr0=0.001, lrf=0.001)

if __name__ == "__main__":
    main()
The val_batch_labels are the labels from your ground truth annotations. I just tested with the coco8.yaml dataset and the labels are okay, which likely means there is no issue with the annotation plotting. This would mean that the val_batch_labels are likely showing you that your ground truth annotations are incorrect and need to be fixed.
You can perform the same test I did with the following code:
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
result = model.train(data="coco8.yaml", epochs=3)
Then navigate to the runs/detect/train directory shown at the end of training and inspect the val_batch_labels.png (there will only be one). If it doesn't look like the one above, then you should consider upgrading your version of ultralytics, as there may have been a bug in the version you have. My test was run using ultralytics==8.2.63.
thank you so much!
Are val_batch_labels the exact labels that I have put on the images for validation?
Yes, val_batch_labels.jpg shows the labels for the data in images/val and labels/val, whereas val_batch_pred.jpg shows the model predictions.
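For reference, the conventional Ultralytics dataset layout pairs each validation image with a same-named label file (directory names here are illustrative; the actual paths come from your config.yaml):

```
dataset/
├── images/
│   ├── train/
│   └── val/        # images plotted in val_batch_labels.jpg
└── labels/
    ├── train/
    └── val/        # one .txt per image, same base name as the image
```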