A strange phenomenon occurred while I was training YOLOv8's classification model. My samples are split into two classes, smiley faces and non-smiley faces, with 110,000 images in each class. The samples have been carefully checked: they are almost perfectly clean, with essentially no mislabeled or unrelated images mixed in. The smallest image in the dataset is no less than 29 pixels, and the largest no more than 1080 pixels. By the 11th epoch of training, the top-1 accuracy was 0.97. But when I ran inference with the trained model, it failed to recognize either smiley faces or non-smiley faces. When I previously trained with 20,000 images the results were fine, and the inference code was the same. To improve the model, I increased the dataset to 110,000 per class, still using the same inference code as before. What could be the reason for this? I'm open to any suggestions, thank you very much!
Here’s my code:
from ultralytics import YOLO
if __name__ == '__main__':
    model_path = "/root/yolo_train/yolov8s-cls.pt"
    model = YOLO(model_path)
    model.train(
        # Dataset path, e.g. coco128.yaml; for classification tasks you can
        # point directly at the dataset folder
        data='/root/datasets/Smile_data',
        imgsz=224,    # image size
        device=0,     # device to run on
        batch=128,    # images per batch (-1 = automatic batch size)
        scale=0.3,    # augmentation: random scale
        degrees=0.2,  # augmentation: random rotation
        hsv_s=0.3,    # augmentation: saturation jitter
        hsv_v=0.3,    # augmentation: value jitter
        epochs=40,    # number of training epochs
        flipud=0.3,   # augmentation: vertical flip probability
    )
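Since my inference code is not shown above, here is roughly what it does; this is a minimal sketch assuming the Ultralytics classification `predict` API, with the weights path and image path as hypothetical placeholders:

```python
def top1_label(names, probs):
    """Map the highest-probability class index to its class name.
    names: dict mapping index -> class name (as in Ultralytics Results.names)
    probs: list of per-class probabilities
    """
    top = max(range(len(probs)), key=probs.__getitem__)
    return names[top]

def run_inference(weights_path, image_path):
    # Assumes the ultralytics package is installed; import is deferred so the
    # helper above stays usable without it.
    from ultralytics import YOLO
    model = YOLO(weights_path)
    # imgsz should match the training setting (224 here)
    result = model.predict(image_path, imgsz=224)[0]
    return top1_label(result.names, result.probs.data.tolist())

# Example call (both paths are hypothetical):
# print(run_inference("runs/classify/train/weights/best.pt", "face.jpg"))
```

One thing worth checking in a setup like this is that the class-index-to-name mapping used at inference matches the folder order the trainer saw, and that the image size matches training.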