YOLOv11m Model Missing Objects at High Confidence Thresholds Compared to YOLOv7 – Need Help Optimizing Detection

I’ve recently upgraded my object detection project from YOLOv7 to YOLOv11m to take advantage of the newer model’s capabilities. With YOLOv7, a confidence threshold of 0.7 gave good accuracy and detected all relevant objects on my custom dataset.

I then trained the YOLOv11m model on the same dataset, starting from the pretrained weights (yolov11m.pt), and set the confidence threshold (conf) to 0.7 and the IoU threshold (iou_thresh) to 0.45 for testing. However, with YOLOv11m, many objects that YOLOv7 detected easily are now missed, even when I test on the training dataset.

Only when I reduce the confidence threshold to 0.3-0.4 does YOLOv11m begin to detect objects.
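For reference, this is roughly how I’m running inference (a minimal sketch using the Ultralytics Python API; the weights and image paths below are placeholders for my actual files):

```python
from ultralytics import YOLO

# Load my fine-tuned YOLO11m weights (placeholder path)
model = YOLO("runs/detect/train/weights/best.pt")

# At conf=0.7 many objects that YOLOv7 found are missed
results_high = model.predict("sample.jpg", conf=0.7, iou=0.45)

# Lowering conf to 0.3-0.4 brings the detections back, but with more noise
results_low = model.predict("sample.jpg", conf=0.35, iou=0.45)

# Inspect the confidence scores the model actually assigns to its boxes
for r in results_high:
    print(r.boxes.conf)
```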

Why does YOLOv11m fail to detect objects at a high confidence threshold like 0.7, when YOLOv7 was able to do so without issues? Are there any adjustments or techniques I can use to improve YOLOv11m’s detection performance at higher confidence levels?

Additional Details:

  • Dataset: Same dataset used in YOLOv7 and YOLOv11m training.
  • Model: YOLOv11m trained with yolov11m.pt pretrained weights.
  • Testing Configuration: conf = 0.7 (misses objects), iou_thresh = 0.45.
  • Current Solution: Lowering conf to 0.3-0.4, which allows detection but with more noise.

Any insights or optimization tips to make YOLOv11m detect more accurately at conf = 0.7 would be greatly appreciated!

When the model architecture changes, it’s possible that a model which performs better on the COCO dataset performs worse on a custom dataset. There are many reasons this could occur, and knowing for certain why would take a significant amount of investigation.

There are a few things you could try:

  1. Try using a different model scale, such as YOLO11s or YOLO11l.
  2. If objects are relatively small (compared to the overall image area), try increasing the inference imgsz, which defaults to imgsz=640; something like imgsz=800 could help. Conversely, if objects are large relative to the image area, try a lower value for imgsz instead.
  3. Consider using the pretrained model as a starting point and training it on your custom dataset. In most cases this step is likely necessary, as most use cases differ in subtle ways from the data the pretrained model was trained on.
  4. Consider adjusting both conf and iou to better fit your use case. The defaults are conf=0.25 and iou=0.7, which are outlined in this table from the docs. A combined sketch for points 2–4 follows this list.
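A minimal sketch of what points 2–4 could look like in code, i.e. fine-tuning from the pretrained checkpoint and then predicting with adjusted imgsz, conf, and iou (the dataset YAML, epoch count, and paths are placeholders you would replace with your own):

```python
from ultralytics import YOLO

# Start from the pretrained checkpoint and fine-tune on the custom dataset
model = YOLO("yolo11m.pt")  # or yolo11s.pt / yolo11l.pt to try a different scale
model.train(
    data="custom_dataset.yaml",  # placeholder: your dataset config
    epochs=100,                  # placeholder: adjust to your dataset
    imgsz=800,                   # larger imgsz if objects are small relative to the image
)

# Predict with thresholds tuned to the use case instead of the defaults
# (the documented defaults are conf=0.25 and iou=0.7)
results = model.predict(
    "path/to/images",            # placeholder
    imgsz=800,                   # keep inference imgsz consistent with training
    conf=0.5,                    # somewhere between 0.3-0.4 (detects) and 0.7 (misses)
    iou=0.45,
)
```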

Thank you, @BurhanQ, for your reply.

  1. Try using a different model scale, such as YOLO11s or YOLO11l. - I first tried YOLO11m and then YOLO11n, both at the same image size (640), and got the same result (sketch below this list).
  2. If objects are relatively small - The objects are not small; they are people, crowds, bikes, cars, …
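To clarify how I switched scales while keeping the same image size, this is roughly what I did (the dataset YAML, epochs, and paths are placeholders for my actual setup):

```python
from ultralytics import YOLO

# The model scale comes from the checkpoint; imgsz is set separately at train/predict time
for weights in ("yolo11m.pt", "yolo11n.pt"):  # the two scales I tried
    model = YOLO(weights)
    model.train(data="custom_dataset.yaml", epochs=100, imgsz=640)  # placeholder config
    model.predict("path/to/test/images", conf=0.7, iou=0.45)        # same result: misses at conf=0.7
```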