Issues with Low mAP Scores During COCO Evaluation Using YOLOv8 ONNX Models

Hello,

I am new to YOLO and currently working on a project using YOLOv8 ONNX models. My first step was to run inference on the standard COCO validation dataset to obtain mAP scores. For this, I used the YOLOv8-ONNXRuntime example provided by Ultralytics (ultralytics/examples/YOLOv8-ONNXRuntime/main.py at main · ultralytics/ultralytics · GitHub). I made some modifications to the code to perform inference on the COCO validation set, which is shared below.

I tested the modified code by drawing bounding boxes on sample images from the COCO dataset, and the results looked good. However, when I performed COCO evaluation, the mAP scores I obtained were unexpectedly low. I am unsure where the issue lies, although I believe I am formatting the detections correctly in COCO format. Any guidance or assistance would be greatly appreciated.
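
For reference, each entry in the detections list that the script below writes to JSON is meant to follow the pycocotools results format, with bbox given as [x, y, width, height] in pixels of the original image. A single entry looks roughly like this (the values are made up, just to illustrate the structure):

# One illustrative entry from the detections JSON (values invented for illustration only)
example_detection = {
    "image_id": 139,                        # COCO image ID of the evaluated image
    "category_id": 63,                      # category ID expected by the annotations file
    "bbox": [412.8, 157.6, 53.1, 138.0],    # [x, y, width, height] in original-image pixels
    "score": 0.87,                          # detection confidence
}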

# Adopted from https://github.com/ultralytics/ultralytics/blob/main/examples/YOLOv8-ONNXRuntime/main.py


import argparse
import os
import cv2
import numpy as np
import onnxruntime as ort
import torch
from pathlib import Path
from ultralytics.utils import ASSETS, yaml_load
from ultralytics.utils.checks import check_requirements, check_yaml

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import json

class YOLOv8:
    """YOLOv8 object detection model class for handling inference and visualization."""

    def __init__(self, onnx_model, input_image, confidence_thres, iou_thres):
        """
        Initializes an instance of the YOLOv8 class.

        Args:
            onnx_model: Path to the ONNX model.
            input_image: Path to the input image.
            confidence_thres: Confidence threshold for filtering detections.
            iou_thres: IoU (Intersection over Union) threshold for non-maximum suppression.
        """
        self.onnx_model = onnx_model
        self.input_image = input_image
        self.confidence_thres = confidence_thres
        self.iou_thres = iou_thres

        # Load the class names from the COCO dataset
        self.classes = yaml_load(check_yaml("coco8.yaml"))["names"]

        # Generate a color palette for the classes
        self.color_palette = np.random.uniform(0, 255, size=(len(self.classes), 3))

    def draw_detections(self, img, box, score, class_id):
        """
        Draws bounding boxes and labels on the input image based on the detected objects.

        Args:
            img: The input image to draw detections on.
            box: Detected bounding box.
            score: Corresponding detection score.
            class_id: Class ID for the detected object.

        Returns:
            None
        """
        # Extract the coordinates of the bounding box
        x1, y1, w, h = box

        # Retrieve the color for the class ID
        color = self.color_palette[class_id]

        # Draw the bounding box on the image
        cv2.rectangle(img, (int(x1), int(y1)), (int(x1 + w), int(y1 + h)), color, 2)

        # Create the label text with class name and score
        label = f"{self.classes[class_id]}: {score:.2f}"

        # Calculate the dimensions of the label text
        (label_width, label_height), _ = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1)

        # Calculate the position of the label text
        label_x = x1
        label_y = y1 - 10 if y1 - 10 > label_height else y1 + 10

        # Draw a filled rectangle as the background for the label text
        cv2.rectangle(
            img, (label_x, label_y - label_height), (label_x + label_width, label_y + label_height), color, cv2.FILLED
        )

        # Draw the label text on the image
        cv2.putText(img, label, (label_x, label_y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1, cv2.LINE_AA)

    def preprocess(self):
        """
        Preprocesses the input image before performing inference.

        Returns:
            image_data: Preprocessed image data ready for inference.
        """
        # Read the input image using OpenCV
        self.img = cv2.imread(self.input_image)

        # Get the height and width of the input image
        self.img_height, self.img_width = self.img.shape[:2]

        # Convert the image color space from BGR to RGB
        img = cv2.cvtColor(self.img, cv2.COLOR_BGR2RGB)

        # Resize the image to match the input shape
        img = cv2.resize(img, (self.input_width, self.input_height))

        # Normalize the image data by dividing it by 255.0
        image_data = np.array(img) / 255.0

        # Transpose the image to have the channel dimension as the first dimension
        image_data = np.transpose(image_data, (2, 0, 1))  # Channel first

        # Expand the dimensions of the image data to match the expected input shape
        image_data = np.expand_dims(image_data, axis=0).astype(np.float32)

        # Return the preprocessed image data
        return image_data

    def postprocess(self, input_image, output, img_id):
        """
        Performs post-processing on the model's output to extract bounding boxes, scores, and class IDs.

        Args:
            input_image (numpy.ndarray): The input image.
            output (numpy.ndarray): The output of the model.
            img_id: COCO image ID of the current image (used only when saving visualizations).

        Returns:
            boxes_nms, score_nms, class_ds_nms: Boxes, scores, and class IDs kept after non-maximum suppression.
        """
        # Transpose and squeeze the output to match the expected shape
        outputs = np.transpose(np.squeeze(output[0]))
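        # Note (assuming the stock yolov8n model at 640x640 input): the raw ONNX output has shape
        # (1, 84, 8400); after squeeze and transpose it becomes (8400, 84), i.e. 4 box values
        # (cx, cy, w, h) followed by 80 class scores for each candidate detection.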

        # Get the number of rows in the outputs array
        rows = outputs.shape[0]

        # Lists to store the bounding boxes, scores, and class IDs of the detections
        boxes = []
        scores = []
        class_ids = []
        boxes_f = []  # float versions of the boxes, used for NMS and for the returned COCO-format boxes

        # Calculate the scaling factors for the bounding box coordinates
        x_factor = self.img_width / self.input_width
        y_factor = self.img_height / self.input_height
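        # (preprocess() uses a plain cv2.resize with no letterbox padding, so simple per-axis
        #  scale factors are enough to map boxes back to the original image size.)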

        # Iterate over each row in the outputs array
        for i in range(rows):
            # Extract the class scores from the current row
            classes_scores = outputs[i][4:]

            # Find the maximum score among the class scores
            max_score = np.amax(classes_scores)

            # If the maximum score is above the confidence threshold
            if max_score >= self.confidence_thres:
                # Get the class ID with the highest score
                class_id = np.argmax(classes_scores)

                # Extract the bounding box coordinates from the current row
                x, y, w, h = outputs[i][0], outputs[i][1], outputs[i][2], outputs[i][3]

                # Calculate the scaled coordinates of the bounding box
                left_f = float((x - w / 2) * x_factor)
                top_f = float((y - h / 2) * y_factor)
                width_f = float(w * x_factor)
                height_f = float(h * y_factor)

                left = int((x - w / 2) * x_factor)
                top = int((y - h / 2) * y_factor)
                width = int(w * x_factor)
                height = int(h * y_factor)

                # Add the class ID, score, and box coordinates to the respective lists
                class_ids.append(class_id)
                scores.append(max_score)
                boxes.append([left, top, width, height])
                boxes_f.append([left_f, top_f, width_f, height_f])

        # Apply non-maximum suppression to filter out overlapping bounding boxes
        indices = cv2.dnn.NMSBoxes(boxes_f, scores, self.confidence_thres, self.iou_thres)

        boxes_nms, score_nms, class_ds_nms = [], [], []
        # Iterate over the selected indices after non-maximum suppression
        for i in indices:
            # Get the box, score, and class ID corresponding to the index
            boxes_nms.append(boxes_f[i])
            score_nms.append(scores[i])
            class_ds_nms.append(class_ids[i])

            #Draw the detection on the input image
            #self.draw_detections(input_image, boxes[i], scores[i], class_ids[i])

        # output_file = Path(f"./output/{img_id}.jpg")
        # cv2.imwrite(str(output_file), input_image)

        # Return the boxes, scores, and class IDs kept after NMS
        return boxes_nms, score_nms, class_ds_nms

    def main(self):
        """
        Performs inference using an ONNX model and returns the detected boxes, scores, and class IDs.

        Returns:
            boxes, scores, class_ids: Detections for the input image after non-maximum suppression.
        """
        # Create an inference session using the ONNX model and specify execution providers
        session = ort.InferenceSession(self.onnx_model)

        # Get the model inputs
        model_inputs = session.get_inputs()

        # Store the shape of the input for later use
        input_shape = model_inputs[0].shape
        self.input_width = input_shape[2]
        self.input_height = input_shape[3]

        
        #for image_id in self.image_ids:
        # Preprocess the image data
        img_data = self.preprocess()

        # Run inference using the preprocessed image data
        outputs = session.run(None, {model_inputs[0].name: img_data})

        # Perform post-processing on the outputs to obtain the boxes, scores, and class IDs
        boxes, scores, class_ids = self.postprocess(self.img, outputs, image_id)  # image_id is set by the loop at module level

        return boxes, scores, class_ids

if __name__ == "__main__": 
    # Create an argument parser to handle command-line arguments
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, default="./models/yolov8n.onnx", help="Input your ONNX model.")
    parser.add_argument("--img", type=str, default="sample_img.jpg", help="Path to input image.")
    parser.add_argument("--conf-thres", type=float, default=0.5, help="Confidence threshold")
    parser.add_argument("--iou-thres", type=float, default=0.7, help="NMS IoU threshold")
    args = parser.parse_args()

    # Check the requirements and select the appropriate backend (CPU or GPU)
    check_requirements("onnxruntime-gpu" if torch.cuda.is_available() else "onnxruntime")

    # Load the COCO validation annotations and collect the image IDs

    coco_json_path = "./datasets/coco2017/annotations/instances_val2017.json"
    image_dir = "./datasets/coco2017/val2017/"
    coco = COCO(coco_json_path)
    image_ids = coco.getImgIds()
    detections = []
    #image_ids = image_ids[2120:2131]

    for image_id in image_ids:

        img_info = coco.loadImgs(image_id)[0]
        image_path = os.path.join(image_dir, img_info['file_name'])

        detection = YOLOv8(args.model, image_path, args.conf_thres, args.iou_thres)

        # Perform object detection and obtain the detected boxes, scores, and class IDs
        boxes, scores, class_ids = detection.main()

        for box, score, class_id in zip(boxes, scores, class_ids):

            detections.append(
                {
                    "image_id": image_id,
                    "category_id": int(class_id) + 1,
                    "bbox": box,
                    "score": float(score),
                }
            )

    # write detections into a json file
    with open("./COCO_detectins.json", "w") as f:
        json.dump(detections, f, indent=4)
    
    anno = COCO(coco_json_path)  # init annotations api
    pred = anno.loadRes("./COCO_detections.json")  # init predictions api (must pass string, not Path)
    val = COCOeval(anno, pred, "bbox")

    val.evaluate()
    val.accumulate()
    val.summarize()

These are the COCO evaluation results I got.

Have you tried the far simpler way to do this?

from ultralytics import YOLO 

model = YOLO("yolov8n.onnx", task="detect")
results = model.val(data="coco.yaml", save_json=True)

As long as pycocotools is installed, it will also run the COCO eval.

Also, coco8.yaml only contains 8 images, whereas coco.yaml contains the 5000 validation images. If you're using a custom-trained ONNX model, it's likely you'll need to train for longer and/or collect more data to train on.

@BurhanQ Thanks for the response. Yes, I tried the approach you're suggesting. However, I wanted to implement it myself instead of directly using the Ultralytics Python APIs, for learning purposes. Also, I am not using a custom-trained ONNX model; I am using the yolov8n.onnx from Ultralytics directly. I was wondering whether I might be doing something wrong in the prediction or validation parts of the COCO pipeline, since the model works well on sample COCO images, as I mentioned in the post.

If you're looking to learn, instead of implementing something new from scratch, you might try reviewing the steps in the source code for validation. I appreciate your motivation and ambition in trying to implement something on your own, but that's a lot of custom code to try to diagnose, and I don't know if many will have time to help troubleshoot it.

If you want to try troubleshooting (and you should), then my advice is to step through the code incrementally and check that the output makes sense for what each block of code is doing. Learning effective troubleshooting is a critical skill for anything technical; it applies to programming, but also to any other kind of problem solving. So take some time to figure out how to diagnose your code and find the source of the problem. As far as learning goes, it's a skill you will continue to use for the rest of your life (even outside of coding).
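
As one concrete example of that kind of incremental check, here is a rough sketch (not something I've run against your exact files; the paths are just copied from your script): pick a single image_id, print the ground-truth boxes and category IDs from pycocotools next to your converted predictions, and confirm the units, box format, and category IDs all line up before running the full evaluation.

import json

from pycocotools.coco import COCO

# Rough sketch of a per-image sanity check (paths taken from the script above)
coco = COCO("./datasets/coco2017/annotations/instances_val2017.json")
with open("./COCO_detections.json") as f:
    detections = json.load(f)

image_id = coco.getImgIds()[0]  # pick any image ID you want to inspect

# Ground truth for this image: category_id and bbox in [x, y, width, height] pixel format
for ann in coco.loadAnns(coco.getAnnIds(imgIds=image_id)):
    print("GT  ", ann["category_id"], [round(v, 1) for v in ann["bbox"]])

# Your predictions for the same image, which should use the same units and category IDs
for det in (d for d in detections if d["image_id"] == image_id):
    print("PRED", det["category_id"], [round(v, 1) for v in det["bbox"]], round(det["score"], 2))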


Sure. Will do. Thanks