I trained a YOLOv8 model after modifying its backbone with a custom component (APConv). I want to deploy this model using Flask so it can receive images and return object counts. When I try to load the model with YOLO('best.pt') in Flask, I encounter module import errors for the custom layers, even though I have added the .py file that defines APConv.
My Flask code includes routes for uploading images and running inference, but the model fails to load due to missing custom modules. I need guidance on the best way to load or package the model for deployment so that my custom layers are recognized and inference works without errors.
You will have to use your full clone of the modified Ultralytics repository, the one containing your layer changes; if you run against the standard package installed from PyPI, the custom APConv module won't resolve when the checkpoint is loaded. If you haven't already, it would likely make sense to containerize your application: during the image build, copy in the modified Ultralytics repo and install the package from that local directory (e.g. pip install ./ultralytics) instead of from PyPI.
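For reference, here is a minimal sketch of the Flask side once the fork is installed. The route name, the multipart field name ("image"), and the counting logic are illustrative assumptions, not taken from your code:

```python
# Minimal Flask inference sketch (route/field names are examples).
# Requires the modified Ultralytics fork to be installed so that the
# custom APConv layer resolves when best.pt is deserialized.
from collections import Counter

from flask import Flask, jsonify, request
from PIL import Image
from ultralytics import YOLO

app = Flask(__name__)
model = YOLO("best.pt")  # load once at startup, not per request

@app.route("/predict", methods=["POST"])
def predict():
    # Assumes the client sends a multipart form with a field named "image"
    img = Image.open(request.files["image"].stream)
    result = model.predict(img)[0]
    # Count detections per class name, e.g. {"person": 3, "car": 1}
    counts = Counter(result.names[int(c)] for c in result.boxes.cls)
    return jsonify(dict(counts))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Loading the model once at import time avoids re-deserializing the checkpoint on every request.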
Thanks @BurhanQ for your response, it works now.
Also, I was wondering what I need to do to see mAP@30 for my model, in addition to mAP50 and mAP50-95?
I added this to metrics.py.

Trial 1:

```python
def map30(self) -> float:
    return self.all_ap[:, 3].mean()
```

Later I tried:

Trial 2:

```python
def map30(self) -> float:
    return self.all_ap[:, 0].mean()
```

along with:

```python
val_metrics = model.val(data="data.yaml", iou=0.3)
```
But the resulting mAP@30 value does not make sense: it comes out less than or equal to mAP@50, even though loosening the IoU threshold should only raise AP.
On AP@0.30: Ultralytics YOLO only reports AP at 0.50 and 0.50:0.95 by default (0.75 is also exposed via the map75 property), and the iou argument in model.val(iou=0.3) controls the NMS IoU threshold, not the evaluation IoU. That also explains your two trials: all_ap has one column per evaluation threshold, 0.50:0.95 in steps of 0.05, so all_ap[:, 3] is AP@0.65 (which is why it comes out below mAP@50) and all_ap[:, 0] is still plain AP@0.50; there is no column for 0.30. To get AP@0.30 you can either:
Modify the evaluator to include 0.30 in the IoU thresholds and re-run val (the threshold vector, iouv = torch.linspace(0.5, 0.95, 10), is set in the detection validator in recent versions rather than in ultralytics/utils/metrics.py), then read the new column from results.box.all_ap; see the sketch after the COCOeval example below.
Or, for COCO-format datasets, export predictions and run COCOeval with a custom IoU threshold:
```python
# After: model.val(data="coco.yaml", save_json=True)
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt = COCO("instances_val2017.json")   # ground-truth annotations
dt = gt.loadRes("predictions.json")   # predictions written by save_json=True
e = COCOeval(gt, dt, "bbox")
e.params.iouThrs = np.array([0.30])   # evaluate at IoU 0.30 only
e.evaluate()
e.accumulate()
e.summarize()
```
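For the first option, a minimal sketch of the idea follows. Caveats: where the threshold vector lives varies by version (in recent releases it is self.iouv = torch.linspace(0.5, 0.95, 10) in ultralytics/models/yolo/detect/val.py); data.yaml and best.pt are placeholders for your files; and prepending a 0.30 column shifts the positional indexing that the built-in map50/map properties assume, so read all_ap directly:

```python
# Sketch: assumes the validator's threshold vector has been patched to
#   self.iouv = torch.cat((torch.tensor([0.30]), torch.linspace(0.5, 0.95, 10)))
# so that all_ap gains an extra first column holding per-class AP@0.30.
from ultralytics import YOLO

model = YOLO("best.pt")
metrics = model.val(data="data.yaml")    # runs with the patched thresholds
map30 = metrics.box.all_ap[:, 0].mean()  # column 0 is now AP@0.30
print(f"mAP@0.30: {map30:.4f}")
```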
For what is computed out of the box, see the metrics properties in the docs; for example, the explanations of map, map50, and map75 in the metrics reference.
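For reference, those properties can be read straight off the object returned by model.val() (paths below are placeholders):

```python
from ultralytics import YOLO

metrics = YOLO("best.pt").val(data="data.yaml")
print(metrics.box.map)    # mAP@0.50:0.95
print(metrics.box.map50)  # mAP@0.50
print(metrics.box.map75)  # mAP@0.75
```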