YOLOv8 GPU training: Box(P, R, mAP50, mAP50-95) results are all 0

I have a problem with my GPU setup when training on custom data with yolov8n.pt (or any other model). The problem doesn't happen when I train on the CPU, but since the CPU is around 10x slower, I want to get training working on the GPU.

This is the CLI one-liner:

yolo task=detect mode=train epochs=150 data=custom.yaml model=yolov8s.pt imgsz=640 batch=8 name=yolov8s_small_gpu patience=80
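For reference, here is a minimal Python-API equivalent of that command (a sketch, assuming the ultralytics package is installed; device=0 is my addition, to pin training to the first GPU explicitly):

from ultralytics import YOLO

# Load the pretrained small model, same as model=yolov8s.pt on the CLI
model = YOLO("yolov8s.pt")

# Mirror the CLI arguments; device=0 explicitly selects the first CUDA GPU
model.train(
    data="custom.yaml",
    epochs=150,
    imgsz=640,
    batch=8,
    name="yolov8s_small_gpu",
    patience=80,
    device=0,
)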

This is my NVIDIA GPU: NVIDIA GeForce GTX 1650 Ti

CUDA version: 11.0
cuDNN version: 8.9

PyTorch appears to be compatible, because torch.cuda.is_available() returns True.
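This is the quick check I run from Python to confirm the CUDA setup (a minimal sketch using standard PyTorch calls; the values in the comments are examples, not my actual output):

import torch

# Report the installed PyTorch build and the CUDA version it was compiled against
print(torch.__version__)               # e.g. 2.0.1+cu118
print(torch.version.cuda)              # CUDA version of the PyTorch build

# Confirm the GPU is visible to PyTorch
print(torch.cuda.is_available())       # True
print(torch.cuda.get_device_name(0))   # NVIDIA GeForce GTX 1650 Ti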

This is my nvidia-smi output:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.41                 Driver Version: 531.41       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                  TCC/WDDM     | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf      Pwr:Usage/Cap      | Memory-Usage         | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1650 Ti  WDDM   | 00000000:01:00.0 Off |                  N/A |
| N/A   85C    P0             45W / N/A   |   3934MiB /  4096MiB |     95%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory  |
|        ID   ID                                                             Usage       |
|=======================================================================================|
|    0   N/A  N/A     19212      C   …Programs\Python\Python39\python.exe        N/A     |
+---------------------------------------------------------------------------------------+

This is my YOLO training output while the run is in progress:

  Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
  6/150      3.62G      1.159     0.9253      1.344         17        640: 100%|██████████| 173/173 [01:22<00:00,
             Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 22/22 [00:12
               all        347        386          0          0          0          0
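To double-check those metrics outside the training loop, validation can also be run separately on the saved weights (a sketch; runs/detect/yolov8s_small_gpu/weights/last.pt is the default Ultralytics save path for this run name, so adjust it if yours differs):

from ultralytics import YOLO

# Load the last checkpoint written during training (default Ultralytics save path)
model = YOLO("runs/detect/yolov8s_small_gpu/weights/last.pt")

# Re-compute P, R, mAP50, and mAP50-95 on the validation split of the same dataset
metrics = model.val(data="custom.yaml", imgsz=640, device=0)
print(metrics.box.map50, metrics.box.map)  # mAP50 and mAP50-95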

What could be the problem? Should I just let it run?

I am having the same issue. Is there a solution?

Hi! We are moving the Ultralytics community to Discord. To receive support for your questions, problems, issues, etc., please join us on the new server: Ultralytics