YOLOv8 CUDA out of memory error

Hello everyone, I just built a new computer with two brand-new GPUs (RTX 4060, 8 GB each). I used to train models on my laptop, which had the laptop version of the 4060. Whenever I run the models in VS Code, it tries to allocate memory and gives me the error.

The only parameter that differs from when I trained on the laptop is the batch size. I have tried reducing the batch size, and the model runs until it reaches a GPU memory usage of around 6.1 GB.

Even when I use a fractional batch size (a batch factor), the model still fails to allocate GPU memory, which does not make much sense to me, since I am already reserving only a percentage of the GPU.
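For context, recent Ultralytics releases accept a fractional `batch` value, which triggers AutoBatch to pick the largest batch size that fits within roughly that fraction of GPU memory. A minimal sketch of what the poster likely means by a "batch factor" (paths are placeholders, and this assumes an up-to-date `ultralytics` install with a CUDA device):

```python
from ultralytics import YOLO

# Placeholder weights; substitute your own checkpoint and dataset YAML.
model = YOLO("yolov8s.pt")

# batch=0.70 asks AutoBatch to target ~70% of GPU memory rather than
# using a fixed batch size (Ultralytics feature; requires CUDA).
model.train(
    data="ciudad/dataciudad.yaml",
    epochs=1,
    batch=0.70,
)
```

Note that AutoBatch only sizes the dataloader batch; other allocations (model weights, optimizer state, activation spikes) can still push usage past the target fraction.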

I have made sure the models run on the GPUs and not the CPU. I tried using both GPUs, and it was still very slow while using only 5 or 6 GB of memory, and when switching between GPUs it still runs out of memory.
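For reference, Ultralytics takes a `device` list to train across multiple GPUs with DistributedDataParallel, which splits the batch between the cards. A hedged sketch (weights path and batch size are illustrative, not the poster's actual values):

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # placeholder weights

# device=[0, 1] launches DDP across both GPUs; the total batch of 16 is
# split between them, so each 8 GB card holds only 8 images at a time.
model.train(
    data="ciudad/dataciudad.yaml",
    epochs=1,
    batch=16,
    device=[0, 1],
)
```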

Is this because of my .config file? How can I solve this if I want to get the most out of my GPUs?

What’s the code and actual error?


This is the error:

Exception has occurred: OutOfMemoryError
CUDA out of memory. Tried to allocate 64.00 MiB. GPU 
  File "/home/Juan/Documents/training/test.py", line 5, in <module>
    model.train(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU

and the code is:

from ultralytics import YOLO

model = YOLO("runs/detect/yolov8s_joint2ciudad/weights/best.pt")

model.train(
    data="ciudad/dataciudad.yaml",
    task='detect',
    epochs=1,
    warmup_epochs=0,
)
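A few arguments in a train call like the one above usually lower peak memory, sketched here with illustrative values (not recommendations), plus an optional PyTorch allocator setting that can help when the OOM is caused by fragmentation rather than genuinely full memory:

```python
import os

# Optional: let the CUDA caching allocator use expandable segments,
# which can mitigate fragmentation-related OOMs (PyTorch >= 2.0).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

from ultralytics import YOLO

model = YOLO("runs/detect/yolov8s_joint2ciudad/weights/best.pt")
model.train(
    data="ciudad/dataciudad.yaml",
    task="detect",
    epochs=1,
    warmup_epochs=0,
    batch=8,      # smaller batch lowers peak memory linearly
    imgsz=640,    # smaller input images shrink activation memory sharply
    cache=False,  # do not cache the dataset in RAM
)
```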

Can you provide the output after running this command in terminal: yolo checks?