Hello everyone, I just built a new computer with two brand-new GPUs (RTX 4060, 8 GB each). I used to train models on my laptop, which had the laptop version of the 4060. Whenever I run the models in VS Code, they fail while trying to allocate GPU memory and give me an out-of-memory error.
The only parameter that differs from my laptop setup is the batch size. I have tried reducing it, and the model runs until GPU memory usage reaches around 6.1 GB, then fails.
Even when I use batch factors, the model still gives me the error about not being able to allocate GPU memory (which does not make much sense to me, as I am already reserving only a fraction of the GPU's memory).
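For illustration, this is the kind of thing I mean by reserving a fraction of the GPU (a minimal sketch using TensorFlow 2's `tf.config` API as an example; the 6 GB cap is just an illustrative number, not my exact setting):

```python
import tensorflow as tf

# List the physical GPUs TensorFlow can see (should show both 4060s).
gpus = tf.config.list_physical_devices("GPU")
print(gpus)

for gpu in gpus:
    # Option 1: let TensorFlow grow memory as needed instead of
    # grabbing (almost) all 8 GB up front.
    tf.config.experimental.set_memory_growth(gpu, True)

    # Option 2 (use instead of Option 1, not together): hard-cap each
    # GPU at a fixed amount, e.g. ~6 GB, which is what "reserving a
    # fraction of the GPU" amounts to.
    # tf.config.set_logical_device_configuration(
    #     gpu,
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=6144)],
    # )
```

Either option has to run before the GPUs are first used, otherwise the setting is ignored.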
I have made sure the models run on the GPUs and not the CPU. I tried using both GPUs, and training was still very slow while using only 5 or 6 GB of memory, and when I switch between GPUs it still runs out of memory.
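To be concrete about "using both GPUs", the approach looks something like this (TensorFlow's `MirroredStrategy` shown as an example, with a tiny placeholder model and dummy data rather than my real training code):

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model on both GPUs and splits each
# global batch between them, so per-GPU memory use should be lower.
strategy = tf.distribute.MirroredStrategy(devices=["/GPU:0", "/GPU:1"])
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Tiny stand-in model; my real model is larger, this is just to
    # check that both GPUs are actually being used.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Dummy data; the global batch size (64) is split 32/32 across the GPUs.
x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 10, size=(1024,))
model.fit(x, y, batch_size=64, epochs=1)
```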
Is this because of my .config file? How can I solve this if I want to get the most out of my GPUs?