I'm using YOLOv8 to detect jellyfish from a live video feed. I'm recording at 30 FPS; for each frame I track the jellyfish, calculate the error, and move motors to keep it in frame. Everything works great, but the inference time slows down after long use.
My computer has an RTX 3090, which I select with model.to("cuda:0"), and Task Manager's performance tab confirms the GPU is actually being used.
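For context, the main loop looks roughly like this (simplified; the weights path, camera index, and the track_jellyfish / move_motors helpers are placeholders standing in for my tracking and motor code):

import cv2
import torch
from ultralytics import YOLO

modelJF = YOLO("jellyfish.pt")   # placeholder weights path
modelJF.to("cuda:0")

cap = cv2.VideoCapture(0)        # 30 FPS live feed (placeholder camera index)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_resized = cv2.resize(frame, (1024, 1024))

    with torch.no_grad():
        results = modelJF.predict(frame_resized, imgsz=1024, conf=0.25,
                                  iou=0.7, half=True, device="cuda:0",
                                  verbose=True)

    # track_jellyfish / move_motors are placeholders for my tracking
    # and motor-control code
    error = track_jellyfish(results)
    move_motors(error)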
Initially I get printouts like this, and everything is working great:
0: 1024x1024 1 Jelly-Fish, 14.0ms
Speed: 5.7ms preprocess, 14.0ms inference, 1.3ms postprocess per image at shape (1, 3, 1024, 1024)
After 2 hrs or so I get:
0: 1024x1024 1 Jelly-Fish, 418.8ms
Speed: 22.9ms preprocess, 418.8ms inference, 41.7ms postprocess per image at shape (1, 3, 1024, 1024)
Code below
with torch.no_grad():
    results = modelJF.predict(frame_resized, imgsz=1024, conf=0.25, iou=0.7,
                              half=True, device="cuda:0", verbose=True)
I’ve tried
torch.cuda.empty_cache()
torch.cuda.synchronize()
or passing tensors directly to the model instead of going through .predict (rough sketch of that attempt below), but none of that seems to help.
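The tensor attempt looked roughly like this (simplified; the preprocessing is my own approximation of what .predict does, and modelJF.model is the underlying torch module):

import cv2
import torch

# Rough version of the "skip .predict" attempt: preprocess the frame myself
# and call the underlying torch module directly (raw head output, no NMS)
img = cv2.cvtColor(frame_resized, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (1024, 1024))
tensor = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)  # (1, 3, 1024, 1024)
tensor = tensor.float().div(255).to("cuda:0")  # weights assumed FP32 here

with torch.no_grad():
    raw = modelJF.model(tensor)  # raw detections, would still need NMS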