I have a question about my YOLOv8 setup. When I run it:

(ultralytics) PS F:\yolov8\ultralytics> yolo task=detect mode=predict model=yolov8n.pt conf=0.25 source='ultralytics/assets/bus.jpg'
Ultralytics YOLOv8.0.139 Python-3.9.17 torch-2.0.1 CUDA:0 (NVIDIA GeForce GTX 1660 Ti with Max-Q Design, 6144MiB)
YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients

Traceback (most recent call last):
  File "D:\Anaconda3\envs\ultralytics\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\Anaconda3\envs\ultralytics\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\Anaconda3\envs\ultralytics\Scripts\yolo.exe\__main__.py", line 7, in <module>
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\cfg\__init__.py", line 410, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\engine\model.py", line 254, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\engine\predictor.py", line 200, in predict_cli
    for _ in gen:  # running CLI inference without accumulating any outputs (do not modify)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\engine\predictor.py", line 255, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\models\yolo\detect\predict.py", line 14, in postprocess
    preds = ops.non_max_suppression(preds,
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\utils\ops.py", line 261, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\torch\_ops.py", line 502, in __call__
    return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at C:\b\abs_61prww4bv9\croot\torchvision_1689079992237\work\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\b\abs_61prww4bv9\croot\torchvision_1689079992237\work\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at C:\cb\pytorch_1000000000000\work\torch\csrc\autograd\TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]
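
In case it helps with diagnosis, here is how I am checking my environment (my own sketch, not from the ultralytics code; from what I have read, this error usually means a CPU-only torchvision build sits next to a CUDA build of torch):

```python
# environment check (my own sketch): print the version strings and CUDA info
# for both packages; a mismatch between a CUDA torch and a CPU-only
# torchvision would explain the missing torchvision::nms CUDA kernel
import torch
import torchvision

print("torch:", torch.__version__)              # e.g. "2.0.1" vs "2.0.1+cpu"
print("torchvision:", torchvision.__version__)  # a "+cpu" suffix would mean a CPU-only wheel
print("torch CUDA build:", torch.version.cuda)  # None for a CPU-only torch build
print("CUDA available:", torch.cuda.is_available())
```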

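To narrow it down further, I believe a direct call to torchvision.ops.nms on CUDA tensors, completely outside of YOLO, should hit the same NotImplementedError if the torchvision build really lacks CUDA kernels (again my own sketch):

```python
# direct reproduction attempt (my own sketch): call torchvision.ops.nms on
# CUDA tensors, bypassing ultralytics entirely, to confirm where the failure lives
import torch
import torchvision

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")

# expected to raise the same NotImplementedError if torchvision has no CUDA support
keep = torchvision.ops.nms(boxes, scores, 0.5)
print(keep)
```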
I still want to use the GPU for inference. Please help me, thank you!
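
For completeness, I assume I could side-step the failing CUDA NMS call by forcing CPU inference through the ultralytics Python API (untested sketch below), but CPU speed is not what I am after:

```python
# untested workaround sketch (my own): force CPU inference so the CUDA
# kernel for torchvision::nms is never needed; slow, but should run
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.predict(source="ultralytics/assets/bus.jpg", conf=0.25, device="cpu")
```

Any pointers on getting the GPU path itself working would be great.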