Guidelines
Post the output from running yolo checks in the CLI, or write out your computer specs, including:
Operating System
CPU
RAM
GPU (make/model/vRAM)
Python version
PyTorch version
Ultralytics version
and then share the performance results from your PC running the following CLI command:
yolo benchmark model=yolov8n.pt \
data='coco128.yaml' \
imgsz=640 \
half=False \
device=0
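If you prefer the Python API over the CLI, the same environment check and benchmark can be run programmatically. This is a minimal sketch assuming a recent ultralytics install; device=0 targets the first CUDA GPU, and you can pass device="cpu" for a CPU-only run.

from ultralytics import checks
from ultralytics.utils.benchmarks import benchmark

# Print the environment summary (OS, CPU, RAM, GPU, Python/torch/ultralytics versions)
checks()

# Same benchmark as the CLI command above: exports yolov8n.pt to each format and
# reports model size, mAP50-95 on coco128, and inference speed for every format
benchmark(model="yolov8n.pt", data="coco128.yaml", imgsz=640, half=False, device=0)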
YOLO Checks
Ultralytics YOLOv8.2.48 🚀 Python-3.10.12 torch-2.2.0+cu121 CUDA:0 (NVIDIA GeForce RTX 2060, 5924MiB)
Setup complete ✅ (12 CPUs, 15.6 GB RAM, 81.5/101.0 GB disk)
OS POP_OS!
Environment Linux
Python 3.10.12
Install git
RAM 15.56 GB
CPU AMD Ryzen 5 1600 Six-Core Processor
CUDA 12.1
matplotlib ✅ 3.8.1>=3.3.0
opencv-python ✅ 4.8.1.78>=4.6.0
pillow ✅ 10.1.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.31.0>=2.23.0
scipy ✅ 1.11.3>=1.4.1
torch ✅ 2.2.0>=1.8.0
torchvision ✅ 0.17.0>=0.9.0
tqdm ✅ 4.66.1>=4.64.0
psutil ✅ 5.9.6
py-cpuinfo ✅ 9.0.0
thop ✅ 0.1.1-2209072238>=0.1.1
pandas ✅ 2.1.3>=1.1.4
seaborn ✅ 0.13.0>=0.11.0
Benchmark Results
Benchmarks complete for yolov8n.pt on coco128.yaml at imgsz=640 (387.02s)
    Format                  Status  Size (MB)  metrics/mAP50-95(B)  Inference time (ms/im)  FPS
 0  PyTorch                 ✅      6.2        0.4478               17.69                   56.52
 1  TorchScript             ✅      12.4       0.4524               6.35                    157.47
 2  ONNX                    ✅      12.2       0.4524               72.50                   13.79
 3  OpenVINO                ❌      0.0        NaN                  NaN                     NaN
 4  TensorRT                ✅      19.3       0.4524               3.51                    285.28
 5  CoreML                  ❌      0.0        NaN                  NaN                     NaN
 6  TensorFlow SavedModel   ✅      30.6       0.4524               71.04                   14.08
 7  TensorFlow GraphDef     ✅      12.3       0.4524               72.12                   13.87
 8  TensorFlow Lite         ❌      0.0        NaN                  NaN                     NaN
 9  TensorFlow Edge TPU     ❌      0.0        NaN                  NaN                     NaN
10  TensorFlow.js           ❌      0.0        NaN                  NaN                     NaN
11  PaddlePaddle            ✅      24.4       0.4524               326.35                  3.06
12  NCNN                    ✅      12.2       0.4524               73.36                   13.63
Awesome, thanks for sharing! You can see results from the last 24 hours here in our daily YOLO benchmarks:
Hnnng
YOLO Checks
Ultralytics YOLOv8.2.69 🚀 Python-3.11.9 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 4090, 24564MiB)
Setup complete ✅ (384 CPUs, 511.7 GB RAM, 3038.9/3725.2 GB disk)
OS Windows-10-10.0.22631-SP0
Environment Windows
Python 3.11.9
Install pip
RAM 511.71 GB
CPU AMD EPYC 9654 96-Core Processor
CUDA 12.1
numpy ✅ 1.26.3<2.0.0,>=1.23.0
matplotlib ✅ 3.9.1>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.2.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.0>=1.4.1
torch ✅ 2.2.2+cu121>=1.8.0
torchvision ✅ 0.17.2+cu121>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 6.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.2>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.0>=2.0.0
Benchmark
Benchmarks complete for yolov8n.pt on coco128.yaml at imgsz=640 (334.71s)
    Format                  Status  Size (MB)  metrics/mAP50-95(B)  Inference time (ms/im)  FPS
 0  PyTorch                 ✅      6.2        0.4472               19.56                   51.13
 1  TorchScript             ✅      12.4       0.4520               4.71                    212.35
 2  ONNX                    ✅      12.2       0.4522               8.47                    118.00
 3  OpenVINO                ❌      0.0        NaN                  NaN                     NaN
 4  TensorRT                ✅      17.3       0.4512               3.15                    317.07
 5  CoreML                  ❌      0.0        NaN                  NaN                     NaN
 6  TensorFlow SavedModel   ✅      30.6       0.4524               30.13                   33.18
 7  TensorFlow GraphDef     ✅      12.3       0.4524               31.88                   31.37
 8  TensorFlow Lite         ❌      0.0        NaN                  NaN                     NaN
 9  TensorFlow Edge TPU     ❌      0.0        NaN                  NaN                     NaN
10  TensorFlow.js           ❌      0.0        NaN                  NaN                     NaN
11  PaddlePaddle            ✅      24.4       0.4524               287.29                  3.48
12  NCNN                    ✅      12.2       0.4524               55.07                   18.16
YOLO Checks
Ultralytics YOLOv8.2.77 🚀 Python-3.10.14 torch-2.3.1 CUDA:0 (Tesla T4, 14916MiB)
Setup complete ✅ (64 CPUs, 433.0 GB RAM, 115.4/992.2 GB disk)
OS Linux-5.15.0-1017-azure-x86_64-with-glibc2.35
Environment Docker
Python 3.10.14
Install git
RAM 433.01 GB
CPU AMD EPYC 7V12 64-Core Processor
CUDA 12.1
numpy ✅ 1.23.5<2.0.0,>=1.23.0
matplotlib ✅ 3.9.2>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.3.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.32.2>=2.23.0
scipy ✅ 1.14.0>=1.4.1
torch ✅ 2.3.1>=1.8.0
torchvision ✅ 0.18.1>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 5.9.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.2>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.0>=2.0.0
CPU (AMD EPYC 7V12) Benchmark
yolo benchmark model=yolov8n.pt data="coco128.yaml" imgsz=640 half=False device="cpu"
Benchmarks complete for yolov8n.pt on coco128.yaml at imgsz=640 (292.30s)
    Format                  Status  Size (MB)  metrics/mAP50-95(B)  Inference time (ms/im)  FPS
 0  PyTorch                 ✅      6.2        0.4478               45.47                   21.99
 1  TorchScript             ✅      12.4       0.4524               47.08                   21.24
 2  ONNX                    ✅      12.2       0.4524               76.91                   13.00
 3  OpenVINO                ✅      12.3       0.4524               31.68                   31.57
 4  TensorRT                ❌      0.0        NaN                  NaN                     NaN
 5  CoreML                  ❎      6.2        NaN                  NaN                     NaN
 6  TensorFlow SavedModel   ✅      30.6       0.4524               63.86                   15.66
 7  TensorFlow GraphDef     ✅      12.3       0.4524               62.04                   16.12
 8  TensorFlow Lite         ✅      12.3       0.4524               116.79                  8.56
 9  TensorFlow Edge TPU     ❎      3.9        NaN                  NaN                     NaN
10  TensorFlow.js           ❎      12.3       NaN                  NaN                     NaN
11  PaddlePaddle            ✅      24.4       0.4524               147.00                  6.80
12  NCNN                    ✅      12.2       0.4524               125.02                  8.00
GPU (Tesla T4) Benchmark
Benchmarks complete for yolov8n.pt on coco128.yaml at imgsz=640 (384.53s)
    Format                  Status  Size (MB)  metrics/mAP50-95(B)  Inference time (ms/im)  FPS
 0  PyTorch                 ✅      6.2        0.4478               13.54                   73.87
 1  TorchScript             ✅      12.4       0.4524               4.19                    238.70
 2  ONNX                    ✅      12.2       0.4524               112.69                  8.87
 3  OpenVINO                ❌      0.0        NaN                  NaN                     NaN
 4  TensorRT                ✅      17.4       0.4523               2.86                    349.36
 5  CoreML                  ❌      0.0        NaN                  NaN                     NaN
 6  TensorFlow SavedModel   ✅      30.6       0.4524               65.01                   15.38
 7  TensorFlow GraphDef     ✅      12.3       0.4524               64.78                   15.44
 8  TensorFlow Lite         ❌      0.0        NaN                  NaN                     NaN
 9  TensorFlow Edge TPU     ❌      0.0        NaN                  NaN                     NaN
10  TensorFlow.js           ❌      0.0        NaN                  NaN                     NaN
11  PaddlePaddle            ✅      24.4       0.4524               278.00                  3.60
12  NCNN                    ✅      12.2       0.4524               91.11                   10.98