Quick Start with Raspberry Pi, Pi Camera, and Libcamera

Today many students and others want to get started quickly with a low-cost, widely available Raspberry Pi and a V1, V2, or HQ Raspberry Pi Camera. Over the past few months, Raspberry Pi OS Bullseye transitioned from what is now termed the “legacy camera stack” to libcamera, followed by the development of Picamera2, the libcamera-based replacement for Picamera, which was a Python interface to the Raspberry Pi’s legacy camera stack.

I am sure Raspberry Pi users around the world would appreciate an update to your Quick Start - YOLOv5 Documentation page to cover a quick start on a Raspberry Pi with a V1, V2, or HQ Pi Camera running Bullseye with libcamera and Picamera2.

Thanks for the feedback. We’ll definitely improve our docs and add more content related to edge devices in the future.

I am providing a Quick Start with Raspberry Pi, Pi Camera (or webcam), and Libcamera:
With the release of Raspberry Pi OS Bullseye, the default camera stack is now libcamera.

This guide explains how to deploy a trained YOLOv5 model on a Raspberry Pi 3 or Pi 4 running the 64-bit version of the operating system with the libcamera camera stack.

Install the Raspberry Pi Operating System

Prepare a microSD card with the latest version of the Raspberry Pi Operating System (64-bit). You can use the Raspberry Pi Imager tool to get everything set up properly.
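Optionally, confirm that the installed OS is really the 64-bit build before continuing. These are standard Linux checks, not specific to this guide: on a 64-bit installation, uname -m should print aarch64, and /etc/os-release should mention bullseye.

uname -m

cat /etc/os-release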

Install Necessary Packages

  1. Make sure the Raspberry Pi is up-to-date

sudo apt-get update

sudo apt-get upgrade -y

sudo apt-get autoremove -y

  2. Clone the YOLOv5 repository

cd ~

git clone https://github.com/ultralytics/yolov5

  3. Install the required dependencies

cd ~/yolov5

sudo pip3 install -r requirements.txt

  4. Install compatible versions of PyTorch and Torchvision (at the time of writing, the latest releases, PyTorch 1.13.0 and Torchvision 0.14.0, were not yet supported on this setup, so this guide pins 1.11.0 and 0.12.0)

sudo pip3 install torch==1.11.0

sudo pip3 install torchvision==0.12.0
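As an optional sanity check (not part of the original steps), verify that the pinned versions import correctly; OpenCV is pulled in by requirements.txt and is needed for reading the video stream. The first two commands should print 1.11.0 and 0.12.0 respectively.

python3 -c "import torch; print(torch.__version__)"

python3 -c "import torchvision; print(torchvision.__version__)"

python3 -c "import cv2; print(cv2.__version__)"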

Adjust detect.py

By default, detect.py doesn’t accept TCP streams as a source, and it fails when run over SSH or from the Raspberry Pi command line because it tries to open a display window. To fix this, make two minor modifications to detect.py:

  1. Open detect.py and find the 'is_url' line

cd ~/yolov5

sudo nano detect.py

CTRL + W → is_url → ENTER

  2. Add TCP streams as an accepted URL format (a sketch of the surrounding logic follows these steps)

is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'tcp://', 'http://', 'https://'))

  3. Find the line that says 'view_img = check_imshow(warn=True)'

CTRL + W → view_img = check_imshow(warn=True) → ENTER

  4. Comment out this line

# view_img = check_imshow(warn=True)

  5. Save the modifications and close detect.py

CTRL + O → ENTER → CTRL + X
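For context, the source-handling logic around the modified line looks roughly like this in the version of detect.py current at the time of writing (paraphrased, not verbatim; IMG_FORMATS and VID_FORMATS are constants imported by detect.py, and exact lines may differ in newer releases):

source = str(source)
is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'tcp://', 'http://', 'https://'))  # 'tcp://' added in step 2
webcam = source.isnumeric() or (is_url and not is_file)  # URL sources that aren't files are treated as streams
# view_img = check_imshow(warn=True)  # commented out in step 4 so headless runs don't fail

With 'tcp://' in the tuple, detect.py treats the stream as a URL source and hands it to its streaming loader instead of rejecting it.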

Initiate a TCP stream with the libcamera driver

libcamera-vid -n -t 0 --width 1280 --height 960 --framerate 1 --inline --listen -o tcp://127.0.0.1:8888
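For reference, here is what each option in that command does (these are standard libcamera-vid options; adjust the resolution and framerate to your needs):

# -n                 : no preview window, required for headless operation
# -t 0               : no timeout, keep streaming until interrupted
# --width/--height   : capture resolution (1280x960 here)
# --framerate 1      : one frame per second keeps the inference load manageable on a Pi
# --inline           : insert H.264 headers into the stream so a client can join mid-stream
# --listen           : wait for an incoming TCP connection (from detect.py) before streaming
# -o tcp://127.0.0.1:8888 : serve the stream on localhost, port 8888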

Perform YOLOv5 inference on the TCP stream

  1. Navigate into the yolov5 directory

cd ~/yolov5

  2. Run detect.py with the TCP stream as its source

python3 detect.py --source tcp://127.0.0.1:8888

  Note: by commenting out 'view_img = check_imshow(warn=True)' we made sure detect.py no longer displays the source frames with predicted bounding boxes by default, which enables inference via SSH or the command line. To re-enable this feature, run detect.py with the --view-img argument:

python3 detect.py --source tcp://127.0.0.1:8888 --view-img
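Two optional variations, both using standard YOLOv5 detect.py arguments: a USB webcam can be read directly, without libcamera-vid, by passing its device index as the source, and a smaller model with a reduced image size may give more usable framerates on a Pi:

python3 detect.py --source 0 --view-img

python3 detect.py --source tcp://127.0.0.1:8888 --weights yolov5n.pt --imgsz 320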