Connecting a live camera to a YOLO11 model on a Raspberry Pi

Hi

I am following the examples outlined in Quick Start Guide: Raspberry Pi with Ultralytics YOLO11

The example uses a JPG image, but how can I get the data feed from a camera that is already connected to my Raspberry Pi? I’ve read that you can’t do it directly and need to assign the camera an IP address, then have the model read the stream directly from that IP address.

Can someone advise me on how I can achieve this?

What camera is it?

It is the:

Waveshare IMX290-83 IR-Cut Camera, compatible with the Raspberry Pi board and module series. It uses an IMX290 starlight sensor (2 MP) with an onboard IR-CUT filter to switch between daytime and nighttime modes.

This code seems to be for that camera. Once you get the frame there, you can just pass it to model.predict():

results = model.predict(frame, show=True)

and it will show the predictions.

Thanks

But for some reason, I am not able to get my code working with the camera without my Hailo-8L module, so I think I compiled the Ultralytics model for the Hailo-8L module instead of for direct inference.

Sounds like your model was compiled for the Hailo‑8L, so it won’t run with the standard Ultralytics runtime. To use the Pi camera directly (without the module), run a regular YOLO11 .pt or export to NCNN and feed frames from picamera2.

Quick check: verify the camera first with rpicam-hello (it shows a 5-second preview). If that works, use picamera2 and pass frames to YOLO as in the Raspberry Pi camera guide:

import cv2
from picamera2 import Picamera2
from ultralytics import YOLO

# Configure the Pi camera for 1280x720 RGB frames
picam2 = Picamera2()
picam2.preview_configuration.main.size = (1280, 720)
picam2.preview_configuration.main.format = "RGB888"
picam2.preview_configuration.align()
picam2.configure("preview")
picam2.start()

model = YOLO("yolo11n.pt")  # use a standard .pt or your trained .pt

while True:
    frame = picam2.capture_array()  # grab one frame as a numpy array
    res = model(frame)[0]           # run inference
    cv2.imshow("YOLO", res.plot())  # draw boxes and show the frame
    if cv2.waitKey(1) == ord("q"):  # press q to quit
        break
cv2.destroyAllWindows()

Alternatively, stream the camera and use the TCP URL:

rpicam-vid -n -t 0 --inline --listen -o tcp://127.0.0.1:8888
yolo predict model=yolo11n.pt source="tcp://127.0.0.1:8888"

For best Pi performance, export to NCNN and use the exported model:

pip install -U ultralytics
yolo export model=yolo11n.pt format=ncnn
yolo predict model=yolo11n_ncnn_model source="tcp://127.0.0.1:8888"

Step-by-step examples are in the Raspberry Pi camera section and the inference sources overview in the docs.

If it still fails, please share the exact error/traceback and whether you’re trying CPU/NCNN or the Hailo path.

@pderrenger Sorry for the late reply
When I run rpicam-vid -n -t 0 --inline --listen -o tcp://127.0.0.1:8888, it seems I am only able to view one frame. I am viewing the stream using VLC.

Hello @dj3000,

I will further guide you in this issue. Could you please try this code snippet and let me know the results?

import cv2
from picamera2 import Picamera2
from ultralytics import YOLO

picam2 = Picamera2()
picam2.preview_configuration.main.size = (1280, 720)
picam2.preview_configuration.main.format = "RGB888"
picam2.preview_configuration.align()
picam2.configure("preview")
picam2.start()

model = YOLO("yolo11n.pt")

while True:
    frame = picam2.capture_array()
    results = model(frame)
    annotated_frame = results[0].plot()
    cv2.imshow("Camera", annotated_frame)
    if cv2.waitKey(1) == ord("q"):
        break

cv2.destroyAllWindows()