Hello, I am working on deploying a segmentation model on my Raspberry Pi 5, but I’ve encountered a performance issue. Surprisingly, the model performs better when converted to the NCNN format compared to using the Google Coral TPU.
With the NCNN format, the model processes frames in about 30–40 ms, whereas with the TPU it takes 80–90 ms per frame. This is unexpected, as I anticipated that the TPU would deliver better performance.
I’m unsure if I’ve made a mistake in my setup or configuration. Any advice or insights would be greatly appreciated.
Can you try exporting with imgsz=512? It should give you better performance, since at a higher imgsz some ops don't get mapped to the TPU and fall back to the CPU. You can also look at the benchmarks here
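A minimal sketch of what that export looks like, assuming an Ultralytics YOLO segmentation model (the weights file `yolo11n-seg.pt` is just an example; substitute your own):

```python
from ultralytics import YOLO

# Load the trained segmentation weights (file name is an example).
model = YOLO("yolo11n-seg.pt")

# Export for the Coral Edge TPU at a smaller input size, so more ops
# compile for the TPU instead of falling back to the CPU.
model.export(format="edgetpu", imgsz=512)

# For a fair comparison, export the NCNN version at the same input size.
model.export(format="ncnn", imgsz=512)
```

Keeping imgsz identical across both exports matters, since latency scales with input resolution and an apples-to-oranges comparison would hide the real TPU behavior.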
I already tried that, but the performance remained the same. It's possible the Raspberry Pi is running in a low-power state, which restricts the power available to peripherals and could explain the poor performance.
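If you suspect power limiting, the Pi's firmware reports it directly; a quick check (assuming Raspberry Pi OS, where `vcgencmd` is available):

```shell
# Query the firmware throttle flags.
vcgencmd get_throttled
# throttled=0x0 means no under-voltage or frequency capping.
# Bit 0 (0x1) = under-voltage now; bit 16 (0x10000) = under-voltage has occurred.

# Confirm the CPU clock isn't being capped.
vcgencmd measure_clock arm
```

If get_throttled reports anything other than 0x0, a stronger power supply (the Pi 5 wants a 5V/5A USB-C PD supply, especially with a TPU attached) is the first thing to try before blaming the model.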