Low confidence scores when running YOLOv11n TFLite model with Android GPU delegate

I am integrating a YOLOv11n object detection model (exported to TFLite float16) into my React Native app using react-native-fast-tflite. The model loads successfully, but the predictions returned by runSync() have extremely low confidence scores (nearly zero).

Model Details:
Base Model: YOLOv11n
Format: TFLite (float16; int8 gives the same result)
Input shape: [1, 320, 320, 3]
Output shape: [1, 8, 2100] (4 box coordinates + 4 class scores, 2100 anchors)

const model = useTensorflowModel(
    require('@/assets/best_float16.tflite'),
    'android-gpu'
);

runAtTargetFps(8, () => {
    'worklet'

    const resized = resize(frame, {
        dataType: 'float32',
        pixelFormat: 'rgb',
        scale: {width: 320, height: 320},
    });

    const outputs = actualModel.runSync([resized])
    const detections = outputs[0] // length = 8 * 2100

    // Output is channel-major: channel k for anchor i sits at k * 2100 + i
    for (let i = 0; i < 2100; i++) {
        for (let c = 0; c < 4; c++) {
            const rawScore = detections[(4 + c) * 2100 + i]
            console.log("Raw score: ", rawScore)
        }
        console.log("End of anchor ", i)
    }
})

The log is:

LOG Raw score: 0.000007644203833478969
LOG Raw score: 2.3472063048757263e-7
LOG Raw score: 3.0208294532485525e-8
LOG Raw score: 0.0000011134524129374768
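For reference, the nested raw-score loop can be folded into a small decode helper that keeps only anchors whose best class score clears a threshold. This is a sketch, not part of the original code: decodeDetections and the threshold value are illustrative names, assuming the channel-major [1, 8, 2100] layout indexed above.

```typescript
// One decoded detection from a channel-major YOLO output buffer.
interface Detection {
    box: { x: number; y: number; w: number; h: number };
    classId: number;
    score: number;
}

// Buffer layout (channel-major): [x, y, w, h, score_0 .. score_{numClasses-1}],
// where channel k for anchor i lives at index k * numAnchors + i.
function decodeDetections(
    out: Float32Array,
    numAnchors: number,
    numClasses: number,
    threshold: number,
): Detection[] {
    const detections: Detection[] = [];
    for (let i = 0; i < numAnchors; i++) {
        // Find the best-scoring class for this anchor.
        let best = 0;
        let bestClass = -1;
        for (let c = 0; c < numClasses; c++) {
            const s = out[(4 + c) * numAnchors + i];
            if (s > best) {
                best = s;
                bestClass = c;
            }
        }
        if (bestClass >= 0 && best >= threshold) {
            detections.push({
                box: {
                    x: out[0 * numAnchors + i],
                    y: out[1 * numAnchors + i],
                    w: out[2 * numAnchors + i],
                    h: out[3 * numAnchors + i],
                },
                classId: bestClass,
                score: best,
            });
        }
    }
    return detections;
}
```

With correct preprocessing you would call something like decodeDetections(detections, 2100, 4, 0.5) and then run NMS on the survivors; with the broken preprocessing, no anchor clears even a low threshold.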

Near-zero scores on every anchor almost always mean the preprocessing is wrong, not the model. Two things to fix here:

1. Resize method. A plain resize to 320×320 stretches the frame and distorts the aspect ratio. Ultralytics exports are trained with letterbox preprocessing: scale the image to fit while preserving aspect ratio, then pad the remainder. Letterbox the frame instead of stretching it.

2. Normalization. The model expects float input in the range it was exported with (typically 0–1). Verify that the resized float32 buffer is normalized; if the pixel values are 0–255, divide by 255 before inference.
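A minimal sketch of the letterbox geometry, assuming a 320×320 model input; computeLetterbox is an illustrative name, and if the resize plugin can't pad directly you would use these numbers to place the scaled image into a padded buffer yourself:

```typescript
// Letterbox geometry: scale the source to fit the target while preserving
// aspect ratio; the leftover area is padded (conventionally with gray, 114/255).
interface Letterbox {
    scale: number;   // uniform scale factor applied to the source
    scaledW: number; // source width after scaling
    scaledH: number; // source height after scaling
    padX: number;    // horizontal padding on each side
    padY: number;    // vertical padding on each side
}

function computeLetterbox(
    srcW: number, srcH: number,
    dstW: number, dstH: number,
): Letterbox {
    const scale = Math.min(dstW / srcW, dstH / srcH);
    const scaledW = Math.round(srcW * scale);
    const scaledH = Math.round(srcH * scale);
    return {
        scale,
        scaledW,
        scaledH,
        padX: Math.floor((dstW - scaledW) / 2),
        padY: Math.floor((dstH - scaledH) / 2),
    };
}
```

The same numbers are needed again after inference to map detections back onto the original frame: subtract padX/padY from the box coordinates, then divide by scale.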