In our soccer iOS app, we need to draw the segmentation masks of goals onto the screen.
How can we achieve that? This is our code in Swift:
import CoreML
import Vision

// Load the Core ML model and wrap it for Vision.
if let model = try? yolo8n_segment(configuration: MLModelConfiguration()).model,
   let nnmodel = try? VNCoreMLModel(for: model) {
    // Load a test image from the app bundle.
    let url = Bundle.main.url(forResource: "img0", withExtension: "jpg")!
    let data = try? Data(contentsOf: url)

    // Run the segmentation request on the image data.
    let request = VNCoreMLRequest(model: nnmodel)
    let handler = VNSequenceRequestHandler()
    try? handler.perform([request], onImageData: data!)

    // The results arrive as raw feature values, not as mask observations.
    let results = request.results as? [VNCoreMLFeatureValueObservation]
}
Currently we are getting results of type VNCoreMLFeatureValueObservation, but the useful data is “hidden” in them as raw model output: doubly nested arrays of numbers instead of ready-to-use masks. Apple’s Vision framework does provide types like VNInstanceMaskObservation, but a YOLO instance segmentation model wrapped in a VNCoreMLRequest never gives us VNInstanceMaskObservation results.
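For context, here is roughly the kind of manual decoding we end up doing ourselves to get a single mask out of the raw output. This is only a sketch: the output order and shapes below (a [1, 116, 8400] prediction tensor with 4 box coordinates, 80 class scores, and 32 mask coefficients per anchor, plus a [1, 32, 160, 160] prototype tensor) are assumptions based on a typical Ultralytics yolov8n-seg Core ML export and may differ for other exports:

import CoreML
import Foundation
import Vision

// Sketch only — assumes our yolov8n-seg export layout:
//   results[0]: predictions [1, 116, 8400] — 4 box coords + 80 class scores
//               + 32 mask coefficients per anchor
//   results[1]: prototype masks [1, 32, 160, 160]
func decodeBestMask(from results: [VNCoreMLFeatureValueObservation]) -> [[Float]]? {
    guard results.count == 2,
          let preds = results[0].featureValue.multiArrayValue,
          let protos = results[1].featureValue.multiArrayValue else { return nil }

    let numClasses = 80                       // assumption: COCO class set
    let numAnchors = preds.shape[2].intValue  // 8400 in our export
    let coeffOffset = 4 + numClasses          // mask coefficients start at index 84

    // Pick the anchor with the highest class score (no NMS, just a sketch).
    var bestAnchor = 0
    var bestScore: Float = -.infinity
    for a in 0..<numAnchors {
        for c in 0..<numClasses {
            let score = preds[[0, 4 + c, a] as [NSNumber]].floatValue
            if score > bestScore {
                bestScore = score
                bestAnchor = a
            }
        }
    }

    // Read the 32 mask coefficients of the best detection.
    let coeffs = (0..<32).map {
        preds[[0, coeffOffset + $0, bestAnchor] as [NSNumber]].floatValue
    }

    // Linear combination of the prototype masks, then sigmoid and threshold.
    let h = protos.shape[2].intValue          // 160 in our export
    let w = protos.shape[3].intValue          // 160 in our export
    var mask = [[Float]](repeating: [Float](repeating: 0, count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            var v: Float = 0
            for k in 0..<32 {
                v += coeffs[k] * protos[[0, k, y, x] as [NSNumber]].floatValue
            }
            let p = 1 / (1 + exp(-v))         // sigmoid
            mask[y][x] = p > 0.5 ? 1 : 0
        }
    }
    return mask
}

And even then the result is only a 160×160 mask that still has to be resized, cropped to the detection’s bounding box, and drawn, which is why we are hoping for something simpler.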
Is there an easy way to extract the masks from a YOLO segmentation model in iOS?
Greetings,
Dom