Ultralytics Platform Configuration and HEF Model Export

Hi All,

I am very interested in using the cloud-based Ultralytics platform, where I can upload my dataset and use cloud-based resources to train a model. However, I have a few questions:

  1. Are there any video examples that walk through the full process?

  2. I would like to start with a standard model and then add additional datasets to improve and expand what the original model can detect. Is this the correct approach?

  3. We plan to run this model on Hailo-8 modules with a Raspberry Pi. How can we export the model in the native HEF format?

Thank you in advance for your support.

  1. Since things are still changing regularly on the Platform, there is no video walkthrough yet.
  2. Training is not additive. When you start from a pretrained model (the default weights are pretrained on COCO) and then train on a new dataset, the model only learns the classes in that new dataset; knowledge from prior training is not retained. If you want to “add” to the original COCO classes, merge your new data with a COCO-format dataset and train on the combined dataset in a single (complete) training run. This is how model training always works, not a limitation of the Platform.
  3. There is currently no native support for HEF export on the Ultralytics Platform or in the ultralytics Python library. All supported export formats are listed in the documentation.
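For question 3, the usual first step toward Hailo deployment is an ONNX export, which the ultralytics CLI supports directly. A minimal sketch (assuming the `ultralytics` package is installed and using the small `yolo11n.pt` checkpoint as a placeholder for your trained model):

```shell
# Export a trained YOLO model to ONNX.
# Replace yolo11n.pt with the .pt file downloaded from the Platform.
yolo export model=yolo11n.pt format=onnx

# The exported file lands next to the checkpoint, e.g. yolo11n.onnx
```

The same export is available from Python via `YOLO("yolo11n.pt").export(format="onnx")` if you prefer scripting it.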

Thanks

I believe there is a way to convert ONNX to HEF, but I would like some advice on how resource-intensive the process is.

ONNX exports are reasonably straightforward. The conversion process from ONNX to HEF is something you’d have to investigate with Hailo directly.
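For reference, Hailo’s Dataflow Compiler handles the ONNX-to-HEF conversion in three stages (parse, quantize/optimize, compile). The exact command names and flags below are assumptions based on my reading of Hailo’s tooling and should be verified against the current Dataflow Compiler documentation; the optimization stage also needs a set of calibration images from your dataset:

```shell
# Stage 1: parse the ONNX graph into Hailo's archive format (.har).
# --hw-arch selects the target chip; hailo8 is assumed here.
hailo parser onnx yolo11n.onnx --hw-arch hailo8

# Stage 2: quantize the model using calibration images (path is a placeholder).
hailo optimize yolo11n.har --calib-set-path ./calib_images

# Stage 3: compile the optimized archive into a deployable .hef file.
hailo compiler yolo11n_optimized.har
```

The quantization stage is the resource-intensive part; Hailo recommends running the Dataflow Compiler on an x86 machine (ideally with a GPU), not on the Raspberry Pi itself — the Pi only runs the finished HEF.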