When fine-tuning YOLOE (either full fine-tuning or linear probing), does it initialize with its existing embedding-to-detection knowledge of the classes in my dataset?
For example, if I am fine-tuning a model to detect “person” and “flame”, does it start with its existing knowledge of those classes?
In essence, I’m trying to understand whether the model starts with a “head start” on the classes it already knows.
Simple code example:

```python
from ultralytics import YOLOE
from ultralytics.models.yolo.yoloe import YOLOEPESegTrainer

# Load a pretrained YOLOE segmentation checkpoint
model = YOLOE("yoloe-11s-seg.pt")

# Fine-tune on the custom dataset using the prompt-embedding trainer
results = model.train(
    data="flame_and_person.yaml",
    epochs=100,
    trainer=YOLOEPESegTrainer,
)
```
If fine-tuning YOLOE does leverage prior knowledge, is there any downside to fine-tuning YOLOE11 instead of YOLO11, considering that runtime performance of the exported model should be identical?
Note: I would assume “catastrophic forgetting” occurs for all other classes, just as with a closed-set object detector such as YOLO11.
Yes. The text embeddings for the classes in your dataset are pre-computed and used to initialize the classification head, so training starts with a head start on the classes the model already knows.
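You can see the same mechanism through the public API: `get_text_pe()` computes text prompt embeddings for a list of class names, and `set_classes()` installs them as the head’s class weights. A minimal sketch of the idea (the image path here is just a placeholder):

```python
from ultralytics import YOLOE

model = YOLOE("yoloe-11s-seg.pt")

# Compute text prompt embeddings for the target classes and set them as
# the classification head's weights; fine-tuning then refines these
# rather than starting from random initialization.
names = ["person", "flame"]
model.set_classes(names, model.get_text_pe(names))

# The model can even predict these classes zero-shot, before any training
results = model.predict("image.jpg")
```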
There are no downsides other than slower training compared to regular YOLO. YOLOE will likely fine-tune better because it was pretrained on a much larger dataset.
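On the runtime point: once fine-tuned with a fixed class set, the model exports like any other Ultralytics model, so exported inference should perform the same as a YOLO11 export of the same size. A rough sketch (the checkpoint path is the default training output location and may differ for your run):

```python
from ultralytics import YOLOE

# Load the fine-tuned checkpoint; the exported graph has a fixed
# class set, just like a regular YOLO11 export.
model = YOLOE("runs/segment/train/weights/best.pt")
model.export(format="onnx")
```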