Training an instance segmentation model WITHOUT cropping [YOLOv11]

Is it possible to modify Ultralytics' training script so that masks are not cropped to the bounding box when the loss function checks mask accuracy? In other words, can we compare the final mask (the prototype masks combined with the predicted coefficients) directly against the ground-truth mask? I've noticed that outside the crop region the raw mask picks up a lot of noise (segmenting things it shouldn't) and relies heavily on the bounding box to hide those areas. If the model were trained to suppress everything outside what the box would crop, I believe the raw masks would be more accurate.
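
For reference, the cropping happens inside the segmentation loss rather than in the data pipeline. In recent Ultralytics releases the relevant method (`v8SegmentationLoss.single_mask_loss` in `ultralytics/utils/loss.py`) looks roughly like this (paraphrased; exact layout may differ between versions):

```python
import torch
import torch.nn.functional as F
from ultralytics.utils.ops import crop_mask

@staticmethod
def single_mask_loss(gt_mask, pred, proto, xyxy, area):
    # Assemble each instance mask as a linear combination of the prototype masks.
    pred_mask = torch.einsum("in,nhw->ihw", pred, proto)
    loss = F.binary_cross_entropy_with_logits(pred_mask, gt_mask, reduction="none")
    # The per-pixel loss is zeroed outside each ground-truth box before averaging,
    # so the model is never penalized for activations outside the box.
    return (crop_mask(loss, xyxy).mean(dim=(1, 2)) / area).sum()
```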

You can modify it and check whether it improves the results. I assume the cropping is there to reduce GPU memory usage.
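
If you want to try it, a minimal sketch would be to subclass the loss, drop the `crop_mask` call, and swap it in through a custom trainer. This assumes the internals of recent Ultralytics versions (`v8SegmentationLoss`, `SegmentationTrainer`, and the lazily-called `init_criterion`); those can shift between releases, so treat this as a starting point, not a drop-in patch:

```python
import torch
import torch.nn.functional as F

from ultralytics.models.yolo.segment import SegmentationTrainer
from ultralytics.utils.loss import v8SegmentationLoss


class UncroppedSegLoss(v8SegmentationLoss):
    """Segmentation loss scoring the full assembled mask, not the box-cropped one."""

    @staticmethod
    def single_mask_loss(gt_mask, pred, proto, xyxy, area):
        # Same mask assembly as the stock loss: coefficients x prototypes.
        pred_mask = torch.einsum("in,nhw->ihw", pred, proto)
        loss = F.binary_cross_entropy_with_logits(pred_mask, gt_mask, reduction="none")
        # Average over the whole mask so pixels outside the box are penalized too.
        # The original `/ area` normalization is dropped since the loss is no
        # longer restricted to the box region (xyxy and area go unused).
        return loss.mean(dim=(1, 2)).sum()


class UncroppedSegTrainer(SegmentationTrainer):
    def get_model(self, cfg=None, weights=None, verbose=True):
        model = super().get_model(cfg=cfg, weights=weights, verbose=verbose)
        # The criterion is built lazily via init_criterion(); point it at our loss.
        model.init_criterion = lambda: UncroppedSegLoss(model)
        return model


if __name__ == "__main__":
    trainer = UncroppedSegTrainer(
        overrides={"model": "yolo11n-seg.pt", "data": "coco8-seg.yaml", "epochs": 10}
    )
    trainer.train()
```

One thing to watch: without the crop, every instance's loss covers the full mask canvas, so small objects contribute the same number of pixels as large ones and the overall loss scale changes. You may need to retune the mask loss gain and compare mask mAP against a cropped baseline before concluding it helps.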