I was trying to debug a segmentation model by adding hooks to its hidden layers to track activations.
However, because YOLO models always pass `True` for the `fuse` keyword argument in `autobackend.py`, the hooks end up being removed at inference time.
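For context, the kind of hook setup I mean is the standard PyTorch forward-hook pattern. This is a minimal sketch in plain PyTorch (not the Ultralytics API), using a toy `nn.Sequential` stand-in for the real model:

```python
import torch
import torch.nn as nn

# Toy stand-in for a segmentation backbone; the real model is a YOLO nn.Module.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)

activations = {}

def make_hook(name):
    # Store each layer's output under its module name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on every named submodule (skip the root, name "").
handles = [
    m.register_forward_hook(make_hook(n))
    for n, m in model.named_modules()
    if n
]

model.eval()
with torch.no_grad():
    model(torch.randn(1, 3, 32, 32))

print(sorted(activations))  # ['0', '1', '2']

# Clean up when done.
for h in handles:
    h.remove()
```

If fusing replaces the modules the hooks were registered on, those hooks simply never fire, which is the behavior described above.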
I have created a pull request with a short fix: it adds a new keyword argument when creating a YOLO model, plus a field called `self.fuse_layers` that controls whether batch normalization is fused with adjacent Conv2d blocks. I don't know how often pull requests are reviewed by the Ultralytics team, so I wanted to ask here whether this might be a reasonable addition to the library.
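To illustrate why fusing breaks the hooks: fusing folds a BatchNorm's scale and shift into the preceding Conv2d's weights, so the BN module disappears from the forward pass and any hook registered on it never runs. A minimal sketch using PyTorch's own fusion helper (not the Ultralytics code, and the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn
from torch.nn.utils.fusion import fuse_conv_bn_eval

conv = nn.Conv2d(3, 8, 3, padding=1).eval()
bn = nn.BatchNorm2d(8).eval()

fired = []
bn.register_forward_hook(lambda m, i, o: fired.append("bn"))

x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    y_ref = bn(conv(x))                  # hook on the BN fires here
    fused = fuse_conv_bn_eval(conv, bn)  # BN folded into the conv's weights
    y_fused = fused(x)                   # no BN module in the path -> no hook call

# The fused conv is numerically equivalent, but the hook target is gone.
assert torch.allclose(y_ref, y_fused, atol=1e-5)
print(fired)  # ['bn']
```

A `fuse_layers`-style flag would just skip this folding step so the original modules, and the hooks on them, survive inference.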
There are folks reviewing the PRs. I don't know much about the section of code you're proposing to change (meaning it's my ignorance), but it sounds like this might be a change with a very limited use case. I can tell you from experience that those usually aren't accepted, just because it's difficult to justify a change that, as far as we're aware, wouldn't benefit many other users. If it's a reasonably small fix and doesn't have larger implications, then it's much more likely to be accepted.
The changes are minimal and local to YOLO. They let users choose whether modules are fused during inference, which wasn't possible before. That makes it possible to debug activations, which has many uses, from interpretability to collecting data in general. It's a simple change, but a convenient one. So far all tests have passed.
I do not think so. Calling the YOLO model creates a new predictor during inference; I believe this happens every time. It automatically passes `True` for `fuse` when `setup_model` calls `AutoBackend`.
Excellent, glad to hear you found a working solution! We appreciate you taking the time to investigate this and for your initiative in creating a pull request. The community’s engagement is what makes this project thrive.
Please don’t hesitate to reach out if anything else comes up. Happy coding!