Windows “bring your own”

I’m a newb. Like I’m going to google most of the words you use. You’ve been warned.

Fell into Ultralytics HUB from some random link. Previously I’d been using MS Custom Vision AI, but the random hoops to upload datasets annoyed me out of all proportion. The accuracy(?) is better here too.

I’m pushing up datasets and training with “bring your own” on a Windows 10 computer (3080 Ti / 2080 Super). When I start the training, there is about two hours of this repeating before it ever seems to get to the actual training:

Transferred 349/349 items from epoch-39.pt
AMP: checks passed
AutoBatch: Computing optimal batch size for --imgsz 640
AutoBatch: CUDA:0 (NVIDIA GeForce RTX 3080 Ti) 12.00G total, 0.16G reserved, 0.01G allocated, 11.82G free
      Params      GFLOPs  GPU_mem (GB)  forward (ms) backward (ms)                   input                  output
     1784212       4.284         0.365            29         13.33        (1, 3, 640, 640)                    list
     1784212       8.569         0.403         18.67         12.33        (2, 3, 640, 640)                    list
     1784212       17.14         0.646         17.67            18        (4, 3, 640, 640)                    list
     1784212       34.28         0.956         83.33         15.33        (8, 3, 640, 640)                    list
     1784212       68.55         1.711         303.3         80.67       (16, 3, 640, 640)                    list
AutoBatch: Using batch-size 101 for CUDA:0 9.58G/12.00G (80%)
optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0007890625), 60 bias
Resuming training from epoch-39.pt from epoch 40 to 100 total epochs
WARNING  DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.
See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.
train: Scanning C:\code\textprepocesstestpython\datasets\WarzoneSummarySymbols (7)\WarzoneSummarySymbols\labels\train.c
train: 7.7GB RAM required, 1.0/63.9GB available, not caching images
Ultralytics:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Any info on whether this is the correct behavior, or if there is a way to speed it up, would be appreciated.