When predicting with a multi-stream source parameter, how can I obtain the unique identifier (or another identifier) of the stream each result came from?
Hello!
Great question! When working with multi-stream sources in Ultralytics YOLO, you can manage and identify each stream by using a .streams
text file. This file should contain one streaming URL per line, allowing you to run batched inference across multiple streams.
To obtain unique identifiers for each stream, you can modify your code to include a mapping between each stream URL and its corresponding identifier. Here’s a simple approach:
- Create a dictionary that maps each stream URL to a unique identifier.
- Use this dictionary to track results for each stream.
Here’s a quick example:
from ultralytics import YOLO
# Load a pretrained YOLO model
model = YOLO("yolo11n.pt")
# Define your streams and identifiers
streams = {
    "rtsp://example.com/media1.mp4": "Stream1",
    "rtsp://example.com/media2.mp4": "Stream2",
    # Add more streams as needed
}
# Run inference on the streams
results = model(list(streams.keys()), stream=True)
# Process results and map them to identifiers
for result, stream_url in zip(results, streams.keys()):
    stream_id = streams[stream_url]
    print(f"Results for {stream_id}:")
    result.show()  # Display results
This way, you can easily associate each result with its corresponding stream identifier. For more details on using multi-stream sources, check out the Ultralytics YOLO documentation.
Feel free to ask if you have more questions!
Oh I see, so easy. Thank you very much
Traceback (most recent call last):
  File "/data/workspace/Yolo/predict_stream.py", line 30, in <module>
    for result, stream_url in zip(results, streams.keys()):
  File "/home/aiusers/anaconda3/envs/pytorch_2.0/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/aiusers/anaconda3/envs/pytorch_2.0/lib/python3.9/site-packages/ultralytics/engine/predictor.py", line 226, in stream_inference
    self.setup_source(source if source is not None else self.args.source)
  File "/home/aiusers/anaconda3/envs/pytorch_2.0/lib/python3.9/site-packages/ultralytics/engine/predictor.py", line 198, in setup_source
    self.dataset = load_inference_source(
  File "/home/aiusers/anaconda3/envs/pytorch_2.0/lib/python3.9/site-packages/ultralytics/data/build.py", line 187, in load_inference_source
    source, stream, screenshot, from_img, in_memory, tensor = check_source(source)
  File "/home/aiusers/anaconda3/envs/pytorch_2.0/lib/python3.9/site-packages/ultralytics/data/build.py", line 162, in check_source
    source = autocast_list(source)  # convert all list elements to PIL or np arrays
  File "/home/aiusers/anaconda3/envs/pytorch_2.0/lib/python3.9/site-packages/ultralytics/data/loaders.py", line 514, in autocast_list
    files.append(Image.open(requests.get(im, stream=True).raw if str(im).startswith("http") else im))
  File "/home/aiusers/anaconda3/envs/pytorch_2.0/lib/python3.9/site-packages/PIL/Image.py", line 3247, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'rtsp://xxx:xxx@1256@192.168.100.102:8554/test'
So the loader determines that the source is not a stream.
The output results should be in the order of the streams, so the first stream's results would be in results[0], the second stream's in results[1], and so on.
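Assuming that ordering holds, i.e. the generator yields one result per stream per batch, round-robin in source order (an assumption about the yield order, not confirmed above), the mapping from a flat result counter back to a stream identifier is just a modulo. A minimal sketch with hypothetical identifiers:

```python
# Map a flat result index to a per-stream identifier, assuming the
# predictor yields one result per stream per batch, in source order.
stream_ids = ["Stream1", "Stream2", "Stream3"]  # hypothetical identifiers

def id_for_result(index: int, ids: list) -> str:
    """Return the stream identifier for the index-th yielded result."""
    return ids[index % len(ids)]

# First batch: results 0..2 map to the three streams in order;
# the pattern then repeats for every following batch.
print([id_for_result(i, stream_ids) for i in range(6)])
# prints ['Stream1', 'Stream2', 'Stream3', 'Stream1', 'Stream2', 'Stream3']
```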
I know how to deal with it: save the streams variable to a .streams file. The error happens because load_inference_source and check_source fail to handle a list[str] where each str is a stream URL.
streams = {
    'rtsp://192.168.100.102:8554/test': 'channel_1001',
    'rtsp://192.168.100.103:8554/test': 'channel_1002',
}

with open('list.streams', 'w') as f:
    for url in streams.keys():
        f.write(url + '\n')
results = model('list.streams', stream=True, max_det=50, vid_stride=6, imgsz=640, device=3)
# Process results and map them to identifiers
for result, stream_url in zip(results, streams.keys()):
    stream_id = streams[stream_url]
    print(f"Results for {stream_id}:")
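One caveat: zip stops at the shorter iterable, so the loop above only consumes the first len(streams) results and then exits. If the generator keeps yielding one result per stream per batch, round-robin (again an assumption about the yield order), you can cycle the identifiers instead. A sketch with a stand-in generator in place of the model call, so it runs without YOLO or live RTSP streams:

```python
from itertools import cycle, islice

streams = {
    'rtsp://192.168.100.102:8554/test': 'channel_1001',
    'rtsp://192.168.100.103:8554/test': 'channel_1002',
}

def fake_results():
    """Stand-in for model('list.streams', stream=True): yields one
    result per stream per batch, round-robin, indefinitely."""
    frame = 0
    while True:
        for url in streams:
            yield f"result(frame={frame}, source={url})"
        frame += 1

# Pair every yielded result with its channel id, batch after batch.
paired = zip(fake_results(), cycle(streams.values()))
for result, channel in islice(paired, 4):
    print(channel, '->', result)
```

With the real generator, replacing fake_results() with the model call keeps the loop running for as long as the streams produce frames, instead of stopping after one pass over the dictionary.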