Do you have any suggestions?
Does the camera support MJPG stream?
Sadly only RTSP and single-image download. With the RTSP stream downloading, I would have to skip some 299fps just to grab one frame, which seems wasteful.
Does this format work with cv2.VideoCapture()?
http://<username>:<password>@<address>:<httpport>/Streaming/Channels/1/picture
It doesn't, numpy complains it isn't an array.
I also removed Albumentations 2.0.8, as after installing it, some of cv2's functions were broken. Wonder if that was causing the random lockups, because it wasn't this way beforehand. I'll do more runs to test.
Did you do something like this?
import cv2

cap = cv2.VideoCapture("http://<username>:<password>@<address>:<httpport>/Streaming/Channels/1/picture")
frame = cap.read()[1]
I did not, thanks; that actually seems to work. Now to see if it freezes in 4 hours, because if it doesn't, then it is the hikvisionapi causing it. I need to resolve the lockup: the first core was pinned to 100% and the second was at 23%, the others at zero. RAM usage was 24%, and all the cores were around 60C, so it is basically hitting a bug somewhere.
Hello!
It sounds like you're encountering a system freeze when processing the detection results. The line detection_results = yolo_model(...) actually executes the model and stores the results. The subsequent loop for result in detection_results: is iterating over these already-computed results.
One thing I notice is that you're loading the model yolo_model = YOLO("Weights/yolo11s.pt") inside your while(i): loop. This means the model is reloaded with every frame, which is inefficient and could potentially lead to memory issues over extended periods. Try moving the model loading outside the loop, so it's initialized only once.
# Imports assumed from the rest of your script
from ultralytics import YOLO
import torch
from hikvisionapi import Client  # assuming the hikvisionapi package you mentioned

# Initialize YOLO model once
yolo_model = YOLO("Weights/yolo11s.pt")
torch.backends.nnpack.enabled = False  # If needed, place near model init

classids = (2, 5, 7)  # Car, Bus, Truck
cam = Client(url, usernm, passwd)

while(i):
    # ... (frame capture logic) ...
    if good:
        # ... (masking and scaling) ...

        # Perform object detection
        # Note: stream=True returns a generator. If you need all results before iterating for debugging,
        # you might process it into a list, but be mindful of memory for many detections.
        # For now, let's assume stream=True is for efficient processing.
        detection_results_generator = yolo_model(scaled_frame, conf=0.25, agnostic_nms=True, iou=0.7, imgsz=scaled_down, max_det=20, classes=classids, stream=True, save=False)

        print("Processing results...")
        vcount = 0
        try:
            for results_object in detection_results_generator:  # results_object is a Results object
                # You can print results_object directly to see its structure
                # print(results_object)
                for box in results_object.boxes:
                    # ... (your existing box processing logic) ...
                    class_name = class_labels[int(box.cls[0])]
                    confidence = box.conf[0]
                    print(f'{class_name}:{confidence:.2f}')
                    if class_name in ["car", "truck", "bus"]:  # Or use classids directly if they map correctly
                        vcount += 1
        except Exception as e:
            print(f"Error during results iteration: {e}")
            # Potentially add more robust error handling or logging here

# ... (rest of your code) ...
The freeze happening after 1-2 hours could point to a memory leak or system instability accumulating over time. Moving the model load out is a good first step.
Regarding iou and distant/dark vehicle detection: iou primarily helps with non-maximum suppression (NMS) for overlapping boxes of the same or different (if agnostic_nms=True) classes. Inconsistent detection of distant or dark vehicles is more often related to the model's confidence in those detections (influenced by image quality, lighting, object size, and training data) than to the iou setting for NMS. You might experiment with the conf threshold or look into image preprocessing if those detections are critical.
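To make that concrete, here is a minimal sketch of what that experimentation could look like. It reuses the variable names from the snippet above (scaled_frame, yolo_model, classids), and the CLAHE contrast step is just one possible preprocessing idea for dark scenes, not something your setup requires:

import cv2

# Possible preprocessing step: boost local contrast (CLAHE on the L channel)
# so dark or distant vehicles stand out a bit more before inference.
def enhance_contrast(frame):
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# A lower conf catches fainter detections, at the cost of more false positives.
enhanced = enhance_contrast(scaled_frame)
results = yolo_model(enhanced, conf=0.15, iou=0.7, classes=classids)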
Let me know if moving the model initialization helps with the stability!
Sorry, the model load is actually outside of the loop now. I moved it at one point and then undid a few edits; I guess I went too far and undid the move, as I wanted to move as much out of the loop as I could. Albumentations is what I put in, and after that, functions like cv2.waitKey were "not a function", so I'm thinking something damaged OpenCV. I'm starting fresh with another machine and a clean Linux install to see if that will get any farther; plus it is a newer CPU (not much newer), so it will help with some of the requirements, but I may need proper instructions to go CPU-only if it doesn't like it.
I set the iou to 0.7 and the conf is at 0.19 (because the vehicles were showing up at that level more).
As for the farther-distance detection, the vehicles are moving out of the center of the lens, so the camera warps the horizon enough that the vehicles look about 15 to 20 degrees more tilted than they actually were; I'm wondering if that is a factor in detecting them.
When I get the machine all set up for testing again, I'll know if it lasts longer than a few hours. If it does, then Albumentations was the cause.
Okay, I made some headway: got the machine set up, new Linux, new install, ran it, no complaints (so far).
Though the parked-vehicles area is seemingly annoying and rather odd. If a vehicle in the nearest lane passes horizontally across from the parked vehicles, the parked ones are also counted, but only if nothing is in front of them.
Also there is that reaaally tiny "car" in total black; I'm not sure how/why it saw that up there. I'm wondering, for the regions, how much tolerance is there for going outside them to "see an object"? As you can see, the laneway is on an angle (as are the cars, due to the camera).
I am also wondering if anyone has any insight as to where/how to put the camera in the future. Is higher up and aimed down at the cars best? I honestly have no idea what would improve this (aside from avoiding parked cars at a weird angle).
Hello! Thanks for the detailed report. The system freeze you're experiencing after a couple of hours is likely related to using stream=True in your yolo_model() call.
The stream=True argument is designed for memory-efficient processing of long videos or numerous images by creating a generator. For your use case of processing a single frame at a time within a while loop, this is not necessary and could be causing a resource leak over time, leading to the freeze. You can find more details on this in our configuration documentation on prediction settings.
If you remove stream=True, the yolo_model() call will complete the inference and return a list of results before your loop begins. This directly addresses your question about separating the calculation from the iteration and should resolve the freezing issue.
Your code would then look something like this:
# stream=True is removed
results_list = yolo_model(scaled_frame, conf=0.25, agnostic_nms=True, iou=0.7, imgsz=scaled_down, max_det=20, classes=classids)

# results_list is a list; for a single image, access the first element
if results_list:
    result = results_list[0]
    vcount = 0
    for box in result.boxes:
        # Your existing logic to process boxes
        class_name = class_labels[int(box.cls[0])]
        confidence = box.conf[0]
        print(f'{class_name}:{confidence:.2f}')
        # The model call already filters by class, so this check is redundant
        # if class_name in ["car", "truck", "bus"]:
        vcount += 1
This change should make your application more stable for long-running operations. Let us know if that helps!
Check out this guide on object counting in regions, and this FAQ section on using custom region shapes. With the custom regions, you can draw a region on the image where you want objects to be counted, and anything outside of that region wouldn't be counted. Even if your goal wasn't counting objects in that region, this would be a good reference to start from.
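As a rough illustration of the region idea (not the exact code from that guide), you could keep the count only for detections whose box centre falls inside a polygon drawn over the laneway; the polygon coordinates and variable names below are placeholders for your own setup:

import cv2
import numpy as np

# Placeholder counting region drawn over the laneway (pixel coordinates are made up)
region = np.array([[100, 400], [900, 380], [950, 700], [60, 720]], dtype=np.int32)

vcount = 0
for box in result.boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # box centre
    # Count a vehicle only if its centre lies inside the region polygon
    if cv2.pointPolygonTest(region, (cx, cy), False) >= 0:
        vcount += 1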