I’m using yolo11n-pose to identify keypoints in a video of a person’s face. It’s flagging keypoints for the nose, eyes, and ears, as shown in the image below:
To get at the keypoints in question, I indexed into results.keypoints.xy[0][0:3] and got the subset I was expecting, but I can't figure out how to apply the same subsetting to the object that gets passed to the plotting utility.
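For context, here is roughly what that indexing looks like (a minimal sketch; the video path is just a placeholder, and I'm indexing the first Results object explicitly since predict returns a list):

from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")
results = model.predict("video.mp4")  # placeholder path for my video

# keypoints.xy holds one (num_keypoints, 2) tensor per detected person
face_xy = results[0].keypoints.xy[0][0:3]  # first person, first three keypoints
print(face_xy)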
There isn't a direct way to accomplish this with the .plot() method. You could take a somewhat roundabout route and draw the keypoints manually with the Annotator utility:
from copy import deepcopy
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
# Using all default config values
line_width = None
font = "Arial.ttf"
font_size = None
kpt_radius = 5
kpt_line = False
color_mode = "instance"
model = YOLO("yolo11n-pose.pt")
src = "" # update as needed
results = model.predict(src)
for result in results:
    annotator = Annotator(
        deepcopy(result.orig_img),  # could use frame if iterating
        line_width,
        font_size,
        font,
        example=model.names,
    )
    # Draw only the keypoints you want for each detected person
    for i, k in enumerate(reversed(result.keypoints.data)):
        annotator.kpts(
            k[:3],  # might have to adjust the slice as needed
            result.orig_shape,
            radius=kpt_radius,
            kpt_line=kpt_line,
            kpt_color=colors(i, True) if color_mode == "instance" else None,
        )
    annotator.show(result.path)
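A couple of notes on adapting this: in the COCO keypoint layout these pose models use, the face points are indices 0-4 (nose, left eye, right eye, left ear, right ear), so k[:5] would cover all of them. And if you would rather get the annotated frame back as an array (for example, to write it out to a file or a video) instead of opening a window with show(), the Annotator can return the drawn image. A minimal sketch, assuming OpenCV is installed and "annotated.jpg" is a placeholder output path:

import cv2

annotated = annotator.result()  # the drawn frame as a numpy array (BGR)
cv2.imwrite("annotated.jpg", annotated)  # placeholder output path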