SAM2 Announced!

In case you missed it (ICYMI), Meta announced SAM2 yesterday! :rocket:

  • SAM 2 extends its predecessor by incorporating video segmentation capabilities, using a transformer architecture with streaming memory for real-time processing, and is trained on the new SA-V dataset. Paper, Project, Demo, Dataset, Blog

  • The development of SAM 2 was driven by the need for a foundation model capable of prompt-based visual segmentation across a broad spectrum of tasks and visual domains, enabled by a model-in-the-loop data engine that improves both the model and the data through user input (see the prompting sketch after this list).
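For anyone curious what prompt-based segmentation looks like in code, here is a minimal sketch using the Ultralytics SAM interface. The sam2_s.pt weight name, the placeholder image path, and the points/labels/bboxes prompt arguments follow the existing SAM API and are assumptions here; the final SAM 2 integration may differ.

from ultralytics import SAM

# Load a SAM 2 checkpoint (weight name assumed; check the Docs page for the released names)
model = SAM("sam2_s.pt")

# Point prompt: segment the object under pixel (x, y) = (900, 370); label 1 marks a positive point
results = model("path/to/image.jpg", points=[900, 370], labels=[1])

# Box prompt: segment the object inside [x1, y1, x2, y2] instead
results = model("path/to/image.jpg", bboxes=[100, 100, 400, 400])

results[0].show()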

Well, you know :ultralytics: Ultralytics is going to be working on integrating SAM2 into our Python library! Check out the new Docs page! Keep an eye out for more updates.
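Since the headline feature is video segmentation with streaming memory, running SAM 2 on a video through the library might look like the sketch below. This assumes the standard Ultralytics streaming predictor handles SAM 2 video sources and uses a placeholder file path, so treat it as a sketch rather than confirmed usage.

from ultralytics import SAM

model = SAM("sam2_s.pt")  # weight name assumed

# stream=True yields results one frame at a time instead of holding the whole video in memory
for frame_result in model("path/to/video.mp4", stream=True):
    print(frame_result.masks)  # per-frame segmentation masks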

What are you most excited about with SAM2?


Dang! The dev team is so fast :rocket: `ultralytics 8.2.70` Segment Anything Model 2 (SAM 2) by Laughing-q · Pull Request #14813 · ultralytics/ultralytics · GitHub

Keep an eye out for more developments from the Ultralytics Team on SAM2!


Quite the impressive result!

Code example

from ultralytics import ASSETS, SAM

# Load the SAM 2 small checkpoint
model = SAM("sam2_s.pt")

# Run unprompted (segment-everything) inference on the bundled bus image and display it
results = model.predict(ASSETS / "bus.jpg")
results[0].show()

A few SAM2 snippets added to the :ultralytics: Ultralytics Snippets Extension for :vscode: VS Code as well!


Wow, cool example!


Crazy slow compared to the YOLO models, though: inference takes about 60,000 ms on my M2 MacBook vs 85 ms for YOLOv8n-seg, so roughly 700 times slower.

image 1/1 /Users/glennjocher/PycharmProjects/ultralytics/ultralytics/assets/bus.jpg: 1024x1024 1 0, 1 1, 1 2, 1 3, 1 4, 1 5, 1 6, 1 7, 1 8, 1 9, 1 10, 1 11, 1 12, 1 13, 1 14, 1 15, 1 16, 1 17, 1 18, 60156.0ms
Speed: 10.7ms preprocess, 60156.0ms inference, 20.4ms postprocess per image at shape (1, 3, 1024, 1024)
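If you want to reproduce a comparison like this yourself, here is a rough timing sketch that runs unprompted (segment-everything) SAM 2 against YOLOv8n-seg on the same bundled image. The sam2_s.pt weight name is assumed, and absolute numbers will vary heavily with hardware and backend.

import time

from ultralytics import ASSETS, SAM, YOLO

sam_model = SAM("sam2_s.pt")  # weight name assumed
yolo_model = YOLO("yolov8n-seg.pt")

img = ASSETS / "bus.jpg"

# Warm-up runs so model loading and setup are not counted in the timings
sam_model(img)
yolo_model(img)

t0 = time.perf_counter()
sam_model(img)
sam_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
yolo_model(img)
yolo_ms = (time.perf_counter() - t0) * 1000

print(f"SAM 2: {sam_ms:.0f} ms | YOLOv8n-seg: {yolo_ms:.0f} ms | ratio ~{sam_ms / yolo_ms:.0f}x")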