Update SAM docs page (#3672)

parent 1ae7f84394
commit b239246452
@@ -30,13 +30,30 @@ For an in-depth look at the Segment Anything Model and the SA-1B dataset, please
 
 The Segment Anything Model can be employed for a multitude of downstream tasks that go beyond its training data. This includes edge detection, object proposal generation, instance segmentation, and preliminary text-to-mask prediction. With prompt engineering, SAM can swiftly adapt to new tasks and data distributions in a zero-shot manner, establishing it as a versatile and potent tool for all your image segmentation needs.
 
-```python
-from ultralytics import SAM
-
-model = SAM('sam_b.pt')
-model.info() # display model information
-model.predict('path/to/image.jpg') # predict
-```
-
-Device is determined automatically. If a GPU is available then it will be used, otherwise inference will run on CPU.
+!!! example "SAM prediction example"
+
+    === "Python"
+
+        ```python
+        from ultralytics import SAM
+
+        # Load a model
+        model = SAM('sam_b.pt')
+
+        # Display model information (optional)
+        model.info()
+
+        # Run inference with the model
+        model('path/to/image.jpg')
+        ```
+
+    === "CLI"
+
+        ```bash
+        # Run inference with a SAM model
+        yolo predict model=sam_b.pt source=path/to/image.jpg
+        ```
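The context paragraph above notes that SAM adapts to new tasks and data distributions through prompting. As an illustration only, here is a minimal sketch of prompted inference with the Ultralytics `SAM` wrapper, assuming the predictor accepts `bboxes` and `points`/`labels` keyword arguments (support for these prompt arguments varies by Ultralytics version):

```python
from ultralytics import SAM

model = SAM('sam_b.pt')

# Box prompt: segment whatever falls inside [x1, y1, x2, y2]
model('path/to/image.jpg', bboxes=[100, 100, 400, 400])

# Point prompt: a single foreground click at (x, y)
model('path/to/image.jpg', points=[250, 250], labels=[1])
```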
 
 ## Available Models and Supported Tasks
@@ -53,6 +70,33 @@ model.predict('path/to/image.jpg') # predict
 | Validation | :x: |
 | Training | :x: |
 
+## SAM comparison vs YOLOv8
+
+Here we compare Meta's smallest SAM model, SAM-b, with Ultralytics' smallest segmentation model, [YOLOv8n-seg](../tasks/segment):
+
+| Model                                       | Size                       | Parameters             | Speed (CPU)             |
+|---------------------------------------------|----------------------------|------------------------|-------------------------|
+| Meta's SAM-b                                | 358 MB                     | 94.7 M                 | 51096 ms                |
+| Ultralytics [YOLOv8n-seg](../tasks/segment) | **6.7 MB** (53.4x smaller) | **3.4 M** (27.9x less) | **59 ms** (866x faster) |
+
+This comparison shows the order-of-magnitude differences in model size and speed. While SAM offers unique capabilities for automatic segmentation, it is not a direct competitor to YOLOv8 segment models, which are smaller, faster, and more efficient because they are dedicated to more targeted use cases.
+
+To reproduce this test:
+
+```python
+from ultralytics import SAM, YOLO
+
+# Profile SAM-b
+model = SAM('sam_b.pt')
+model.info()
+model('ultralytics/assets')
+
+# Profile YOLOv8n-seg
+model = YOLO('yolov8n-seg.pt')
+model.info()
+model('ultralytics/assets')
+```
+
 ## Auto-Annotation: A Quick Path to Segmentation Datasets
 
 Auto-annotation is a key feature of SAM, allowing users to generate a [segmentation dataset](https://docs.ultralytics.com/datasets/segment) using a pre-trained detection model. This feature enables rapid and accurate annotation of a large number of images, bypassing the need for time-consuming manual labeling.
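The auto-annotation workflow described above pairs a pre-trained detector with SAM: the detector proposes bounding boxes, and SAM turns those boxes into masks. A minimal sketch, assuming the `auto_annotate` helper shipped with Ultralytics (its import path has moved between releases):

```python
from ultralytics.data.annotator import auto_annotate  # import path varies across Ultralytics releases

# Detect objects with a YOLOv8 model, then prompt SAM with the resulting boxes
# to write YOLO-format segmentation labels for every image in the folder
auto_annotate(data='path/to/images', det_model='yolov8x.pt', sam_model='sam_b.pt')
```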
@@ -1,3 +1,8 @@
+---
+description: Learn about Ultralytics YOLO's MaskDecoder, Transformer architecture, MLP, mask prediction, and quality prediction.
+keywords: Ultralytics YOLO, MaskDecoder, Transformer architecture, mask prediction, image embeddings, prompt embeddings, multi-mask output, MLP, mask quality prediction
+---
+
 ## MaskDecoder
 ---
 ### ::: ultralytics.vit.sam.modules.decoders.MaskDecoder
@@ -23,6 +23,11 @@ keywords: Ultralytics YOLO, downloads, trained models, datasets, weights, deep l
 ### ::: ultralytics.yolo.utils.downloads.safe_download
 <br><br>
 
+## get_github_assets
+---
+### ::: ultralytics.yolo.utils.downloads.get_github_assets
+<br><br>
+
 ## attempt_download_asset
 ---
 ### ::: ultralytics.yolo.utils.downloads.attempt_download_asset