Mirror of https://github.com/THU-MIG/yolov10.git (synced 2025-07-17 04:35:39 +08:00)

Commit c192502a26: conflicts resolved.

README.md (108 changed lines)

Official PyTorch implementation of **YOLOv10**.

[YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458).\
Ao Wang, Hui Chen, Lihao Liu, Kai Chen, Zijia Lin, Jungong Han, and Guiguang Ding\
[](https://arxiv.org/abs/2405.14458) <a href="https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov10-object-detection-on-custom-dataset.ipynb#scrollTo=SaKTSzSWnG7s"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> [](https://huggingface.co/collections/jameslahm/yolov10-665b0d90b0b5bb85129460c2) [](https://huggingface.co/spaces/jameslahm/YOLOv10) [](https://huggingface.co/spaces/kadirnar/Yolov10) [](https://huggingface.co/spaces/Xenova/yolov10-web)

<details>
<summary>Abstract</summary>
Over the past years, YOLOs have emerged as the predominant paradigm in the field of real-time object detection owing to their effective balance between computational cost and detection performance. Researchers have explored the architectural designs, optimization objectives, data augmentation strategies, and others for YOLOs, achieving notable progress. However, the reliance on the non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs and adversely impacts the inference latency. Besides, the design of various components in YOLOs lacks the comprehensive and thorough inspection, resulting in noticeable computational redundancy and limiting the model's capability. It renders the suboptimal efficiency, along with considerable potential for performance improvements. In this work, we aim to further advance the performance-efficiency boundary of YOLOs from both the post-processing and the model architecture. To this end, we first present the consistent dual assignments for NMS-free training of YOLOs, which brings the competitive performance and low inference latency simultaneously. Moreover, we introduce the holistic efficiency-accuracy driven model design strategy for YOLOs. We comprehensively optimize various components of YOLOs from both the efficiency and accuracy perspectives, which greatly reduces the computational overhead and enhances the capability. The outcome of our effort is a new generation of YOLO series for real-time end-to-end object detection, dubbed YOLOv10. Extensive experiments show that YOLOv10 achieves the state-of-the-art performance and efficiency across various model scales. For example, our YOLOv10-S is 1.8$\times$ faster than RT-DETR-R18 under the similar AP on COCO, meanwhile enjoying 2.8$\times$ smaller number of parameters and FLOPs. Compared with YOLOv9-C, YOLOv10-B has 46\% less latency and 25\% fewer parameters for the same performance.
</details>

## Notes
- 2024/05/31: Please use the [exported format](https://github.com/THU-MIG/yolov10?tab=readme-ov-file#export) for benchmarking. In a non-exported format, e.g., PyTorch, the speed of YOLOv10 is biased because the unnecessary `cv2` and `cv3` operations in `v10Detect` are executed during inference (see the sketch after this list).
- 2024/05/30: We provide [some clarifications and suggestions](https://github.com/THU-MIG/yolov10/issues/136) for detecting smaller objects or objects in the distance with YOLOv10. Thanks to [SkalskiP](https://github.com/SkalskiP)!
- 2024/05/27: We have updated the [checkpoints](https://huggingface.co/collections/jameslahm/yolov10-665b0d90b0b5bb85129460c2) with class names, for ease of use.
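A minimal sketch of that benchmarking flow, assuming the repo's ultralytics-style Python API (the ONNX filename and test image are illustrative):

```python
from ultralytics import YOLOv10

# Export first, so the unused cv2/cv3 branches are absent from the executed graph.
model = YOLOv10('yolov10n.pt')
model.export(format='onnx')               # writes yolov10n.onnx

# Benchmark the exported model, not the .pt checkpoint.
onnx_model = YOLOv10('yolov10n.onnx')
onnx_model.predict('ultralytics/assets/bus.jpg')
```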

## UPDATES 🔥
- 2024/06/01: Thanks to [ErlanggaYudiPradana](https://github.com/rlggyp) for the integration with [C++ | OpenVINO | OpenCV](https://github.com/rlggyp/YOLOv10-OpenVINO-CPP-Inference)!
- 2024/06/01: Thanks to [NielsRogge](https://github.com/NielsRogge) and [AK](https://x.com/_akhaliq) for hosting the models on the HuggingFace Hub!
- 2024/05/31: Thanks to [youjiang](https://github.com/yuyoujiang) for building the [yolov10-jetson](https://github.com/Seeed-Projects/jetson-examples/blob/main/reComputer/scripts/yolov10/README.md) docker image!
- 2024/05/31: Thanks to [mohamedsamirx](https://github.com/mohamedsamirx) for the integration with [BoTSORT, DeepOCSORT, OCSORT, HybridSORT, ByteTrack, and StrongSORT using the BoxMOT library](https://colab.research.google.com/drive/1-QV2TNfqaMsh14w5VxieEyanugVBG14V?usp=sharing)!
- 2024/05/31: Thanks to [kaylorchen](https://github.com/kaylorchen) for the integration with [rk3588](https://github.com/kaylorchen/rk3588-yolo-demo)!
- 2024/05/30: Thanks to [eaidova](https://github.com/eaidova) for the integration with [OpenVINO™](https://github.com/openvinotoolkit/openvino_notebooks/blob/0ba3c0211bcd49aa860369feddffdf7273a73c64/notebooks/yolov10-optimization/yolov10-optimization.ipynb)!
- 2024/05/29: Add the gradio demo for running the models locally. Thanks to [AK](https://x.com/_akhaliq)!
- 2024/05/27: Thanks to [sujanshresstha](https://github.com/sujanshresstha) for the integration with [DeepSORT](https://github.com/sujanshresstha/YOLOv10_DeepSORT.git)!
- 2024/05/26: Thanks to [CVHub520](https://github.com/CVHub520) for the integration into [X-AnyLabeling](https://github.com/CVHub520/X-AnyLabeling)!
- 2024/05/26: Thanks to [DanielSarmiento04](https://github.com/DanielSarmiento04) for the integration with [C++ | ONNX | OpenCV](https://github.com/DanielSarmiento04/yolov10cpp)!
- 2024/05/25: Add the [Transformers.js demo](https://huggingface.co/spaces/Xenova/yolov10-web) and ONNX weights (yolov10[n](https://huggingface.co/onnx-community/yolov10n)/[s](https://huggingface.co/onnx-community/yolov10s)/[m](https://huggingface.co/onnx-community/yolov10m)/[b](https://huggingface.co/onnx-community/yolov10b)/[l](https://huggingface.co/onnx-community/yolov10l)/[x](https://huggingface.co/onnx-community/yolov10x)). Thanks to [xenova](https://github.com/xenova)!

## Performance
COCO

| Model | Test Size | #Params | FLOPs | AP<sup>val</sup> | Latency |
|:---------------|:----:|:---:|:--:|:--:|:--:|
| [YOLOv10-N](https://huggingface.co/jameslahm/yolov10n) | 640 | 2.3M | 6.7G | 38.5% | 1.84ms |
| [YOLOv10-S](https://huggingface.co/jameslahm/yolov10s) | 640 | 7.2M | 21.6G | 46.3% | 2.49ms |
| [YOLOv10-M](https://huggingface.co/jameslahm/yolov10m) | 640 | 15.4M | 59.1G | 51.1% | 4.74ms |
| [YOLOv10-B](https://huggingface.co/jameslahm/yolov10b) | 640 | 19.1M | 92.0G | 52.5% | 5.74ms |
| [YOLOv10-L](https://huggingface.co/jameslahm/yolov10l) | 640 | 24.4M | 120.3G | 53.2% | 7.28ms |
| [YOLOv10-X](https://huggingface.co/jameslahm/yolov10x) | 640 | 29.5M | 160.4G | 54.4% | 10.70ms |

## Installation
`conda` virtual environment is recommended.
```
conda create -n yolov10 python=3.9
conda activate yolov10
pip install -r requirements.txt
pip install -e .
```
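A quick sanity check of the editable install, as a sketch (builds the smallest model from its bundled config; no weights are downloaded):

```python
from ultralytics import YOLOv10

model = YOLOv10('yolov10n.yaml')   # resolved from the package's model configs
print(type(model).__name__)        # -> YOLOv10
```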

## Demo
```
python app.py
# Please visit http://127.0.0.1:7860
```

## Validation
[`yolov10n`](https://huggingface.co/jameslahm/yolov10n) [`yolov10s`](https://huggingface.co/jameslahm/yolov10s) [`yolov10m`](https://huggingface.co/jameslahm/yolov10m) [`yolov10b`](https://huggingface.co/jameslahm/yolov10b) [`yolov10l`](https://huggingface.co/jameslahm/yolov10l) [`yolov10x`](https://huggingface.co/jameslahm/yolov10x)
```
wget https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10s.pt
yolo val model=yolov10n/s/m/b/l/x.pt data=coco.yaml batch=256
```

Or

```python
from ultralytics import YOLOv10

model = YOLOv10('yolov10{n/s/m/b/l/x}.pt')
# or
model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')

model.val(data='coco.yaml', batch=256)
```

## Training
```
yolo detect train data=coco.yaml model=yolov10n/s/m/b/l/x.yaml epochs=500 batch=256 imgsz=640 device=0,1,2,3,4,5,6,7
```

Or

```python
from ultralytics import YOLOv10

model = YOLOv10()
# If you want to finetune the model with pretrained weights, you could load the
# pretrained weights like below
# model = YOLOv10('yolov10{n/s/m/b/l/x}.pt')
# or
# model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')

model.train(data='coco.yaml', epochs=500, batch=256, imgsz=640)
```

## Push to the 🤗 hub

Optionally, you can push your fine-tuned model to the [Hugging Face hub](https://huggingface.co/) as a public or private model:
```python
# let's say you have fine-tuned a model for crop detection
model.push_to_hub("<your-hf-username-or-organization>/yolov10-finetuned-crop-detection")

# you can also pass `private=True` if you don't want everyone to see your model
model.push_to_hub("<your-hf-username-or-organization>/yolov10-finetuned-crop-detection", private=True)
```
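As a hedged follow-up, the pushed checkpoint can be loaded back with the same `from_pretrained` call used throughout this README (the repo id is the placeholder from above):

```python
model = YOLOv10.from_pretrained("<your-hf-username-or-organization>/yolov10-finetuned-crop-detection")
```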

## Prediction

Note that a smaller confidence threshold can be set to detect smaller objects or objects in the distance. Please refer to [here](https://github.com/THU-MIG/yolov10/issues/136) for details; see the sketch after the code blocks below.
```
yolo predict model=yolov10n/s/m/b/l/x.pt
```

Or

```python
from ultralytics import YOLOv10

model = YOLOv10('yolov10{n/s/m/b/l/x}.pt')
# or
model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')

model.predict()
```
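Tying back to the note above about smaller or distant objects, a hedged sketch; the `conf` and `imgsz` values are illustrative, not the linked issue's exact recommendations:

```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10s')
# Lower the confidence threshold (default 0.25) so weaker detections survive.
results = model.predict(source='ultralytics/assets/bus.jpg', conf=0.05, imgsz=1280)
print(len(results[0].boxes))
```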

## Export
```
# End-to-End ONNX
yolo export model=yolov10n/s/m/b/l/x.pt format=onnx opset=13 simplify
# Predict with ONNX
yolo predict model=yolov10n/s/m/b/l/x.onnx

# End-to-End TensorRT
yolo export model=yolov10n/s/m/b/l/x.pt format=engine half=True simplify opset=13 workspace=16
# or
trtexec --onnx=yolov10n/s/m/b/l/x.onnx --saveEngine=yolov10n/s/m/b/l/x.engine --fp16
# Predict with TensorRT
yolo predict model=yolov10n/s/m/b/l/x.engine
```

Or

```python
from ultralytics import YOLOv10

model = YOLOv10('yolov10{n/s/m/b/l/x}.pt')
# or
model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')

model.export(...)
```
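The `...` above is left open by the README; one concrete possibility, assuming the same arguments as the CLI examples, is:

```python
model.export(format='onnx', opset=13, simplify=True)  # end-to-end ONNX
# or, mirroring the TensorRT CLI line:
model.export(format='engine', half=True, simplify=True, opset=13, workspace=16)
```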

## Acknowledgement

The code base is built with [ultralytics](https://github.com/ultralytics/ultralytics) and [RT-DETR](https://github.com/lyuwenyu/RT-DETR).

Thanks for the great implementations!

If our code or models help your work, please cite our paper:
```BibTeX
@article{wang2024yolov10,
  title={YOLOv10: Real-Time End-to-End Object Detection},
  author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang},
  journal={arXiv preprint arXiv:2405.14458},
  year={2024}
}
```

app.py (32 changed lines)

```diff
@@ -1,14 +1,11 @@
-# Acknowledgement: https://huggingface.co/spaces/kadirnar/Yolov10/blob/main/app.py
-# Thanks to @kadirnar
-
 import gradio as gr
-from ultralytics import YOLOv10
 import cv2
 import tempfile
+from ultralytics import YOLOv10
 
 
-def yolov10_inference(image, video, model_path, image_size, conf_threshold):
-    model = YOLOv10(model_path)
+def yolov10_inference(image, video, model_id, image_size, conf_threshold):
+    model = YOLOv10.from_pretrained(f'jameslahm/{model_id}')
     if image:
         results = model.predict(source=image, imgsz=image_size, conf=conf_threshold)
         annotated_image = results[0].plot()
@@ -62,14 +59,14 @@ def app():
         model_id = gr.Dropdown(
             label="Model",
             choices=[
-                "yolov10n.pt",
-                "yolov10s.pt",
-                "yolov10m.pt",
-                "yolov10b.pt",
-                "yolov10l.pt",
-                "yolov10x.pt",
+                "yolov10n",
+                "yolov10s",
+                "yolov10m",
+                "yolov10b",
+                "yolov10l",
+                "yolov10x",
             ],
-            value="yolov10s.pt",
+            value="yolov10m",
         )
         image_size = gr.Slider(
             label="Image Size",
@@ -82,7 +79,7 @@ def app():
             label="Confidence Threshold",
             minimum=0.0,
             maximum=1.0,
-            step=0.1,
+            step=0.05,
             value=0.25,
         )
         yolov10_infer = gr.Button(value="Detect Objects")
@@ -111,6 +108,7 @@ def app():
         else:
             return yolov10_inference(None, video, model_id, image_size, conf_threshold)
+
 
         yolov10_infer.click(
             fn=run_inference,
             inputs=[image, video, model_id, image_size, conf_threshold, input_type],
@@ -121,13 +119,13 @@ def app():
             examples=[
                 [
                     "ultralytics/assets/bus.jpg",
-                    "yolov10s.pt",
+                    "yolov10s",
                     640,
                     0.25,
                 ],
                 [
                     "ultralytics/assets/zidane.jpg",
-                    "yolov10s.pt",
+                    "yolov10s",
                     640,
                     0.25,
                 ],
@@ -140,7 +138,7 @@ def app():
                 conf_threshold,
             ],
             outputs=[output_image],
-            cache_examples=True,
+            cache_examples='lazy',
         )
 
 gradio_app = gr.Blocks()
```
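The hunk ends before the launch code; a typical Blocks launch for this layout, as an assumption (this part is not shown in the diff), would be:

```python
with gradio_app:
    app()                    # build the UI inside the Blocks context

if __name__ == '__main__':
    gradio_app.launch()      # serves on http://127.0.0.1:7860 by default
```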

flops.py (new file, 8 lines)

```python
from ultralytics import YOLOv10

model = YOLOv10('yolov10n.yaml')
# Put the detection head on its export/inference path.
model.model.model[-1].export = True
model.model.model[-1].format = 'onnx'
# Drop the one-to-many training branches so they are not counted.
del model.model.model[-1].cv2
del model.model.model[-1].cv3
model.fuse()
```
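The script ends at `fuse()`; presumably the count is then read from ultralytics' model summary. A hedged final line (assuming ultralytics' `model.info()`, which reports layer, parameter, and GFLOPs counts):

```python
model.info()  # prints layer / parameter / GFLOPs counts for the stripped model
```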

requirements.txt (2 lines added)

```diff
@@ -11,3 +11,5 @@ gradio==4.31.5
 opencv-python==4.9.0.80
 psutil==5.9.8
 py-cpuinfo==9.0.0
+huggingface-hub==0.23.2
+safetensors==0.4.3
```

ultralytics/models/yolov10/model.py

```diff
@@ -4,10 +4,23 @@ from .val import YOLOv10DetectionValidator
 from .predict import YOLOv10DetectionPredictor
 from .train import YOLOv10DetectionTrainer
 
-class YOLOv10(Model):
+from huggingface_hub import PyTorchModelHubMixin
 
-    def __init__(self, model="yolov10n.pt", task=None, verbose=False):
+
+class YOLOv10(Model, PyTorchModelHubMixin, library_name="ultralytics", repo_url="https://github.com/THU-MIG/yolov10", tags=["object-detection", "yolov10"]):
+
+    def __init__(self, model="yolov10n.pt", task=None, verbose=False,
+                 names=None):
         super().__init__(model=model, task=task, verbose=verbose)
+        if names is not None:
+            setattr(self.model, 'names', names)
+
+    def push_to_hub(self, repo_name, **kwargs):
+        config = kwargs.get('config', {})
+        config['names'] = self.names
+        config['model'] = self.model.yaml['yaml_file']
+        config['task'] = self.task
+        kwargs['config'] = config
+        super().push_to_hub(repo_name, **kwargs)
 
     @property
     def task_map(self):
```
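A hedged sketch of what this mixin wiring enables; `jameslahm/yolov10n` is the official repo id from the README above, while `your-name/yolov10-custom` is a placeholder:

```python
from ultralytics import YOLOv10

# Download config + weights from the Hugging Face Hub.
model = YOLOv10.from_pretrained('jameslahm/yolov10n')
# Re-upload; the override above injects names/model/task into the config.
model.push_to_hub('your-name/yolov10-custom', private=True)
```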

ultralytics/nn/modules/head.py

```diff
@@ -496,7 +496,7 @@ class RTDETRDecoder(nn.Module):
 
 class v10Detect(Detect):
 
-    max_det = -1
+    max_det = 300
 
     def __init__(self, nc=80, ch=()):
         super().__init__(nc, ch)
```
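Schematically, `max_det` caps how many candidates the NMS-free head keeps per image. A sketch of that top-k selection (illustrative tensor sizes, not the repo's decoding code):

```python
import torch

scores = torch.rand(1, 8400, 80)        # (batch, anchors, classes)
conf, cls = scores.max(dim=-1)          # best class score and index per anchor
keep = conf.topk(300, dim=-1).indices   # max_det = 300 survivors, no NMS needed
print(keep.shape)                       # torch.Size([1, 300])
```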

ultralytics/utils/metrics.py

```diff
@@ -64,6 +64,9 @@ def box_iou(box1, box2, eps=1e-7):
         (torch.Tensor): An NxM tensor containing the pairwise IoU values for every element in box1 and box2.
     """
 
+    # NOTE: need float32 to get accurate iou values
+    box1 = torch.as_tensor(box1, dtype=torch.float32)
+    box2 = torch.as_tensor(box2, dtype=torch.float32)
     # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
     (a1, a2), (b1, b2) = box1.unsqueeze(1).chunk(2, 2), box2.unsqueeze(0).chunk(2, 2)
     inter = (torch.min(a2, b2) - torch.max(a1, b1)).clamp_(0).prod(2)
```
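A minimal check of the pairwise (N, M) behavior with xyxy boxes; the first pair overlaps by 25 px² over a union of 175 px², so the expected IoU is about 0.1429:

```python
import torch
from ultralytics.utils.metrics import box_iou

b1 = torch.tensor([[0.0, 0.0, 10.0, 10.0]])      # one 10x10 box
b2 = torch.tensor([[5.0, 5.0, 15.0, 15.0],
                   [20.0, 20.0, 30.0, 30.0]])    # overlapping, disjoint
print(box_iou(b1, b2))  # tensor([[0.1429, 0.0000]]), shape (1, 2)
```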