mirror of https://github.com/THU-MIG/yolov10.git
synced 2025-05-23 13:34:23 +08:00

docs: update README.md

parent bfef71d80b → commit bb4327529d

Changed: README.md (208 lines)
# Fine-tuning YOLOv10

This repository is a fork of the official repository, used to fine-tune YOLOv10 on our own datasets.
<p align="center">
  <img src="figures/latency.svg" width=48%>
</p>

[YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458).\
Ao Wang, Hui Chen, Lihao Liu, Kai Chen, Zijia Lin, Jungong Han, and Guiguang Ding

## Performance

COCO

| Model | Test Size | #Params | FLOPs | AP<sup>val</sup> | Latency |
|:------|:---------:|:-------:|:-----:|:----------------:|:-------:|
| [YOLOv10-X](https://huggingface.co/jameslahm/yolov10x) | 640 | 29.5M | 160.4G | 54.4% | 10.70ms |

## Environment

- pyenv
- Python 3.9.13 (to match the official version)
- CUDA 11.8

## Setup

### 1. Clone the repository

```bash
git clone git@github.com:TechC-SugarCane/train-YOLOv10.git
cd train-YOLOv10
```

### 2. Set up Python

```bash
# installs the Python version pinned for this repository (3.9.13)
pyenv install
```

### 3. Create a virtual environment

```bash
python -m venv .venv
```

### 4. Activate the virtual environment

```bash
# mac
source .venv/bin/activate

# windows
.venv\Scripts\activate
```

※ To exit the virtual environment, run the `deactivate` command.

### 5. Install the dependencies

```bash
# for CPU inference
pip install -r requirements-cpu.txt

# for GPU inference
pip install -r requirements-gpu.txt

# common
pip install -e .
```

### 6. Change the default settings

```bash
# change the datasets directory to the current directory
# (by default it is set to ../datasets)
yolo settings datasets_dir=.
```
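
To confirm the new value took effect, the settings can also be inspected from Python. A minimal sketch, assuming the `settings` object exported by the bundled ultralytics package:

```python
from ultralytics import settings

# after `yolo settings datasets_dir=.`, this should print the current directory
print(settings["datasets_dir"])
```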

## Training

We use `yolov10x.pt` as the pretrained model, so download it from the [official GitHub release](https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10x.pt) and place it in the `weights` directory.

Also, place the datasets used for training under the `datasets` directory, following [`datasets/README.md`](./datasets/README.md).

Training results are saved under `runs/detect/<name(number)>`.

If a training run produces a good score, create a README.md in `runs/detect/<name(number)>/`, using [`runs/detect/README.md`](./runs/detect/README.md) as a reference.

```bash
# sugarcane
yolo detect train cfg='cfg/sugarcane.yaml' data=datasets/sugarcane/data.yaml model=weights/yolov10x.pt name='yolov10x-sugarcane' epochs=300 batch=16 imgsz=640 device=0

# pineapple
yolo detect train cfg='cfg/pineapple.yaml' data=datasets/pineapple/data.yaml model=weights/yolov10x.pt name='yolov10x-pineapple' epochs=300 batch=16 imgsz=640 device=0
```

※ Running the above also downloads `yolov8n.pt`. It is apparently only there for the AMP (automatic mixed precision) check, so you can safely ignore it. See [#106](https://github.com/THU-MIG/yolov10/issues/106) for details.

Feel free to adjust the hyperparameters. The config files are in `cfg/`; the `Hyperparameters` section of each file is where the hyperparameter-related settings go. A Python sketch of the same training run follows the list below.

- Sugarcane: `sugarcane.yaml`
- Pineapple: `pineapple.yaml`
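
For reference, the sugarcane run above can also be launched through the Python API instead of the `yolo` CLI. This is a minimal sketch, assuming the `YOLOv10` class exported by this repository's bundled ultralytics package; the paths and argument values mirror the CLI command.

```python
from ultralytics import YOLOv10

# load the pretrained checkpoint placed in weights/ (see the Training section)
model = YOLOv10("weights/yolov10x.pt")

# equivalent of: yolo detect train cfg='cfg/sugarcane.yaml' data=... name=... epochs=300 batch=16 imgsz=640 device=0
model.train(
    cfg="cfg/sugarcane.yaml",            # hyperparameter config ("Hyperparameters" section)
    data="datasets/sugarcane/data.yaml",
    name="yolov10x-sugarcane",
    epochs=300,
    batch=16,
    imgsz=640,
    device=0,
)
```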

## Contributor guidelines

For the contributor guidelines, please see [CONTRIBUTING.md](https://github.com/TechC-SugarCane/.github/blob/main/CONTRIBUTING.md).

### ※ Note

This repository is a fork, so when you open a Pull Request, make sure it targets this repository.

By default, the base repository is set to the official repository, so be careful.

In the `Comparing changes` view, change the `base repository` dropdown to `TechC-SugarCane/train-YOLOv10`. Once the page transitions, you are all set.

## Push to hub to 🤗

(To be used later.)

Optionally, you can push your fine-tuned model to the [Hugging Face hub](https://huggingface.co/) as a public or private model:

```python
model.push_to_hub("<your-hf-username-or-organization>/yolov10-finetuned-crop-detection", private=True)
```
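
The snippet above assumes an already-loaded `model`. A fuller sketch, assuming the fine-tuned weights sit at the default training output path (the run name is illustrative) and that you are authenticated with Hugging Face (e.g. via `huggingface-cli login`):

```python
from ultralytics import YOLOv10

# load the fine-tuned checkpoint from the training step
# (illustrative path; adjust to your actual run directory)
model = YOLOv10("runs/detect/yolov10x-sugarcane/weights/best.pt")

# push to a private repo under your account or organization
model.push_to_hub("<your-hf-username-or-organization>/yolov10-finetuned-crop-detection", private=True)
```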

## Export

(To be used later.)

```
# End-to-End ONNX
yolo export model=jameslahm/yolov10{n/s/m/b/l/x} format=onnx opset=13 simplify

# Predict with TensorRT
yolo predict model=yolov10n/s/m/b/l/x.engine
```
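
The export can also be driven from Python once a fine-tuned checkpoint exists. A minimal sketch, reusing the illustrative path from the training step; the arguments mirror the CLI flags above:

```python
from ultralytics import YOLOv10

# load the fine-tuned weights (illustrative path)
model = YOLOv10("runs/detect/yolov10x-sugarcane/weights/best.pt")

# end-to-end ONNX export, mirroring: yolo export format=onnx opset=13 simplify
model.export(format="onnx", opset=13, simplify=True)
```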