mirror of https://github.com/THU-MIG/yolov10.git (synced 2025-05-23 05:24:22 +08:00)

Compare commits: 8b1a563061...3a9e54fb96 (4 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 3a9e54fb96 | |
| | 453c6e38a5 | |
| | 27842e0eee | |
| | ae33af4bf2 | |
```diff
@@ -10,7 +10,7 @@ Please check out our new release on [**YOLOE**](https://github.com/THU-MIG/yoloe
 Comparison of performance, training cost, and inference efficiency between YOLOE (Ours) and YOLO-Worldv2 in terms of open text prompts.
 </p>
 
-**YOLOE(ye)** is a highly **efficient**, **unified**, and **open** object detection and segmentation model for real-time seeing anything, like human eye, under different prompt mechanisms, like *texts*, *visual inputs*, and *prompt-free paradigm*.
+**YOLOE(ye)** is a highly **efficient**, **unified**, and **open** object detection and segmentation model for real-time seeing anything, like human eye, under different prompt mechanisms, like *texts*, *visual inputs*, and *prompt-free paradigm*, with **zero inference and transferring overhead** compared with closed-set YOLOs.
 
 <p align="center">
 <img src="https://github.com/THU-MIG/yoloe/blob/main/figures/visualization.svg" width=96%> <br>
```
```
@ -6,10 +6,9 @@ pycocotools==2.0.7
PyYAML==6.0.1
scipy==1.13.0
onnxslim==0.1.31
onnxruntime-gpu==1.18.0
gradio==4.31.5
opencv-python==4.9.0.80
psutil==5.9.8
py-cpuinfo==9.0.0
huggingface-hub==0.23.2
safetensors==0.4.3
safetensors==0.4.3
```
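The hunk above pins every dependency to an exact version with `==`. As a minimal illustration (the `parse_pins` helper is hypothetical, not part of this repository), such pins can be read into a name-to-version map, e.g. to compare against the environment before upgrading:

```python
import re

def parse_pins(requirements_text):
    """Parse `name==version` pins from requirements.txt-style text.

    Blank lines and comments are skipped; for duplicate names
    (as in the hunk above), the last occurrence wins.
    """
    pins = {}
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        match = re.fullmatch(r"([A-Za-z0-9._-]+)==([A-Za-z0-9._-]+)", line)
        if match:
            pins[match.group(1)] = match.group(2)
    return pins

pins = parse_pins("""\
PyYAML==6.0.1
scipy==1.13.0
safetensors==0.4.3
safetensors==0.4.3
""")
print(pins["scipy"])  # → 1.13.0
```

Note that the dict naturally deduplicates the repeated `safetensors` pin, which mirrors what the `-10,9` hunk does to the file.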