mirror of https://github.com/THU-MIG/yolov10.git
synced 2025-05-23 21:44:22 +08:00

ultralytics 8.0.92 updates and fixes (#2361)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: introvin <vinod.4166@gmail.com>
Co-authored-by: marinmarcillat <58145636+marinmarcillat@users.noreply.github.com>
Co-authored-by: BIGBOSS-FOX <47949596+BIGBOSS-FOX@users.noreply.github.com>

This commit is contained in:
parent 3fd317edfd
commit 0ebd3f2959
@@ -22,7 +22,7 @@ repos:
       - id: detect-private-key

   - repo: https://github.com/asottile/pyupgrade
-    rev: v3.3.1
+    rev: v3.3.2
     hooks:
       - id: pyupgrade
         name: Upgrade code
@@ -34,7 +34,7 @@ repos:
         name: Sort imports

   - repo: https://github.com/google/yapf
-    rev: v0.32.0
+    rev: v0.33.0
     hooks:
       - id: yapf
         name: YAPF formatting
@@ -2,10 +2,6 @@
 comments: true
 ---

----
-comments: true
----
-
 <img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

 **Train mode** is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the
docs/overrides/partials/source-file.html (new file, 26 additions)
@@ -0,0 +1,26 @@
+{% import "partials/language.html" as lang with context %}
+
+<!-- taken from
+https://github.com/squidfunk/mkdocs-material/blob/master/src/partials/source-file.html -->
+
+<br>
+<div class="md-source-file">
+  <small>
+
+    <!-- mkdocs-git-revision-date-localized-plugin -->
+    {% if page.meta.git_revision_date_localized %}
+      📅 {{ lang.t("source.file.date.updated") }}:
+      {{ page.meta.git_revision_date_localized }}
+      {% if page.meta.git_creation_date_localized %}
+        <br />
+        🎂 {{ lang.t("source.file.date.created") }}:
+        {{ page.meta.git_creation_date_localized }}
+      {% endif %}
+
+    <!-- mkdocs-git-revision-date-plugin -->
+    {% elif page.meta.revision_date %}
+      📅 {{ lang.t("source.file.date.updated") }}:
+      {{ page.meta.revision_date }}
+    {% endif %}
+  </small>
+</div>
@@ -96,7 +96,7 @@ CLI requires no customization or Python code. You can simply run all tasks from

 !!! warning "Warning"

-    Arguments must be passed as `arg=val` pairs, split by an equals `=` sign and delimited by spaces ` ` between pairs. Do not use `--` argument prefixes or commas `,` beteen arguments.
+    Arguments must be passed as `arg=val` pairs, split by an equals `=` sign and delimited by spaces ` ` between pairs. Do not use `--` argument prefixes or commas `,` between arguments.

     - `yolo predict model=yolov8n.pt imgsz=640 conf=0.25` ✅
     - `yolo predict model yolov8n.pt imgsz 640 conf 0.25` ❌
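For reference, the same prediction can also be expressed through the Python API instead of the CLI. This is a minimal sketch: it assumes the `ultralytics` package is installed, that `yolov8n.pt` resolves the same way as in the CLI example, and `'bus.jpg'` stands in for any local image path.

```python
from ultralytics import YOLO

# Rough Python equivalent of: yolo predict model=yolov8n.pt imgsz=640 conf=0.25
model = YOLO('yolov8n.pt')
results = model.predict(source='bus.jpg', imgsz=640, conf=0.25)
```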
@@ -32,3 +32,8 @@
 ---
 :::ultralytics.yolo.utils.plotting.output_to_target
 <br><br>
+
+# feature_visualization
+---
+:::ultralytics.yolo.utils.plotting.feature_visualization
+<br><br>
@@ -77,10 +77,16 @@ see the [Configuration](../usage/cfg.md) page.
 The YOLO classification dataset format is same as the torchvision format. Each class of images has its own folder and you have to simply pass the path of the dataset folder, i.e, `yolo classify train data="path/to/dataset"`
 ```
 dataset/
-├── class1/
-├── class2/
-├── class3/
-├── ...
+├── train/
+├──── class1/
+├──── class2/
+├──── class3/
+├──── ...
+├── val/
+├──── class1/
+├──── class2/
+├──── class3/
+├──── ...
 ```
 ## Val
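Since the updated layout matches torchvision's ImageFolder convention for each split, one quick way to sanity-check a dataset arranged this way is a short sketch like the following; it assumes torchvision is installed and that `dataset/train` and `dataset/val` follow the tree above with at least one image per class folder.

```python
from torchvision import datasets

# Each split is a standard ImageFolder: one sub-directory per class.
train_set = datasets.ImageFolder('dataset/train')
val_set = datasets.ImageFolder('dataset/val')
print(train_set.classes)  # e.g. ['class1', 'class2', 'class3']
```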
mkdocs.yml (17 changes)
@@ -12,6 +12,8 @@ theme:
   custom_dir: docs/overrides
   logo: https://github.com/ultralytics/assets/raw/main/logo/Ultralytics_Logotype_Reverse.svg
   favicon: assets/favicon.ico
+  icon:
+    repo: fontawesome/brands/github
   font:
     text: Roboto
     code: Roboto Mono
@@ -55,6 +57,7 @@ copyright: <a href="https://ultralytics.com" target="_blank">Ultralytics 2023.</
 extra:
   # version:
   # provider: mike # version drop-down menu
+  robots: robots.txt
   analytics:
     provider: google
     property: G-2M5EHKC0BH
@@ -91,9 +94,6 @@ extra:
 extra_css:
   - stylesheets/style.css

-extra_files:
-  - robots.txt
-
 markdown_extensions:
   # Div text decorators
   - admonition
@@ -289,6 +289,9 @@ nav:
 plugins:
   - mkdocstrings
   - search
+  - git-revision-date-localized:
+      type: timeago
+      enable_creation_date: true
   - redirects:
       redirect_maps:
         callbacks.md: usage/callbacks.md
@@ -338,6 +341,7 @@ plugins:
         yolov5/hyp_evolution.md: yolov5/tutorials/hyperparameter_evolution.md
         yolov5/pruning_sparsity.md: yolov5/tutorials/model_pruning_and_sparsity.md
         yolov5/comet.md: yolov5/tutorials/comet_logging_integration.md
+        yolov5/clearml.md: yolov5/tutorials/clearml_logging_integration.md
         yolov5/tta.md: yolov5/tutorials/test_time_augmentation.md
         yolov5/multi_gpu_training.md: yolov5/tutorials/multi_gpu_training.md
         yolov5/ensemble.md: yolov5/tutorials/model_ensembling.md
@@ -351,3 +355,10 @@ plugins:
         yolov5/tutorials/yolov5_neural_magic_tutorial.md: yolov5/tutorials/neural_magic_pruning_quantization.md
         yolov5/tutorials/model_ensembling_tutorial.md: yolov5/tutorials/model_ensembling.md
         yolov5/tutorials/pytorch_hub_tutorial.md: yolov5/tutorials/pytorch_hub_model_loading.md
+        yolov5/tutorials/yolov5_architecture_tutorial.md: yolov5/tutorials/architecture_description.md
+        yolov5/tutorials/multi_gpu_training_tutorial.md: yolov5/tutorials/multi_gpu_training.md
+        yolov5/tutorials/yolov5_pytorch_hub_tutorial.md: yolov5/tutorials/pytorch_hub_model_loading.md
+        yolov5/tutorials/model_export_tutorial.md: yolov5/tutorials/model_export.md
+        yolov5/tutorials/jetson_nano_tutorial.md: yolov5/tutorials/running_on_jetson_nano.md
+        yolov5/tutorials/yolov5_model_ensembling_tutorial.md: yolov5/tutorials/model_ensembling.md
+        reference/base_val.md: index.md
setup.py (11 changes)
@@ -39,8 +39,15 @@ setup(
     install_requires=REQUIREMENTS + PKG_REQUIREMENTS,
     extras_require={
         'dev': [
-            'check-manifest', 'pytest', 'pytest-cov', 'coverage', 'mkdocs-material', 'mkdocstrings[python]',
-            'mkdocs-redirects'],
+            'check-manifest',
+            'pytest',
+            'pytest-cov',
+            'coverage',
+            'mkdocs-material',
+            'mkdocstrings[python]',
+            'mkdocs-redirects',  # for 301 redirects
+            'mkdocs-git-revision-date-localized-plugin',  # for created/updated dates
+        ],
         'export': ['coremltools>=6.0', 'openvino-dev>=2022.3', 'tensorflowjs'],  # automatically installs tensorflow
     },
     classifiers=[
@@ -1,6 +1,6 @@
 # Ultralytics YOLO 🚀, AGPL-3.0 license

-__version__ = '8.0.91'
+__version__ = '8.0.92'

 from ultralytics.hub import start
 from ultralytics.vit.sam import SAM
@@ -47,7 +47,7 @@ def on_predict_postprocess_end(predictor):
         tracks = predictor.trackers[i].update(det, im0s[i])
         if len(tracks) == 0:
             continue
-        idx = tracks[:, -1].tolist()
+        idx = tracks[:, -1].astype(int)
         predictor.results[i] = predictor.results[i][idx]
         predictor.results[i].update(boxes=torch.as_tensor(tracks[:, :-1]))

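The switch from `.tolist()` to `.astype(int)` matters because the track array is float-typed and its last column is used to index the detection results, which needs integer indices. A small illustrative sketch with an assumed track-row layout (not the real tracker output):

```python
import numpy as np

# Assumed layout: each row ends with the index of the matched detection, stored as a float
# only because the whole array is float-typed.
tracks = np.array([[10., 20., 50., 60., 1., 0.9, 2.],
                   [15., 25., 55., 65., 2., 0.8, 0.]])

idx = tracks[:, -1].astype(int)      # -> array([2, 0]); usable for fancy indexing
detections = np.array(['det0', 'det1', 'det2'])
print(detections[idx])               # ['det2' 'det0']
```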
@@ -82,8 +82,8 @@ class SamAutomaticMaskGenerator:
           memory.
         """

-        assert (points_per_side is None) != (point_grids is
-                                             None), 'Exactly one of points_per_side or point_grid must be provided.'
+        assert (points_per_side is None) != (point_grids is None), \
+            'Exactly one of points_per_side or point_grid must be provided.'
         if points_per_side is not None:
             self.point_grids = build_all_layer_point_grids(
                 points_per_side,
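The reformatted assertion uses the `(a is None) != (b is None)` idiom, which is true exactly when one of the two arguments is supplied. A minimal sketch of the pattern outside the SAM class (the function name is hypothetical):

```python
def pick_one(points_per_side=None, point_grids=None):
    # True only when exactly one argument is given (an exclusive-or on "is None").
    assert (points_per_side is None) != (point_grids is None), \
        'Exactly one of points_per_side or point_grid must be provided.'
    return points_per_side if points_per_side is not None else point_grids

print(pick_one(points_per_side=32))   # 32
# pick_one() or pick_one(32, [[0.5, 0.5]]) would raise AssertionError
```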
@@ -115,10 +115,8 @@ class BasePredictor:
             im (torch.Tensor | List(np.ndarray)): (N, 3, h, w) for tensor, [(h, w, 3) x N] for list.
         """
         if not isinstance(im, torch.Tensor):
-            auto = all(x.shape == im[0].shape for x in im) and self.model.pt
-            if not auto:
-                LOGGER.warning(
-                    'WARNING ⚠️ Source shapes differ. For optimal performance supply similarly-shaped sources.')
+            same_shapes = all(x.shape == im[0].shape for x in im)
+            auto = same_shapes and self.model.pt
             im = np.stack([LetterBox(self.imgsz, auto=auto, stride=self.model.stride)(image=x) for x in im])
             im = im[..., ::-1].transpose((0, 3, 1, 2))  # BGR to RGB, BHWC to BCHW, (n, 3, h, w)
             im = np.ascontiguousarray(im)  # contiguous
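This change drops the warning and simply disables `auto` letterboxing when source shapes differ: a batch can only be stacked once every preprocessed image has the same size. A rough illustration of that constraint (the padding function below is a naive stand-in, not the real LetterBox class):

```python
import numpy as np

def pad_to_square(img, size=640):
    # Stand-in for letterboxing with auto=False: every image lands on the same fixed canvas.
    canvas = np.zeros((size, size, 3), dtype=img.dtype)
    canvas[:img.shape[0], :img.shape[1]] = img
    return canvas

imgs = [np.zeros((480, 640, 3), np.uint8), np.zeros((360, 480, 3), np.uint8)]  # differing shapes
batch = np.stack([pad_to_square(x) for x in imgs])  # stacking works only because shapes now match
print(batch.shape)  # (2, 640, 640, 3)
```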
@@ -259,13 +259,14 @@ def yaml_save(file='data.yaml', data=None):
     # Create parent directories if they don't exist
     file.parent.mkdir(parents=True, exist_ok=True)

+    # Convert Path objects to strings
+    for k, v in data.items():
+        if isinstance(v, Path):
+            dict[k] = str(v)
+
+    # Dump data to file in YAML format
     with open(file, 'w') as f:
-        # Dump data to file in YAML format, converting Path objects to strings
-        yaml.safe_dump({k: str(v) if isinstance(v, Path) else v
-                        for k, v in data.items()},
-                       f,
-                       sort_keys=False,
-                       allow_unicode=True)
+        yaml.safe_dump(data, f, sort_keys=False, allow_unicode=True)

 def yaml_load(file='data.yaml', append_filename=False):
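The motivation for the conversion step is that `yaml.safe_dump` has no representer for `pathlib.Path` objects and raises on them, so Path values must become plain strings before dumping. A minimal standalone sketch (the dictionary contents are illustrative):

```python
import yaml
from pathlib import Path

data = {'path': Path('datasets/coco128'), 'epochs': 100}

# yaml.safe_dump(data) would raise RepresenterError on the Path value,
# so convert Path values to str first.
safe = {k: str(v) if isinstance(v, Path) else v for k, v in data.items()}
print(yaml.safe_dump(safe, sort_keys=False, allow_unicode=True))
```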
@@ -759,7 +760,7 @@ ENVIRONMENT = 'Colab' if is_colab() else 'Kaggle' if is_kaggle() else 'Jupyter'
 TESTS_RUNNING = is_pytest_running() or is_github_actions_ci()
 set_sentry()

-# OpenCV Multilanguage-friendly functions ------------------------------------------------------------------------------------
+# OpenCV Multilanguage-friendly functions ------------------------------------------------------------------------------
 imshow_ = cv2.imshow  # copy to avoid recursion errors

@@ -481,9 +481,6 @@ def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detec
         stage (int): Module stage within the model.
         n (int, optional): Maximum number of feature maps to plot. Defaults to 32.
         save_dir (Path, optional): Directory to save results. Defaults to Path('runs/detect/exp').
-
-    Returns:
-        None: This function does not return any value; it saves the visualization to the specified directory.
     """
     for m in ['Detect', 'Pose', 'Segment']:
         if m in module_type:
@@ -212,7 +212,6 @@ class Loss:
             pred_scores.detach().sigmoid(), (pred_bboxes.detach() * stride_tensor).type(gt_bboxes.dtype),
             anchor_points * stride_tensor, gt_labels, gt_bboxes, mask_gt)

-        target_bboxes /= stride_tensor
         target_scores_sum = max(target_scores.sum(), 1)

         # cls loss
@@ -221,6 +220,7 @@ class Loss:

         # bbox loss
         if fg_mask.sum():
+            target_bboxes /= stride_tensor
             loss[0], loss[2] = self.bbox_loss(pred_distri, pred_bboxes, anchor_points, target_bboxes, target_scores,
                                               target_scores_sum, fg_mask)

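Moving that line means the in-place rescaling of `target_bboxes` by the stride now only happens when at least one anchor is assigned to a ground-truth box. A small sketch of the guarded pattern with illustrative tensor shapes (not the real loss code):

```python
import torch

stride_tensor = torch.tensor([[8.], [16.], [32.]])
target_bboxes = torch.rand(3, 4) * 640          # pixel-space targets
fg_mask = torch.tensor([False, False, False])   # no foreground anchors in this toy batch

# bbox loss branch: the normalisation is skipped entirely for background-only batches
if fg_mask.sum():
    target_bboxes /= stride_tensor
```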