Mirror of https://github.com/THU-MIG/yolov10.git (synced 2025-07-09 23:24:22 +08:00)

ultralytics 8.0.57
Comet, AMP, Classify, Docker updates (#1601)

Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

parent 28e48be5b6
commit ef03e6732a
@@ -2,9 +2,8 @@
 # Builds ultralytics/ultralytics:latest image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
 # Image is CUDA-optimized for YOLOv8 single/multi-GPU training and inference

-# Start FROM NVIDIA PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch
-# FROM docker.io/pytorch/pytorch:latest
-FROM pytorch/pytorch:latest
+# Start FROM PyTorch image https://hub.docker.com/r/pytorch/pytorch
+FROM pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime

 # Downloads to user config dir
 ADD https://ultralytics.com/assets/Arial.ttf https://ultralytics.com/assets/Arial.Unicode.ttf /root/.config/Ultralytics/
@@ -77,6 +77,7 @@ task.
 | `cos_lr` | `False` | use cosine learning rate scheduler |
 | `close_mosaic` | `10` | disable mosaic augmentation for final 10 epochs |
 | `resume` | `False` | resume training from last checkpoint |
+| `amp` | `True` | Automatic Mixed Precision (AMP) training, choices=[True, False] |
 | `lr0` | `0.01` | initial learning rate (i.e. SGD=1E-2, Adam=1E-3) |
 | `lrf` | `0.01` | final learning rate (lr0 * lrf) |
 | `momentum` | `0.937` | SGD momentum/Adam beta1 |
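The `amp` argument added above toggles Automatic Mixed Precision for training. A minimal usage sketch via the Python API (the model and dataset names are the stock examples, not part of this diff):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# amp defaults to True (mixed-precision training after an AMP sanity check);
# pass amp=False to skip the check and train in full precision
model.train(data='coco128.yaml', epochs=3, imgsz=640, amp=False)
```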
@@ -14,48 +14,43 @@ YOLOv8 'yolo' CLI commands use the following syntax:

 Where:

-- `TASK` (optional) is one of `[detect, segment, classify]`. If it is not passed explicitly YOLOv8 will try to guess
+- `TASK` (optional) is one of `[detect, segment, classify, pose]`. If it is not passed explicitly YOLOv8 will try to
+  guess
   the `TASK` from the model type.
-- `MODE` (required) is one of `[train, val, predict, export]`
+- `MODE` (required) is one of `[train, val, predict, export, track, benchmark]`
 - `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults.
   For a full list of available `ARGS` see the [Configuration](cfg.md) page and `defaults.yaml`
   GitHub [source](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/cfg/default.yaml).

 #### Tasks

-YOLO models can be used for a variety of tasks, including detection, segmentation, and classification. These tasks
+YOLO models can be used for a variety of tasks, including detection, segmentation, classification and pose. These tasks
 differ in the type of output they produce and the specific problem they are designed to solve.

-- **Detect**: Detection tasks involve identifying and localizing objects or regions of interest in an image or video.
-  YOLO models can be used for object detection tasks by predicting the bounding boxes and class labels of objects in an
-  image.
-- **Segment**: Segmentation tasks involve dividing an image or video into regions or pixels that correspond to
-  different objects or classes. YOLO models can be used for image segmentation tasks by predicting a mask or label for
-  each pixel in an image.
-- **Classify**: Classification tasks involve assigning a class label to an input, such as an image or text. YOLO
-  models can be used for image classification tasks by predicting the class label of an input image.
+**Detect**: For identifying and localizing objects or regions of interest in an image or video.
+**Segment**: For dividing an image or video into regions or pixels that correspond to different objects or classes.
+**Classify**: For predicting the class label of an input image.
+**Pose**: For identifying objects and estimating their keypoints in an image or video.
+
+| Key | Value | Description |
+|--------|------------|-------------------------------------------------|
+| `task` | `'detect'` | YOLO task, i.e. detect, segment, classify, pose |

 #### Modes

 YOLO models can be used in different modes depending on the specific problem you are trying to solve. These modes
-include train, val, and predict.
+include:

-- **Train**: The train mode is used to train the model on a dataset. This mode is typically used during the development
-  and
-  testing phase of a model.
-- **Val**: The val mode is used to evaluate the model's performance on a validation dataset. This mode is typically used
-  to
-  tune the model's hyperparameters and detect overfitting.
-- **Predict**: The predict mode is used to make predictions with the model on new data. This mode is typically used in
-  production or when deploying the model to users.
+**Train**: For training a YOLOv8 model on a custom dataset.
+**Val**: For validating a YOLOv8 model after it has been trained.
+**Predict**: For making predictions using a trained YOLOv8 model on new images or videos.
+**Export**: For exporting a YOLOv8 model to a format that can be used for deployment.
+**Track**: For tracking objects in real-time using a YOLOv8 model.
+**Benchmark**: For benchmarking YOLOv8 exports (ONNX, TensorRT, etc.) speed and accuracy.

 | Key | Value | Description |
-|----------|------------|-----------------------------------------------------------------------------------------------|
-| `task` | `'detect'` | inference task, i.e. detect, segment, or classify |
-| `mode` | `'train'` | YOLO mode, i.e. train, val, predict, or export |
-| `resume` | `False` | resume training from last checkpoint or custom checkpoint if passed as resume=path/to/best.pt |
-| `model` | `None` | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
-| `data` | `None` | path to data file, i.e. coco128.yaml |
+|--------|-----------|---------------------------------------------------------------|
+| `mode` | `'train'` | YOLO mode, i.e. train, val, predict, export, track, benchmark |

 ### Training

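As a rough Python-API counterpart to the `TASK`/`MODE` split documented above, here is a sketch using the stock `yolov8n.pt` and `coco128.yaml` examples (the `yolo` CLI maps `TASK MODE ARGS` onto the same arguments; track and benchmark are listed in the table above):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # TASK ('detect') is guessed from the weights

model.train(data='coco128.yaml', epochs=3, imgsz=320)              # MODE: train
metrics = model.val()                                              # MODE: val
results = model.predict('https://ultralytics.com/images/bus.jpg')  # MODE: predict
model.export(format='onnx')                                        # MODE: export
```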
@@ -93,6 +88,7 @@ task.
 | `cos_lr` | `False` | use cosine learning rate scheduler |
 | `close_mosaic` | `10` | disable mosaic augmentation for final 10 epochs |
 | `resume` | `False` | resume training from last checkpoint |
+| `amp` | `True` | Automatic Mixed Precision (AMP) training, choices=[True, False] |
 | `lr0` | `0.01` | initial learning rate (i.e. SGD=1E-2, Adam=1E-3) |
 | `lrf` | `0.01` | final learning rate (lr0 * lrf) |
 | `momentum` | `0.937` | SGD momentum/Adam beta1 |
@@ -151,7 +151,7 @@
 "# Download COCO val\n",
 "import torch\n",
 "torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017val.zip', 'tmp.zip') # download (780M - 5000 images)\n",
-"!unzip -q tmp.zip -d ../datasets && rm tmp.zip # unzip"
+"!unzip -q tmp.zip -d datasets && rm tmp.zip # unzip"
 ],
 "execution_count": null,
 "outputs": []
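For environments without the `unzip` binary, the notebook cell above can be mirrored in pure Python (a sketch; same download URL and the new `datasets` target directory):

```python
import zipfile
from pathlib import Path

import torch

# Download COCO 2017 val images (~780 MB, 5000 images) and extract into ./datasets
torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017val.zip', 'tmp.zip')
with zipfile.ZipFile('tmp.zip') as zf:
    zf.extractall('datasets')
Path('tmp.zip').unlink()  # remove the archive once extracted
```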
@@ -1,6 +1,6 @@
 # Ultralytics YOLO 🚀, GPL-3.0 license

-__version__ = '8.0.56'
+__version__ = '8.0.57'

 from ultralytics.yolo.engine.model import YOLO
 from ultralytics.yolo.utils.checks import check_yolo as checks
@@ -1,8 +1,8 @@
 # Ultralytics YOLO 🚀, GPL-3.0 license
 # Default training settings and hyperparameters for medium-augmentation COCO training

-task: detect # inference task, i.e. detect, segment, classify
-mode: train # YOLO mode, i.e. train, val, predict, export
+task: detect # YOLO task, i.e. detect, segment, classify, pose
+mode: train # YOLO mode, i.e. train, val, predict, export, track, benchmark

 # Train settings -------------------------------------------------------------------------------------------------------
 model: # path to model file, i.e. yolov8n.pt, yolov8n.yaml
@@ -30,6 +30,7 @@ rect: False # support rectangular training if mode='train', support rectangular
 cos_lr: False # use cosine learning rate scheduler
 close_mosaic: 10 # disable mosaic augmentation for final 10 epochs
 resume: False # resume training from last checkpoint
+amp: True # Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
 # Segmentation
 overlap_mask: True # masks should overlap during training (segment train only)
 mask_ratio: 4 # mask downsample ratio (segment train only)
@@ -207,12 +207,20 @@ def check_det_dataset(dataset, autodownload=True):
         data = yaml_load(data, append_filename=True)  # dictionary

     # Checks
-    for k in 'train', 'val', 'names':
+    for k in 'train', 'val':
         if k not in data:
             raise SyntaxError(
-                emojis(f"{dataset} '{k}:' key missing ❌.\n'train', 'val' and 'names' are required in all data YAMLs."))
+                emojis(f"{dataset} '{k}:' key missing ❌.\n'train' and 'val' are required in all data YAMLs."))
+    if 'names' not in data and 'nc' not in data:
+        raise SyntaxError(emojis(f"{dataset} key missing ❌.\n either 'names' or 'nc' are required in all data YAMLs."))
+    if 'names' in data and 'nc' in data and len(data['names']) != data['nc']:
+        raise SyntaxError(emojis(f"{dataset} 'names' length {len(data['names'])} and 'nc: {data['nc']}' must match."))
+    if 'names' not in data:
+        data['names'] = [f'class_{i}' for i in range(data['nc'])]
+    else:
+        data['nc'] = len(data['names'])

     data['names'] = check_class_names(data['names'])
-    data['nc'] = len(data['names'])

     # Resolve paths
     path = Path(extract_dir or data.get('path') or Path(data.get('yaml_file', '')).parent)  # dataset root
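To make the new `names`/`nc` reconciliation above concrete, here is a standalone sketch of just that logic (the `reconcile_names_nc` helper is hypothetical, for illustration only; the real `check_det_dataset` also resolves paths and can auto-download):

```python
def reconcile_names_nc(data: dict) -> dict:
    """Mirror of the added checks: require names or nc, reject mismatches, derive the missing one."""
    if 'names' not in data and 'nc' not in data:
        raise SyntaxError("either 'names' or 'nc' are required in all data YAMLs.")
    if 'names' in data and 'nc' in data and len(data['names']) != data['nc']:
        raise SyntaxError(f"'names' length {len(data['names'])} and 'nc: {data['nc']}' must match.")
    if 'names' not in data:
        data['names'] = [f'class_{i}' for i in range(data['nc'])]  # auto-generate class names
    else:
        data['nc'] = len(data['names'])  # derive class count from names
    return data


print(reconcile_names_nc({'nc': 3}))                  # adds names: ['class_0', 'class_1', 'class_2']
print(reconcile_names_nc({'names': ['cat', 'dog']}))  # adds nc: 2
```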
@@ -142,6 +142,7 @@ class YOLO:
         self.task = task or guess_model_task(weights)
         self.ckpt_path = weights
         self.overrides['model'] = weights
+        self.overrides['task'] = self.task

     def _check_is_pytorch_model(self):
         """
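The added `self.overrides['task'] = self.task` carries the task inferred from the weights into later calls. A rough illustration (assuming segmentation weights, so the guessed task is 'segment'):

```python
from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')       # task is guessed from the weights
print(model.task)                    # 'segment'
print(model.overrides.get('task'))   # with this change, also 'segment', so train/val/predict inherit it
```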
@@ -203,8 +203,8 @@ class BaseTrainer:
         self.model = self.model.to(self.device)
         self.set_model_attributes()
         # Check AMP
-        self.amp = torch.tensor(True).to(self.device)
-        if RANK in (-1, 0):  # Single-GPU and DDP
+        self.amp = torch.tensor(self.args.amp).to(self.device)  # True or False
+        if self.amp and RANK in (-1, 0):  # Single-GPU and DDP
             callbacks_backup = callbacks.default_callbacks.copy()  # backup callbacks as check_amp() resets them
             self.amp = torch.tensor(check_amp(self.model), device=self.device)
             callbacks.default_callbacks = callbacks_backup  # restore callbacks
@@ -14,6 +14,7 @@ except (ImportError, AssertionError):
 def on_pretrain_routine_start(trainer):
     try:
         experiment = comet_ml.Experiment(project_name=trainer.args.project or 'YOLOv8')
+        experiment.set_name(trainer.args.name)
         experiment.log_parameters(vars(trainer.args))
     except Exception as e:
         LOGGER.warning(f'WARNING ⚠️ Comet installed but not initialized correctly, not logging this run. {e}')
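With `experiment.set_name(trainer.args.name)` in place, a run's `name` argument now becomes the Comet experiment name. A minimal sketch (assumes `comet_ml` is installed and an API key is configured; the project/name values are placeholders):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
# 'project' maps to the Comet project (defaulting to 'YOLOv8'), 'name' to the experiment name
model.train(data='coco128.yaml', epochs=3, project='my-comet-project', name='baseline-run')
```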