ultralytics 8.0.57
Comet, AMP, Classify, Docker updates (#1601)
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
parent 28e48be5b6
commit ef03e6732a

@@ -2,9 +2,8 @@
# Builds ultralytics/ultralytics:latest image on DockerHub https://hub.docker.com/r/ultralytics/ultralytics
# Image is CUDA-optimized for YOLOv8 single/multi-GPU training and inference

# Start FROM NVIDIA PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch
# FROM docker.io/pytorch/pytorch:latest
FROM pytorch/pytorch:latest
# Start FROM PyTorch image https://hub.docker.com/r/pytorch/pytorch
FROM pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime

# Downloads to user config dir
ADD https://ultralytics.com/assets/Arial.ttf https://ultralytics.com/assets/Arial.Unicode.ttf /root/.config/Ultralytics/

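As a hedged usage note (the image name comes from the comment in this hunk; the GPU flags are standard Docker options, not something this diff specifies), the published image can be pulled and run like:

```bash
# Pull the prebuilt image from DockerHub and start an interactive GPU container
docker pull ultralytics/ultralytics:latest
docker run -it --ipc=host --gpus all ultralytics/ultralytics:latest
```
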
@@ -77,6 +77,7 @@ task.
| `cos_lr` | `False` | use cosine learning rate scheduler |
| `close_mosaic` | `10` | disable mosaic augmentation for final 10 epochs |
| `resume` | `False` | resume training from last checkpoint |
| `amp` | `True` | Automatic Mixed Precision (AMP) training, choices=[True, False] |
| `lr0` | `0.01` | initial learning rate (i.e. SGD=1E-2, Adam=1E-3) |
| `lrf` | `0.01` | final learning rate (lr0 * lrf) |
| `momentum` | `0.937` | SGD momentum/Adam beta1 |

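The new `amp` row is the notable addition to this table. A minimal sketch of toggling it from the Python API (the `yolov8n.pt` weights and bundled `coco128.yaml` dataset are the usual doc examples, not part of this diff):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # pretrained detection model

# amp=True (the default) runs the AMP check and trains with mixed precision;
# amp=False skips the check and trains in full precision
model.train(data='coco128.yaml', epochs=3, imgsz=640, amp=False)
```
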
@@ -14,48 +14,43 @@ YOLOv8 'yolo' CLI commands use the following syntax:

Where:

- `TASK` (optional) is one of `[detect, segment, classify]`. If it is not passed explicitly YOLOv8 will try to guess
- `TASK` (optional) is one of `[detect, segment, classify, pose]`. If it is not passed explicitly YOLOv8 will try to
guess
the `TASK` from the model type.
- `MODE` (required) is one of `[train, val, predict, export]`
- `MODE` (required) is one of `[train, val, predict, export, track, benchmark]`
- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults.
For a full list of available `ARGS` see the [Configuration](cfg.md) page and `defaults.yaml`
GitHub [source](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/cfg/default.yaml).

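A representative invocation under that syntax (model and dataset names are the standard doc examples, not mandated by this change):

```bash
# TASK=detect, MODE=train, ARGS override entries from default.yaml
yolo detect train model=yolov8n.pt data=coco128.yaml epochs=10 imgsz=320

# TASK may be omitted and is then guessed from the model file
yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
```
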
#### Tasks

YOLO models can be used for a variety of tasks, including detection, segmentation, and classification. These tasks
YOLO models can be used for a variety of tasks, including detection, segmentation, classification and pose. These tasks
differ in the type of output they produce and the specific problem they are designed to solve.

- **Detect**: Detection tasks involve identifying and localizing objects or regions of interest in an image or video.
YOLO models can be used for object detection tasks by predicting the bounding boxes and class labels of objects in an
image.
- **Segment**: Segmentation tasks involve dividing an image or video into regions or pixels that correspond to
different objects or classes. YOLO models can be used for image segmentation tasks by predicting a mask or label for
each pixel in an image.
- **Classify**: Classification tasks involve assigning a class label to an input, such as an image or text. YOLO
models can be used for image classification tasks by predicting the class label of an input image.
**Detect**: For identifying and localizing objects or regions of interest in an image or video.
**Segment**: For dividing an image or video into regions or pixels that correspond to different objects or classes.
**Classify**: For predicting the class label of an input image.
**Pose**: For identifying objects and estimating their keypoints in an image or video.

| Key | Value | Description |
|--------|------------|-------------------------------------------------|
| `task` | `'detect'` | YOLO task, i.e. detect, segment, classify, pose |

#### Modes

YOLO models can be used in different modes depending on the specific problem you are trying to solve. These modes
include train, val, and predict.
include:

- **Train**: The train mode is used to train the model on a dataset. This mode is typically used during the development
and
testing phase of a model.
- **Val**: The val mode is used to evaluate the model's performance on a validation dataset. This mode is typically used
to
tune the model's hyperparameters and detect overfitting.
- **Predict**: The predict mode is used to make predictions with the model on new data. This mode is typically used in
production or when deploying the model to users.
**Train**: For training a YOLOv8 model on a custom dataset.
**Val**: For validating a YOLOv8 model after it has been trained.
**Predict**: For making predictions using a trained YOLOv8 model on new images or videos.
**Export**: For exporting a YOLOv8 model to a format that can be used for deployment.
**Track**: For tracking objects in real-time using a YOLOv8 model.
**Benchmark**: For benchmarking YOLOv8 exports (ONNX, TensorRT, etc.) speed and accuracy.

| Key | Value | Description |
|----------|------------|-----------------------------------------------------------------------------------------------|
| `task` | `'detect'` | inference task, i.e. detect, segment, or classify |
| `mode` | `'train'` | YOLO mode, i.e. train, val, predict, or export |
| `resume` | `False` | resume training from last checkpoint or custom checkpoint if passed as resume=path/to/best.pt |
| `model` | `None` | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
| `data` | `None` | path to data file, i.e. coco128.yaml |
| Key | Value | Description |
|--------|-----------|---------------------------------------------------------------|
| `mode` | `'train'` | YOLO mode, i.e. train, val, predict, export, track, benchmark |

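A hedged sketch of how the core modes map onto the Python API (model and dataset names are the standard doc examples; `track` and `benchmark` are the additional modes listed in the table above):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')                  # load a pretrained model
model.train(data='coco128.yaml', epochs=3)  # Train on a dataset
metrics = model.val()                       # Val: evaluate on the val split
results = model.predict('https://ultralytics.com/images/bus.jpg')  # Predict on new data
model.export(format='onnx')                 # Export for deployment
```
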
### Training

@@ -93,6 +88,7 @@ task.
| `cos_lr` | `False` | use cosine learning rate scheduler |
| `close_mosaic` | `10` | disable mosaic augmentation for final 10 epochs |
| `resume` | `False` | resume training from last checkpoint |
| `amp` | `True` | Automatic Mixed Precision (AMP) training, choices=[True, False] |
| `lr0` | `0.01` | initial learning rate (i.e. SGD=1E-2, Adam=1E-3) |
| `lrf` | `0.01` | final learning rate (lr0 * lrf) |
| `momentum` | `0.937` | SGD momentum/Adam beta1 |

@@ -151,7 +151,7 @@
"# Download COCO val\n",
"import torch\n",
"torch.hub.download_url_to_file('https://ultralytics.com/assets/coco2017val.zip', 'tmp.zip') # download (780M - 5000 images)\n",
"!unzip -q tmp.zip -d ../datasets && rm tmp.zip # unzip"
"!unzip -q tmp.zip -d datasets && rm tmp.zip # unzip"
],
"execution_count": null,
"outputs": []

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, GPL-3.0 license

__version__ = '8.0.56'
__version__ = '8.0.57'

from ultralytics.yolo.engine.model import YOLO
from ultralytics.yolo.utils.checks import check_yolo as checks

@@ -1,8 +1,8 @@
# Ultralytics YOLO 🚀, GPL-3.0 license
# Default training settings and hyperparameters for medium-augmentation COCO training

task: detect # inference task, i.e. detect, segment, classify
mode: train # YOLO mode, i.e. train, val, predict, export
task: detect # YOLO task, i.e. detect, segment, classify, pose
mode: train # YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------------------------------------------------
model: # path to model file, i.e. yolov8n.pt, yolov8n.yaml

@@ -30,6 +30,7 @@ rect: False # support rectangular training if mode='train', support rectangular
cos_lr: False # use cosine learning rate scheduler
close_mosaic: 10 # disable mosaic augmentation for final 10 epochs
resume: False # resume training from last checkpoint
amp: True # Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
# Segmentation
overlap_mask: True # masks should overlap during training (segment train only)
mask_ratio: 4 # mask downsample ratio (segment train only)

@@ -207,12 +207,20 @@ def check_det_dataset(dataset, autodownload=True):
        data = yaml_load(data, append_filename=True) # dictionary

    # Checks
    for k in 'train', 'val', 'names':
    for k in 'train', 'val':
        if k not in data:
            raise SyntaxError(
                emojis(f"{dataset} '{k}:' key missing ❌.\n'train', 'val' and 'names' are required in all data YAMLs."))
                emojis(f"{dataset} '{k}:' key missing ❌.\n'train' and 'val' are required in all data YAMLs."))
    if 'names' not in data and 'nc' not in data:
        raise SyntaxError(emojis(f"{dataset} key missing ❌.\n either 'names' or 'nc' are required in all data YAMLs."))
    if 'names' in data and 'nc' in data and len(data['names']) != data['nc']:
        raise SyntaxError(emojis(f"{dataset} 'names' length {len(data['names'])} and 'nc: {data['nc']}' must match."))
    if 'names' not in data:
        data['names'] = [f'class_{i}' for i in range(data['nc'])]
    else:
        data['nc'] = len(data['names'])

    data['names'] = check_class_names(data['names'])
    data['nc'] = len(data['names'])

    # Resolve paths
    path = Path(extract_dir or data.get('path') or Path(data.get('yaml_file', '')).parent) # dataset root

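Net effect of this change: a detection dataset YAML now needs `train` and `val` plus either `names` or `nc`, and the missing field is derived. A minimal sketch of the new fallback behaviour, using hypothetical paths and class count:

```python
# Dataset dict as it might be parsed from a YAML that defines 'nc' but no 'names'
data = {'train': 'images/train', 'val': 'images/val', 'nc': 3}

if 'names' not in data:
    data['names'] = [f'class_{i}' for i in range(data['nc'])]  # auto-generate placeholder names
else:
    data['nc'] = len(data['names'])  # keep the two fields consistent

print(data['names'])  # ['class_0', 'class_1', 'class_2']
```
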
@@ -142,6 +142,7 @@ class YOLO:
        self.task = task or guess_model_task(weights)
        self.ckpt_path = weights
        self.overrides['model'] = weights
        self.overrides['task'] = self.task

    def _check_is_pytorch_model(self):
        """

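The added line records the inferred task in the model's overrides, so later train/val/predict calls inherit it. A small illustration (the weights name is a standard published checkpoint, assumed available):

```python
from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')   # task is guessed from the weights file
print(model.task)                # 'segment'
print(model.overrides['task'])   # 'segment' -- now kept in sync by this change
```
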
@@ -203,8 +203,8 @@ class BaseTrainer:
        self.model = self.model.to(self.device)
        self.set_model_attributes()
        # Check AMP
        self.amp = torch.tensor(True).to(self.device)
        if RANK in (-1, 0): # Single-GPU and DDP
        self.amp = torch.tensor(self.args.amp).to(self.device) # True or False
        if self.amp and RANK in (-1, 0): # Single-GPU and DDP
            callbacks_backup = callbacks.default_callbacks.copy() # backup callbacks as check_amp() resets them
            self.amp = torch.tensor(check_amp(self.model), device=self.device)
            callbacks.default_callbacks = callbacks_backup # restore callbacks

@@ -14,6 +14,7 @@ except (ImportError, AssertionError):
def on_pretrain_routine_start(trainer):
    try:
        experiment = comet_ml.Experiment(project_name=trainer.args.project or 'YOLOv8')
        experiment.set_name(trainer.args.name)
        experiment.log_parameters(vars(trainer.args))
    except Exception as e:
        LOGGER.warning(f'WARNING ⚠️ Comet installed but not initialized correctly, not logging this run. {e}')

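For context, a hedged setup sketch: this callback only does anything when `comet_ml` is importable and credentials are configured, commonly via the standard Comet environment variable shown below (not defined in this diff):

```python
import os

# Comet reads its credentials from the environment (or a .comet.config file);
# COMET_API_KEY is the conventional variable name
os.environ['COMET_API_KEY'] = '<your-api-key>'  # placeholder

from ultralytics import YOLO
# The experiment is created in on_pretrain_routine_start, named after args.name,
# under the project given by args.project (or 'YOLOv8' by default)
YOLO('yolov8n.pt').train(data='coco128.yaml', epochs=1, project='my-project', name='exp1')
```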