ultralytics 8.0.104 bug fixes and thop dependency removal (#2665)

Co-authored-by: Kevin Abraham <5976139+abraha2d@users.noreply.github.com>
Co-authored-by: Kevin Abraham <abraha2d@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>

parent 7884098857
commit b1119d512e
@@ -25,7 +25,7 @@ ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt /u

 # Install pip packages manually for TensorRT compatibility https://github.com/NVIDIA/TensorRT/issues/2567
 RUN python3 -m pip install --upgrade pip wheel
-RUN pip install --no-cache tqdm matplotlib pyyaml psutil thop pandas onnx "numpy==1.23"
+RUN pip install --no-cache tqdm matplotlib pyyaml psutil pandas onnx "numpy==1.23"
 RUN pip install --no-cache -e .

 # Set environment variables
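A quick way to sanity-check the slimmed image is a hypothetical import smoke test; the module list mirrors the `RUN pip install` line above (note that pyyaml imports as `yaml`):

```python
# Hypothetical smoke test: run inside the built image, e.g.
#   docker run --rm <image> python3 smoke_test.py
import importlib

# Import names for the packages installed in the Dockerfile (thop intentionally absent)
for module in ('tqdm', 'matplotlib', 'yaml', 'psutil', 'pandas', 'onnx', 'numpy'):
    importlib.import_module(module)
    print(f'{module}: OK')
```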
@@ -9,14 +9,6 @@ The [VisDrone Dataset](https://github.com/VisDrone/VisDrone-Dataset) is a large-

 VisDrone is composed of 288 video clips with 261,908 frames and 10,209 static images, captured by various drone-mounted cameras. The dataset covers a wide range of aspects, including location (14 different cities across China), environment (urban and rural), objects (pedestrians, vehicles, bicycles, etc.), and density (sparse and crowded scenes). The dataset was collected using various drone platforms under different scenarios and weather and lighting conditions. These frames are manually annotated with over 2.6 million bounding boxes of targets such as pedestrians, cars, bicycles, and tricycles. Attributes like scene visibility, object class, and occlusion are also provided for better data utilization.

-The challenge mainly focuses on five tasks:
-
-1. **Task 1**: Object detection in images challenge - Detect objects of predefined categories (e.g., cars and pedestrians) from individual images taken from drones.
-2. **Task 2**: Object detection in videos challenge - Similar to Task 1, except that objects are required to be detected from videos.
-3. **Task 3**: Single-object tracking challenge - Estimate the state of a target, indicated in the first frame, in the subsequent video frames.
-4. **Task 4**: Multi-object tracking challenge - Recover the trajectories of objects in each video frame.
-5. **Task 5**: Crowd counting challenge - Count persons in each video frame.
-
 ## Citation

 If you use the VisDrone dataset in your research or development work, please cite the following paper:
@@ -28,7 +28,7 @@ The VOC dataset is widely used for training and evaluating deep learning models

 ## Dataset YAML

-A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the VOC dataset, the `VOC.yaml` file should be created and maintained.
+A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the VOC dataset, the `VOC.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/VOC.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/VOC.yaml).

 !!! example "ultralytics/datasets/VOC.yaml"

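Once `VOC.yaml` is in place, it plugs into the standard training entry point. A minimal sketch, mirroring the xView train example later in this diff:

```python
from ultralytics import YOLO

# Load a pretrained detection model and fine-tune it on VOC
model = YOLO('yolov8n.pt')
model.train(data='VOC.yaml', epochs=100, imgsz=640)
```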
@@ -1,7 +1,92 @@
 ---
 comments: true
 description: Discover the xView Dataset, a large-scale overhead imagery dataset for object detection tasks, featuring 1M instances, 60 classes, and high-resolution images.
 ---

-# 🚧 Page Under Construction ⚒
+# xView Dataset

-This page is currently under construction! 👷 Please check back later for updates. 😃🔜
+The [xView](http://xviewdataset.org/) dataset is one of the largest publicly available datasets of overhead imagery, containing images from complex scenes around the world annotated using bounding boxes. The goal of the xView dataset is to accelerate progress in four computer vision frontiers:
+
+1. Reduce minimum resolution for detection.
+2. Improve learning efficiency.
+3. Enable discovery of more object classes.
+4. Improve detection of fine-grained classes.
+
+xView builds on the success of challenges like Common Objects in Context (COCO) and aims to leverage computer vision to analyze the growing amount of available imagery from space in order to understand the visual world in new ways and address a range of important applications.
+
+## Key Features
+
+- xView contains over 1 million object instances across 60 classes.
+- The dataset has a resolution of 0.3 meters, providing higher resolution imagery than most public satellite imagery datasets.
+- xView features a diverse collection of small, rare, fine-grained, and multi-type objects with bounding box annotation.
+- Comes with a pre-trained baseline model using the TensorFlow object detection API and an example for PyTorch.
+
+## Dataset Structure
+
+The xView dataset is composed of satellite images collected from WorldView-3 satellites at a 0.3m ground sample distance. It contains over 1 million objects across 60 classes in over 1,400 km² of imagery.
+
+## Applications
+
+The xView dataset is widely used for training and evaluating deep learning models for object detection in overhead imagery. The dataset's diverse set of object classes and high-resolution imagery make it a valuable resource for researchers and practitioners in the field of computer vision, especially for satellite imagery analysis.
+
+## Dataset YAML
+
+A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the xView dataset, the `xView.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/xView.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/xView.yaml).
+
+!!! example "ultralytics/datasets/xView.yaml"
+
+    ```yaml
+    --8<-- "ultralytics/datasets/xView.yaml"
+    ```
+
+## Usage
+
+To train a model on the xView dataset for 100 epochs with an image size of 640, you can use the following code snippets. For a comprehensive list of available arguments, refer to the model [Training](../../modes/train.md) page.
+
+!!! example "Train Example"
+
+    === "Python"
+
+        ```python
+        from ultralytics import YOLO
+
+        # Load a model
+        model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
+
+        # Train the model
+        model.train(data='xView.yaml', epochs=100, imgsz=640)
+        ```
+
+    === "CLI"
+
+        ```bash
+        # Start training from a pretrained *.pt model
+        yolo detect train data=xView.yaml model=yolov8n.pt epochs=100 imgsz=640
+        ```
+
+## Sample Data and Annotations
+
+The xView dataset contains high-resolution satellite images with a diverse set of objects annotated using bounding boxes. Here are some examples of data from the dataset, along with their corresponding annotations:
+
+
+
+- **Overhead Imagery**: This image demonstrates an example of object detection in overhead imagery, where objects are annotated with bounding boxes. The dataset provides high-resolution satellite images to facilitate the development of models for this task.
+
+The example showcases the variety and complexity of the data in the xView dataset and highlights the importance of high-quality satellite imagery for object detection tasks.
+
+## Citations and Acknowledgments
+
+If you use the xView dataset in your research or development work, please cite the following paper:
+
+```bibtex
+@misc{lam2018xview,
+    title={xView: Objects in Context in Overhead Imagery},
+    author={Darius Lam and Richard Kuzma and Kevin McGee and Samuel Dooley and Michael Laielli and Matthew Klaric and Yaroslav Bulatov and Brendan McCord},
+    year={2018},
+    eprint={1802.07856},
+    archivePrefix={arXiv},
+    primaryClass={cs.CV}
+}
+```
+
+We would like to acknowledge the [Defense Innovation Unit](https://www.diu.mil/) (DIU) and the creators of the xView dataset for their valuable contribution to the computer vision research community. For more information about the xView dataset and its creators, visit the [xView dataset website](http://xviewdataset.org/).
@@ -35,7 +35,7 @@ seaborn>=0.11.0

 # Extras --------------------------------------
 psutil  # system utilization
-thop>=0.1.1  # FLOPs computation
+# thop>=0.1.1  # FLOPs computation
 # ipython  # interactive notebook
 # albumentations>=1.0.3
 # pycocotools>=2.0.6  # COCO mAP
@@ -1,6 +1,6 @@
 # Ultralytics YOLO 🚀, AGPL-3.0 license

-__version__ = '8.0.103'
+__version__ = '8.0.104'

 from ultralytics.hub import start
 from ultralytics.vit.rtdetr import RTDETR
@@ -4,6 +4,7 @@ import requests

 from ultralytics.hub.auth import Auth
 from ultralytics.hub.utils import PREFIX
+from ultralytics.yolo.data.utils import HUBDatasetStats
 from ultralytics.yolo.utils import LOGGER, SETTINGS, USER_CONFIG_DIR, yaml_save

@@ -90,5 +91,23 @@ def get_export(model_id='', format='torchscript'):
     return r.json()


+def check_dataset(path='', task='detect'):
+    """
+    Function for error-checking HUB dataset Zip file before upload
+
+    Arguments
+        path: Path to data.zip (with data.yaml inside data.zip)
+        task: Dataset task. Options are 'detect', 'segment', 'pose', 'classify'.
+
+    Usage
+        from ultralytics.hub import check_dataset
+        check_dataset('path/to/coco8.zip', task='detect')  # detect dataset
+        check_dataset('path/to/coco8-seg.zip', task='segment')  # segment dataset
+        check_dataset('path/to/coco8-pose.zip', task='pose')  # pose dataset
+    """
+    HUBDatasetStats(path=path, task=task).get_json()
+    LOGGER.info('Checks completed correctly ✅. Upload this dataset to https://hub.ultralytics.com/datasets/.')
+
+
 if __name__ == '__main__':
     start()
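The docstring's usage lines translate directly into a runnable pre-upload check (a minimal sketch; the zip path is a placeholder):

```python
from ultralytics.hub import check_dataset

# Validate a detection dataset zip (data.yaml inside data.zip) before HUB upload;
# raises on a malformed dataset, logs a success message otherwise
check_dataset('path/to/coco8.zip', task='detect')
```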
@@ -80,7 +80,7 @@ class AutoBackend(nn.Module):
         w = str(weights[0] if isinstance(weights, list) else weights)
         nn_module = isinstance(weights, torch.nn.Module)
         pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w)
-        fp16 &= pt or jit or onnx or engine or nn_module  # FP16
+        fp16 &= pt or jit or onnx or engine or nn_module or triton  # FP16
         nhwc = coreml or saved_model or pb or tflite or edgetpu  # BHWC formats (vs torch BCWH)
         stride = 32  # default stride
         model, metadata = None, None
@@ -4,7 +4,6 @@ import contextlib
 from copy import deepcopy
 from pathlib import Path

-import thop
 import torch
 import torch.nn as nn

@@ -18,6 +17,11 @@ from ultralytics.yolo.utils.plotting import feature_visualization
 from ultralytics.yolo.utils.torch_utils import (fuse_conv_and_bn, fuse_deconv_and_bn, initialize_weights,
                                                 intersect_dicts, make_divisible, model_info, scale_img, time_sync)

+try:
+    import thop
+except ImportError:
+    thop = None
+

 class BaseModel(nn.Module):
     """
@@ -324,6 +324,7 @@ class HUBDatasetStats():

     def __init__(self, path='coco128.yaml', task='detect', autodownload=False):
         """Initialize class."""
+        LOGGER.info(f'Starting HUB dataset checks for {path}....')
         zipped, data_dir, yaml_path = self._unzip(Path(path))
         try:
             # data = yaml_load(check_yaml(yaml_path))  # data dict
@@ -9,7 +9,7 @@ import os
 import subprocess
 import time
 from copy import deepcopy
-from datetime import datetime
+from datetime import datetime, timedelta
 from pathlib import Path

 import numpy as np
@@ -181,8 +181,6 @@
             # Command
             cmd, file = generate_ddp_command(world_size, self)
             try:
-                LOGGER.info('Pre-caching dataset to avoid NCCL timeout before running DDP command')
-                deepcopy(self)._setup_train(world_size=0)
                 LOGGER.info(f'Running DDP command {cmd}')
                 subprocess.run(cmd, check=True)
             except Exception as e:
@@ -197,7 +195,11 @@
         torch.cuda.set_device(RANK)
         self.device = torch.device('cuda', RANK)
         LOGGER.info(f'DDP settings: RANK {RANK}, WORLD_SIZE {world_size}, DEVICE {self.device}')
-        dist.init_process_group('nccl' if dist.is_nccl_available() else 'gloo', rank=RANK, world_size=world_size)
+        os.environ['NCCL_BLOCKING_WAIT'] = '1'  # set to enforce timeout
+        dist.init_process_group('nccl' if dist.is_nccl_available() else 'gloo',
+                                timeout=timedelta(seconds=3600),
+                                rank=RANK,
+                                world_size=world_size)

     def _setup_train(self, world_size):
         """
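Taken together, these trainer changes replace the dataset pre-caching workaround with a blocking NCCL wait plus an explicit one-hour timeout on process-group init, so a stuck rank fails fast instead of hanging. A standalone sketch of the same pattern, assuming RANK and WORLD_SIZE are set by the launcher (as torchrun does):

```python
import os
from datetime import timedelta

import torch.distributed as dist

RANK = int(os.environ.get('RANK', 0))
world_size = int(os.environ.get('WORLD_SIZE', 1))

os.environ['NCCL_BLOCKING_WAIT'] = '1'  # enforce the timeout instead of hanging indefinitely
dist.init_process_group('nccl' if dist.is_nccl_available() else 'gloo',
                        timeout=timedelta(seconds=3600),  # abort init if a rank never joins
                        rank=RANK,
                        world_size=world_size)
```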
@@ -11,7 +11,6 @@ from pathlib import Path
 from typing import Union

 import numpy as np
-import thop
 import torch
 import torch.distributed as dist
 import torch.nn as nn
@@ -21,6 +20,11 @@ import torchvision
 from ultralytics.yolo.utils import DEFAULT_CFG_DICT, DEFAULT_CFG_KEYS, LOGGER, RANK, __version__
 from ultralytics.yolo.utils.checks import check_version

+try:
+    import thop
+except ImportError:
+    thop = None
+
 TORCHVISION_0_10 = check_version(torchvision.__version__, '0.10.0')
 TORCH_1_9 = check_version(torch.__version__, '1.9.0')
 TORCH_1_11 = check_version(torch.__version__, '1.11.0')
@@ -193,7 +197,7 @@ def get_flops(model, imgsz=640):
     p = next(model.parameters())
     stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32  # max stride
     im = torch.empty((1, p.shape[1], stride, stride), device=p.device)  # input image in BCHW format
-    flops = thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1E9 * 2  # stride GFLOPs
+    flops = thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1E9 * 2 if thop else 0  # stride GFLOPs
     imgsz = imgsz if isinstance(imgsz, list) else [imgsz, imgsz]  # expand if int/float
     flops = flops * imgsz[0] / stride * imgsz[1] / stride  # 640x640 GFLOPs
     return flops
@@ -378,7 +382,7 @@ def profile(input, ops, n=10, device=None):
             m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m
             tf, tb, t = 0, 0, [0, 0, 0]  # dt forward, backward
             try:
-                flops = thop.profile(m, inputs=[x], verbose=False)[0] / 1E9 * 2  # GFLOPs
+                flops = thop.profile(m, inputs=[x], verbose=False)[0] / 1E9 * 2 if thop else 0  # GFLOPs
             except Exception:
                 flops = 0
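With thop now optional everywhere, the guarded-import idiom above degrades FLOPs reporting to zero instead of raising ImportError. A self-contained sketch of the pattern (the Conv2d model and input shape are illustrative):

```python
from copy import deepcopy

import torch
import torch.nn as nn

try:
    import thop  # optional dependency for FLOPs computation
except ImportError:
    thop = None

model = nn.Conv2d(3, 16, 3)  # any nn.Module works here
im = torch.empty(1, 3, 32, 32)

# GFLOPs if thop is installed, 0 otherwise; callers never see an ImportError
flops = thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1E9 * 2 if thop else 0
print(f'{flops:.3f} GFLOPs')
```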
@@ -19,6 +19,8 @@ class ClassificationTrainer(BaseTrainer):
         if overrides is None:
             overrides = {}
         overrides['task'] = 'classify'
+        if overrides.get('imgsz') is None:
+            overrides['imgsz'] = 224
         super().__init__(cfg, overrides, _callbacks)

     def set_model_attributes(self):
@@ -40,10 +42,6 @@ class ClassificationTrainer(BaseTrainer):
         for p in model.parameters():
             p.requires_grad = True  # for training

-        # Update defaults
-        if self.args.imgsz == 640:
-            self.args.imgsz = 224
-
         return model

     def setup_model(self):
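With the default moved from `get_model` into `__init__`, classification runs fall back to 224-pixel images unless the caller overrides them, and an explicit 640 is no longer silently rewritten. A usage sketch (mnist160 is a small Ultralytics demo classification set):

```python
from ultralytics import YOLO

# Train a classifier; imgsz falls back to 224 via ClassificationTrainer unless set explicitly
model = YOLO('yolov8n-cls.pt')
model.train(data='mnist160', epochs=3)             # uses the 224 default
model.train(data='mnist160', epochs=3, imgsz=320)  # explicit override is respected
```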