Mirror of https://github.com/THU-MIG/yolov10.git, synced 2025-05-23 05:24:22 +08:00

ultralytics 8.0.47 Docker and reformat updates (#1153)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

This commit is contained in:
parent d4be4cb24b
commit a58f766f94
.github/workflows/ci.yaml (4 changes, vendored)
@@ -38,7 +38,7 @@ jobs:
           if [ "${{ matrix.os }}" == "macos-latest" ]; then
             pip install -e . coremltools openvino-dev tensorflow-macos --extra-index-url https://download.pytorch.org/whl/cpu
           else
-            pip install -e . coremltools openvino-dev tensorflow-cpu paddlepaddle x2paddle --extra-index-url https://download.pytorch.org/whl/cpu
+            pip install -e . coremltools openvino-dev tensorflow-cpu --extra-index-url https://download.pytorch.org/whl/cpu
           fi
           yolo export format=tflite
       - name: Check environment
@@ -66,7 +66,7 @@ jobs:
         shell: python
         run: |
           from ultralytics.yolo.utils.benchmarks import benchmark
-          benchmark(model='${{ matrix.model }}-cls.pt', imgsz=160, half=False, hard_fail=0.70)
+          benchmark(model='${{ matrix.model }}-cls.pt', imgsz=160, half=False, hard_fail=0.60)
       - name: Benchmark Summary
         run: cat benchmarks.log
@@ -223,10 +223,11 @@ Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases) automatically...
 
 ## <div align="center">License</div>
 
-YOLOv8 is available under two different licenses:
-- **GPL-3.0 License**: see the [License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
-- **Enterprise License**: provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI
-  models into commercial products and applications. Apply for an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).
+YOLOv8 is available under two different licenses:
+
+- **GPL-3.0 License**: see the [License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
+- **Enterprise License**: provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI
+  models into commercial products and applications. Apply for an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).
 
 ## <div align="center">Contact Us</div>
@@ -29,9 +29,8 @@ WORKDIR /usr/src/ultralytics
 RUN git clone https://github.com/ultralytics/ultralytics /usr/src/ultralytics
 
 # Install pip packages
 COPY requirements.txt .
 RUN python3 -m pip install --upgrade pip wheel
-RUN pip install --no-cache ultralytics[export] albumentations comet gsutil notebook
+RUN pip install --no-cache '.[export]' albumentations comet gsutil notebook
 
 # Set environment variables
 ENV OMP_NUM_THREADS=1
@@ -24,9 +24,8 @@ WORKDIR /usr/src/ultralytics
 RUN git clone https://github.com/ultralytics/ultralytics /usr/src/ultralytics
 
 # Install pip packages
 COPY requirements.txt .
 RUN python3 -m pip install --upgrade pip wheel
-RUN pip install --no-cache ultralytics albumentations gsutil notebook
+RUN pip install --no-cache . albumentations gsutil notebook
 
 # Cleanup
 ENV DEBIAN_FRONTEND teletype
@@ -24,9 +24,8 @@ WORKDIR /usr/src/ultralytics
 RUN git clone https://github.com/ultralytics/ultralytics /usr/src/ultralytics
 
 # Install pip packages
 COPY requirements.txt .
 RUN python3 -m pip install --upgrade pip wheel
-RUN pip install --no-cache ultralytics[export] albumentations gsutil notebook \
+RUN pip install --no-cache '.[export]' albumentations gsutil notebook \
     --extra-index-url https://download.pytorch.org/whl/cpu
 
 # Cleanup
@@ -97,8 +97,8 @@ Class reference documentation for `Results` module and its components can be found
 
 ## Plotting results
 
-You can use `plot()` function of `Result` object to plot results on in image object. It plots all components(boxes, masks,
-classification logits, etc) found in the results object
+You can use `plot()` function of `Result` object to plot results on in image object. It plots all components(boxes,
+masks, classification logits, etc) found in the results object
 
 ```python
 res = model(img)
@@ -42,11 +42,15 @@ Use a trained YOLOv8n/YOLOv8n-seg model to run tracker on video streams.
         ```
 
-As in the above usage, we support both the detection and segmentation models for tracking and the only thing you need to do is loading the corresponding(detection or segmentation) model.
+As in the above usage, we support both the detection and segmentation models for tracking and the only thing you need to
+do is loading the corresponding (detection or segmentation) model.
 
 ## Configuration
 
 ### Tracking
 
-Tracking shares the configuration with predict, i.e `conf`, `iou`, `show`. More configurations please refer to [predict page](https://docs.ultralytics.com/cfg/#prediction).
+Tracking shares the configuration with predict, i.e `conf`, `iou`, `show`. More configurations please refer
+to [predict page](https://docs.ultralytics.com/cfg/#prediction).
 
 !!! example ""
 
     === "Python"
@@ -65,7 +69,10 @@ Tracking shares the configuration with predict, i.e `conf`, `iou`, `show`. More
         ```
 
 ### Tracker
 
-We also support using a modified tracker config file, just copy a config file i.e `custom_tracker.yaml` from [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg) and modify any configurations(expect the `tracker_type`) you need to.
+We also support using a modified tracker config file, just copy a config file i.e `custom_tracker.yaml`
+from [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg) and modify
+any configurations(expect the `tracker_type`) you need to.
 
 !!! example ""
 
     === "Python"
@@ -82,5 +89,7 @@ We also support using a modified tracker config file, just copy a config file i.
         yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" tracker='custom_tracker.yaml'
         ```
 
-Please refer to [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg) page.
+Please refer to [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg)
+page.
@@ -37,7 +37,6 @@ seaborn>=0.11.0
 
 # Extras --------------------------------------
 psutil  # system utilization
 thop>=0.1.1  # FLOPs computation
-wheel>=0.38.0  # Snyk vulnerability fix
 # ipython  # interactive notebook
 # albumentations>=1.0.3
 # pycocotools>=2.0.6  # COCO mAP
setup.cfg (12 changes)
@@ -25,17 +25,19 @@ verbose = 2
 # https://pep8.readthedocs.io/en/latest/intro.html#error-codes
 format = pylint
 # see: https://www.flake8rules.com/
-ignore = E731,F405,E402,F401,W504,E127,E231,E501,F403
+ignore = E731,F405,E402,W504,E501
 # E731: Do not assign a lambda expression, use a def
 # F405: name may be undefined, or defined from star imports: module
 # E402: module level import not at top of file
-# F401: module imported but unused
 # W504: line break after binary operator
-# E127: continuation line over-indented for visual indent
-# E231: missing whitespace after ‘,’, ‘;’, or ‘:’
 # E501: line too long
+# removed:
+# F401: module imported but unused
+# E231: missing whitespace after ‘,’, ‘;’, or ‘:’
+# E127: continuation line over-indented for visual indent
+# F403: ‘from module import *’ used; unable to detect undefined names
 
 [isort]
 # https://pycqa.github.io/isort/docs/configuration/options.html
 line_length = 120
@@ -48,7 +50,7 @@ spaces_before_comment = 2
 COLUMN_LIMIT = 120
 COALESCE_BRACKETS = True
 SPACES_AROUND_POWER_OPERATOR = True
-SPACE_BETWEEN_ENDING_COMMA_AND_CLOSING_BRACKET = False
+SPACE_BETWEEN_ENDING_COMMA_AND_CLOSING_BRACKET = True
 SPLIT_BEFORE_CLOSING_BRACKET = False
 SPLIT_BEFORE_FIRST_ARGUMENT = False
 # EACH_DICT_ENTRY_ON_SEPARATE_LINE = False
setup.py (2 changes)
@@ -59,7 +59,7 @@ setup(
         'Topic :: Scientific/Engineering :: Image Recognition',
         'Operating System :: POSIX :: Linux',
         'Operating System :: MacOS',
-        'Operating System :: Microsoft :: Windows',],
+        'Operating System :: Microsoft :: Windows', ],
     keywords='machine-learning, deep-learning, vision, ML, DL, AI, YOLO, YOLOv3, YOLOv5, YOLOv8, HUB, Ultralytics',
     entry_points={
         'console_scripts': ['yolo = ultralytics.yolo.cfg:entrypoint', 'ultralytics = ultralytics.yolo.cfg:entrypoint']})
@@ -22,7 +22,7 @@ def test_special_modes():
 
 # Train checks ---------------------------------------------------------------------------------------------------------
 def test_train_det():
-    run(f'yolo train detect model={CFG}.yaml data=coco8.yaml imgsz=32 epochs=1')
+    run(f'yolo train detect model={CFG}.yaml data=coco8.yaml imgsz=32 epochs=1 v5loader')
 
 
 def test_train_seg():
@@ -48,7 +48,7 @@ def test_val_classify():
 
 # Predict checks -------------------------------------------------------------------------------------------------------
 def test_predict_detect():
-    run(f"yolo predict model={MODEL}.pt source={ROOT / 'assets'} imgsz=32 save")
+    run(f"yolo predict model={MODEL}.pt source={ROOT / 'assets'} imgsz=32 save save_crop save_txt")
     if checks.check_online():
         run(f'yolo predict model={MODEL}.pt source=https://ultralytics.com/images/bus.jpg imgsz=32')
         run(f'yolo predict model={MODEL}.pt source=https://ultralytics.com/assets/decelera_landscape_min.mov imgsz=32')
@@ -162,9 +162,8 @@ def test_workflow():
 
 
 def test_predict_callback_and_setup():
-
-    def on_predict_batch_end(predictor):
-        # results -> List[batch_size]
+    # test callback addition for prediction
+    def on_predict_batch_end(predictor):  # results -> List[batch_size]
         path, _, im0s, _, _ = predictor.batch
         # print('on_predict_batch_end', im0s[0].shape)
         im0s = im0s if isinstance(im0s, list) else [im0s]
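A callback like the one exercised in this test attaches through the model's callback registry before prediction runs. A minimal sketch (the checkpoint name and image URL are assumptions, not part of the diff):

```python
from ultralytics import YOLO

def on_predict_batch_end(predictor):  # results -> List[batch_size]
    path, _, im0s, _, _ = predictor.batch  # unpack the batch the predictor just processed
    im0s = im0s if isinstance(im0s, list) else [im0s]
    print(f'processed {len(im0s)} image(s)')

model = YOLO('yolov8n.pt')  # assumed model checkpoint
model.add_callback('on_predict_batch_end', on_predict_batch_end)
results = model('https://ultralytics.com/images/bus.jpg')  # callback fires after each batch
```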
@@ -1,8 +1,8 @@
 # Ultralytics YOLO 🚀, GPL-3.0 license
 
-__version__ = '8.0.46'
+__version__ = '8.0.47'
 
 from ultralytics.yolo.engine.model import YOLO
 from ultralytics.yolo.utils.checks import check_yolo as checks
 
-__all__ = ['__version__', 'YOLO', 'checks']  # allow simpler import
+__all__ = '__version__', 'YOLO', 'checks'  # allow simpler import
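The `__all__` switch from list to bare tuple is purely stylistic: Python only requires `__all__` to be a sequence of strings, so `from ultralytics import *` exports the same names either way. A quick sketch:

```python
# Both spellings define the same star-import surface for a module
__all__ = ['__version__', 'YOLO', 'checks']  # list form (old)
__all__ = '__version__', 'YOLO', 'checks'    # bare tuple form (new); parentheses are optional

assert list(__all__) == ['__version__', 'YOLO', 'checks']  # contents are identical
```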
@@ -154,11 +154,12 @@ class Traces:
             'python': platform.python_version(),
             'release': __version__,
             'environment': ENVIRONMENT}
-        self.enabled = SETTINGS['sync'] and \
-                       RANK in {-1, 0} and \
-                       check_online() and \
-                       not TESTS_RUNNING and \
-                       (is_pip_package() or get_git_origin_url() == 'https://github.com/ultralytics/ultralytics.git')
+        self.enabled = \
+            SETTINGS['sync'] and \
+            RANK in {-1, 0} and \
+            check_online() and \
+            not TESTS_RUNNING and \
+            (is_pip_package() or get_git_origin_url() == 'https://github.com/ultralytics/ultralytics.git')
 
     def __call__(self, cfg, all_keys=False, traces_sample_rate=1.0):
         """
@@ -136,7 +136,7 @@ class AutoBackend(nn.Module):
                 batch_dim = get_batch(network)
                 if batch_dim.is_static:
                     batch_size = batch_dim.get_length()
-            executable_network = ie.compile_model(network, device_name='CPU')  # device_name="MYRIAD" for Intel NCS2
+            executable_network = ie.compile_model(network, device_name='CPU')  # device_name="MYRIAD" for NCS2
         elif engine:  # TensorRT
             LOGGER.info(f'Loading {w} for TensorRT inference...')
             import tensorrt as trt  # https://developer.nvidia.com/nvidia-tensorrt-download
@@ -176,6 +176,8 @@ class AutoBackend(nn.Module):
             LOGGER.info(f'Loading {w} for CoreML inference...')
             import coremltools as ct
             model = ct.models.MLModel(w)
+            names, stride, task = (model.user_defined_metadata.get(k) for k in ('names', 'stride', 'task'))
+            names, stride = eval(names), int(stride)
         elif saved_model:  # TF SavedModel
             LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...')
             import tensorflow as tf
@@ -185,18 +187,13 @@ class AutoBackend(nn.Module):
             LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...')
             import tensorflow as tf
 
+            from ultralytics.yolo.engine.exporter import gd_outputs
+
             def wrap_frozen_graph(gd, inputs, outputs):
                 x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=''), [])  # wrapped
                 ge = x.graph.as_graph_element
                 return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs))
 
-            def gd_outputs(gd):
-                name_list, input_list = [], []
-                for node in gd.node:  # tensorflow.core.framework.node_def_pb2.NodeDef
-                    name_list.append(node.name)
-                    input_list.extend(node.input)
-                return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp'))
-
             gd = tf.Graph().as_graph_def()  # TF GraphDef
             with open(w, 'rb') as f:
                 gd.ParseFromString(f.read())
@@ -319,10 +316,17 @@ class AutoBackend(nn.Module):
             self.context.execute_v2(list(self.binding_addrs.values()))
             y = [self.bindings[x].data for x in sorted(self.output_names)]
         elif self.coreml:  # CoreML
-            im = im.cpu().numpy()
-            im = Image.fromarray((im[0] * 255).astype('uint8'))
+            im = im[0].cpu().numpy()
+            if self.task == 'classify':
+                from ultralytics.yolo.data.utils import IMAGENET_MEAN, IMAGENET_STD
+
+                # im_pil = Image.fromarray(((im / 6 + 0.5) * 255).astype('uint8'))
+                for i in range(3):
+                    im[..., i] *= IMAGENET_STD[i]
+                    im[..., i] += IMAGENET_MEAN[i]
+            im_pil = Image.fromarray((im * 255).astype('uint8'))
             # im = im.resize((192, 320), Image.ANTIALIAS)
-            y = self.model.predict({'image': im})  # coordinates are xywh normalized
+            y = self.model.predict({'image': im_pil})  # coordinates are xywh normalized
             if 'confidence' in y:
                 box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]])  # xyxy pixels
                 conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float)
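The new classify branch above undoes the ImageNet normalization applied by the classification preprocessing, because the CoreML model's image input expects plain pixel values. A minimal numpy sketch of that inversion, using the standard ImageNet statistics (the array here is a stand-in, not real model input):

```python
import numpy as np

IMAGENET_MEAN = 0.485, 0.456, 0.406  # RGB mean used during normalization
IMAGENET_STD = 0.229, 0.224, 0.225   # RGB std used during normalization

im = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in for a normalized HWC image
for i in range(3):
    im[..., i] *= IMAGENET_STD[i]   # undo the divide-by-std step
    im[..., i] += IMAGENET_MEAN[i]  # undo the mean subtraction
pixels = (im * 255).astype('uint8')  # 0-255 values suitable for Image.fromarray
```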
@@ -11,7 +11,7 @@ import torch.nn as nn
 from ultralytics.nn.modules import (C1, C2, C3, C3TR, SPP, SPPF, Bottleneck, BottleneckCSP, C2f, C3Ghost, C3x, Classify,
                                     Concat, Conv, ConvTranspose, Detect, DWConv, DWConvTranspose2d, Ensemble, Focus,
                                     GhostBottleneck, GhostConv, Segment)
-from ultralytics.yolo.utils import DEFAULT_CFG_DICT, DEFAULT_CFG_KEYS, LOGGER, RANK, colorstr, yaml_load
+from ultralytics.yolo.utils import DEFAULT_CFG_DICT, DEFAULT_CFG_KEYS, LOGGER, RANK, colorstr, emojis, yaml_load
 from ultralytics.yolo.utils.checks import check_requirements, check_yaml
 from ultralytics.yolo.utils.torch_utils import (fuse_conv_and_bn, fuse_deconv_and_bn, initialize_weights,
                                                 intersect_dicts, make_divisible, model_info, scale_img, time_sync)
@@ -76,7 +76,7 @@ class BaseModel(nn.Module):
             None
         """
         c = m == self.model[-1]  # is final layer, copy input as inplace fix
-        o = thop.profile(m, inputs=(x.clone() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0  # FLOPs
+        o = thop.profile(m, inputs=[x.clone() if c else x], verbose=False)[0] / 1E9 * 2 if thop else 0  # FLOPs
         t = time_sync()
         for _ in range(10):
             m(x.clone() if c else x)
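The `inputs=(x,)` to `inputs=[x]` change here (and the matching ones in `torch_utils.py` further down) is behavior-neutral: `thop.profile` simply unpacks `inputs` into the forward call, so a one-element list and a one-element tuple give the same count. A sketch, assuming `thop` is installed:

```python
import torch
import torch.nn as nn
import thop  # pip install thop

m = nn.Conv2d(3, 16, 3, padding=1)
x = torch.zeros(1, 3, 64, 64)
flops_tuple = thop.profile(m, inputs=(x,), verbose=False)[0]  # tuple form
flops_list = thop.profile(m, inputs=[x], verbose=False)[0]    # list form
assert flops_tuple == flops_list  # identical counts either way
```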
@@ -339,14 +339,20 @@ def torch_safe_load(weight):
     file = attempt_download_asset(weight)  # search online if missing locally
     try:
         return torch.load(file, map_location='cpu'), file  # load
-    except ModuleNotFoundError as e:
-        if e.name == 'omegaconf':  # e.name is missing module name
-            LOGGER.warning(f'WARNING ⚠️ {weight} requires {e.name}, which is not in ultralytics requirements.'
-                           f'\nAutoInstall will run now for {e.name} but this feature will be removed in the future.'
-                           f'\nRecommend fixes are to train a new model using updated ultralytics package or to '
-                           f'download updated models from https://github.com/ultralytics/assets/releases/tag/v0.0.0')
-        if e.name != 'models':
-            check_requirements(e.name)  # install missing module
+    except ModuleNotFoundError as e:  # e.name is missing module name
+        if e.name == 'models':
+            raise TypeError(
+                emojis(f'ERROR ❌️ {weight} appears to be an Ultralytics YOLOv5 model originally trained '
+                       f'with https://github.com/ultralytics/yolov5.\nThis model is NOT forwards compatible with '
+                       f'YOLOv8 at https://github.com/ultralytics/ultralytics.'
+                       f"\nRecommend fixes are to train a new model using the latest 'ultralytics' package or to "
+                       f"run a command with an official YOLOv8 model, i.e. 'yolo predict model=yolov8n.pt'")) from e
+        LOGGER.warning(f"WARNING ⚠️ {weight} appears to require '{e.name}', which is not in ultralytics requirements."
+                       f"\nAutoInstall will run now for '{e.name}' but this feature will be removed in the future."
+                       f"\nRecommend fixes are to train a new model using the latest 'ultralytics' package or to "
+                       f"run a command with an official YOLOv8 model, i.e. 'yolo predict model=yolov8n.pt'")
+        check_requirements(e.name)  # install missing module
 
         return torch.load(file, map_location='cpu'), file  # load
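The rewritten handler separates two failure modes: a checkpoint whose pickle references the YOLOv5-era `models` package fails fast with a `TypeError`, while any other missing module triggers a one-shot AutoInstall and retry. A minimal, library-independent sketch of that pattern (names here are illustrative):

```python
import importlib
import subprocess
import sys

def load_with_autoinstall(load_fn):
    try:
        return load_fn()
    except ModuleNotFoundError as e:  # e.name is the missing module name
        if e.name == 'models':  # YOLOv5 artifact: not forwards compatible, do not try to fix
            raise TypeError('checkpoint appears to be a YOLOv5 model') from e
        subprocess.run([sys.executable, '-m', 'pip', 'install', e.name], check=True)  # AutoInstall
        importlib.invalidate_caches()
        return load_fn()  # retry once now that the module is available
```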
@@ -437,22 +443,21 @@ def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)
                 args[j] = eval(a) if isinstance(a, str) else a  # eval strings
 
         n = n_ = max(round(n * gd), 1) if n > 1 else n  # depth gain
-        if m in {
-                Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
-                BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x}:
+        if m in (Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
+                 BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x):
             c1, c2 = ch[f], args[0]
             if c2 != nc:  # if c2 not equal to number of classes (i.e. for Classify() output)
                 c2 = make_divisible(c2 * gw, 8)
 
             args = [c1, c2, *args[1:]]
-            if m in {BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, C3x}:
+            if m in (BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, C3x):
                 args.insert(2, n)  # number of repeats
                 n = 1
         elif m is nn.BatchNorm2d:
             args = [ch[f]]
         elif m is Concat:
             c2 = sum(ch[x] for x in f)
-        elif m in {Detect, Segment}:
+        elif m in (Detect, Segment):
             args.append([ch[x] for x in f])
             if m is Segment:
                 args[2] = make_divisible(args[2] * gw, 8)
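The set-literal to tuple swaps in `parse_model` are cosmetic: for a handful of hashable items, `in {...}` (hash lookup) and `in (...)` (linear scan) return the same answer. For example:

```python
import torch.nn as nn

m = nn.BatchNorm2d
assert (m in {nn.BatchNorm2d, nn.Conv2d}) == (m in (nn.BatchNorm2d, nn.Conv2d))  # True either way
```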
@@ -490,11 +495,11 @@ def guess_model_task(model):
     def cfg2task(cfg):
         # Guess from YAML dictionary
         m = cfg['head'][-1][-2].lower()  # output module name
-        if m in ['classify', 'classifier', 'cls', 'fc']:
+        if m in ('classify', 'classifier', 'cls', 'fc'):
             return 'classify'
-        if m in ['detect']:
+        if m == 'detect':
             return 'detect'
-        if m in ['segment']:
+        if m == 'segment':
             return 'segment'
 
     # Guess from model cfg
@@ -2,3 +2,5 @@
 
 from .track import register_tracker
 from .trackers import BOTSORT, BYTETracker
+
+__all__ = 'register_tracker', 'BOTSORT', 'BYTETracker'  # allow simpler import
@@ -2,3 +2,5 @@
 
 from .bot_sort import BOTSORT
 from .byte_tracker import BYTETracker
+
+__all__ = 'BOTSORT', 'BYTETracker'  # allow simpler import
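With the `__all__` additions, both tracker classes re-export cleanly from the subpackage, which is exactly what the comment promises:

```python
from ultralytics.tracker import BOTSORT, BYTETracker  # enabled by the new __all__

print(BOTSORT.__name__, BYTETracker.__name__)
```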
@@ -213,7 +213,7 @@ class GMC:
                 prev_pt = np.array(self.prevKeyPoints[m.queryIdx].pt, dtype=np.int_)
                 curr_pt = np.array(keypoints[m.trainIdx].pt, dtype=np.int_)
                 curr_pt[0] += W
-                color = np.random.randint(0, 255, (3,))
+                color = np.random.randint(0, 255, 3)
                 color = (int(color[0]), int(color[1]), int(color[2]))
 
                 matches_img = cv2.line(matches_img, prev_pt, curr_pt, tuple(color), 1, cv2.LINE_AA)
@@ -2,4 +2,4 @@
 
 from . import v8
 
-__all__ = ['v8']
+__all__ = 'v8',  # tuple or list
@@ -269,6 +269,11 @@ def entrypoint(debug=''):
         checks.check_yolo()
         return
 
+    # Task
+    task = overrides.get('task')
+    if task and task not in TASKS:
+        raise ValueError(f"Invalid 'task={task}'. Valid tasks are {TASKS}.\n{CLI_HELP_MSG}")
+
     # Model
     model = overrides.pop('model', DEFAULT_CFG.model)
     if model is None:
@@ -276,15 +281,11 @@ def entrypoint(debug=''):
         LOGGER.warning(f"WARNING ⚠️ 'model' is missing. Using default 'model={model}'.")
     from ultralytics.yolo.engine.model import YOLO
     overrides['model'] = model
-    model = YOLO(model)
+    model = YOLO(model, task=task)
 
-    # Task
-    task = overrides.get('task', model.task)
-    if task is not None:
-        if task not in TASKS:
-            raise ValueError(f"Invalid 'task={task}'. Valid tasks are {TASKS}.\n{CLI_HELP_MSG}")
-        else:
-            model.task = task
+    # Task Update
+    task = task or model.task
+    overrides['task'] = task
 
     # Mode
     if mode in {'predict', 'track'} and 'source' not in overrides:
@@ -5,12 +5,5 @@ from .build import build_classification_dataloader, build_dataloader, load_inference_source
 from .dataset import ClassificationDataset, SemanticDataset, YOLODataset
 from .dataset_wrappers import MixAndRectDataset
 
-__all__ = [
-    'BaseDataset',
-    'ClassificationDataset',
-    'MixAndRectDataset',
-    'SemanticDataset',
-    'YOLODataset',
-    'build_classification_dataloader',
-    'build_dataloader',
-    'load_inference_source',]
+__all__ = ('BaseDataset', 'ClassificationDataset', 'MixAndRectDataset', 'SemanticDataset', 'YOLODataset',
+           'build_classification_dataloader', 'build_dataloader', 'load_inference_source')
@@ -564,7 +564,7 @@ class Albumentations:
                 A.CLAHE(p=0.01),
                 A.RandomBrightnessContrast(p=0.0),
                 A.RandomGamma(p=0.0),
-                A.ImageCompression(quality_lower=75, p=0.0),]  # transforms
+                A.ImageCompression(quality_lower=75, p=0.0)]  # transforms
             self.transform = A.Compose(T, bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))
 
             LOGGER.info(prefix + ', '.join(f'{x}'.replace('always_apply=False, ', '') for x in T if x.p))
@@ -671,14 +671,14 @@ def v8_transforms(dataset, imgsz, hyp):
             shear=hyp.shear,
             perspective=hyp.perspective,
             pre_transform=LetterBox(new_shape=(imgsz, imgsz)),
-        ),])
+        )])
     return Compose([
         pre_transform,
         MixUp(dataset, pre_transform=pre_transform, p=hyp.mixup),
         Albumentations(p=1.0),
         RandomHSV(hgain=hyp.hsv_h, sgain=hyp.hsv_s, vgain=hyp.hsv_v),
         RandomFlip(direction='vertical', p=hyp.flipud),
-        RandomFlip(direction='horizontal', p=hyp.fliplr),])  # transforms
+        RandomFlip(direction='horizontal', p=hyp.fliplr)])  # transforms
 
 
 # Classification augmentations -----------------------------------------------------------------------------------------
@@ -719,8 +719,8 @@ def classify_albumentations(
         if vflip > 0:
             T += [A.VerticalFlip(p=vflip)]
         if jitter > 0:
-            color_jitter = (float(jitter),) * 3  # repeat value for brightness, contrast, saturation, 0 hue
-            T += [A.ColorJitter(*color_jitter, 0)]
+            jitter = float(jitter)
+            T += [A.ColorJitter(jitter, jitter, jitter, 0)]  # brightness, contrast, saturation, 0 hue
     else:  # Use fixed crop for eval set (reproducibility)
         T = [A.SmallestMaxSize(max_size=size), A.CenterCrop(height=size, width=size)]
     T += [A.Normalize(mean=mean, std=std), ToTensorV2()]  # Normalize and convert to Tensor
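The jitter rewrite trades a tuple-splat for spelled-out arguments; both pass the identical positional values to `A.ColorJitter`. A quick check of the argument equivalence (treat the `albumentations` import as an assumption about the environment):

```python
import albumentations as A

jitter = 0.4
args_old = (*((float(jitter),) * 3), 0)  # repeated tuple, splatted: (0.4, 0.4, 0.4, 0)
args_new = (jitter, jitter, jitter, 0)   # brightness, contrast, saturation, 0 hue
assert args_old == args_new              # same arguments either way
t = A.ColorJitter(*args_new)             # same transform under both spellings
```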
@@ -24,20 +24,18 @@ class BaseDataset(Dataset):
         label_path (str): label path, this can also be an ann_file or other custom label path.
     """
 
-    def __init__(
-        self,
-        img_path,
-        imgsz=640,
-        cache=False,
-        augment=True,
-        hyp=None,
-        prefix='',
-        rect=False,
-        batch_size=None,
-        stride=32,
-        pad=0.5,
-        single_cls=False,
-    ):
+    def __init__(self,
+                 img_path,
+                 imgsz=640,
+                 cache=False,
+                 augment=True,
+                 hyp=None,
+                 prefix='',
+                 rect=False,
+                 batch_size=None,
+                 stride=32,
+                 pad=0.5,
+                 single_cls=False):
         super().__init__()
         self.img_path = img_path
         self.imgsz = imgsz
@@ -335,8 +335,8 @@ def classify_albumentations(
         if vflip > 0:
             T += [A.VerticalFlip(p=vflip)]
         if jitter > 0:
-            color_jitter = (float(jitter),) * 3  # repeat value for brightness, contrast, satuaration, 0 hue
-            T += [A.ColorJitter(*color_jitter, 0)]
+            jitter = float(jitter)
+            T += [A.ColorJitter(jitter, jitter, jitter, 0)]  # brightness, contrast, satuaration, 0 hue
     else:  # Use fixed crop for eval set (reproducibility)
         T = [A.SmallestMaxSize(max_size=size), A.CenterCrop(height=size, width=size)]
     T += [A.Normalize(mean=mean, std=std), ToTensorV2()]  # Normalize and convert to Tensor
@@ -4,13 +4,16 @@ from itertools import repeat
 from multiprocessing.pool import ThreadPool
 from pathlib import Path
 
 import cv2
 import numpy as np
 import torch
 import torchvision
 from tqdm import tqdm
 
 from ..utils import NUM_THREADS, TQDM_BAR_FORMAT, is_dir_writeable
-from .augment import *
+from .augment import Compose, Format, Instances, LetterBox, classify_albumentations, classify_transforms, v8_transforms
 from .base import BaseDataset
-from .utils import HELP_URL, LOCAL_RANK, get_hash, img2label_paths, verify_image_label
+from .utils import HELP_URL, LOCAL_RANK, LOGGER, get_hash, img2label_paths, verify_image_label
 
 
 class YOLODataset(BaseDataset):
@@ -50,7 +50,6 @@ TensorFlow.js:
 import json
 import os
 import platform
-import re
 import subprocess
 import time
 import warnings
@@ -90,9 +89,9 @@ def export_formats():
         ['TensorFlow SavedModel', 'saved_model', '_saved_model', True, True],
         ['TensorFlow GraphDef', 'pb', '.pb', True, True],
         ['TensorFlow Lite', 'tflite', '.tflite', True, False],
-        ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', False, False],
-        ['TensorFlow.js', 'tfjs', '_web_model', False, False],
-        ['PaddlePaddle', 'paddle', '_paddle_model', True, True],]
+        ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', True, False],
+        ['TensorFlow.js', 'tfjs', '_web_model', True, False],
+        ['PaddlePaddle', 'paddle', '_paddle_model', True, True], ]
     return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU'])
@@ -100,6 +99,15 @@ EXPORT_FORMATS_LIST = list(export_formats()['Argument'][1:])
 EXPORT_FORMATS_TABLE = str(export_formats())
 
 
+def gd_outputs(gd):
+    # TensorFlow GraphDef model output node names
+    name_list, input_list = [], []
+    for node in gd.node:  # tensorflow.core.framework.node_def_pb2.NodeDef
+        name_list.append(node.name)
+        input_list.extend(node.input)
+    return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp'))
+
+
 def try_export(inner_func):
     # YOLOv8 export decorator, i..e @try_export
     inner_args = get_default_args(inner_func)
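`gd_outputs` identifies a frozen graph's outputs by elimination: a node name that never appears in any other node's input list must be terminal, and `NoOp` bookkeeping nodes are filtered out. The same set arithmetic on toy data:

```python
# Toy stand-ins for tensorflow NodeDef objects as (name, inputs) pairs
nodes = [('x', []), ('conv', ['x']), ('Identity', ['conv']), ('NoOp_1', ['conv'])]

name_list = [name for name, _ in nodes]
input_list = [i for _, inputs in nodes for i in inputs]
outputs = sorted(f'{x}:0' for x in set(name_list) - set(input_list) if not x.startswith('NoOp'))
print(outputs)  # ['Identity:0'] - the only non-NoOp node nothing else consumes
```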
@@ -164,10 +172,10 @@ class Exporter:
         # Checks
         model.names = check_class_names(model.names)
         self.imgsz = check_imgsz(self.args.imgsz, stride=model.stride, min_dim=2)  # check image size
+        if model.task == 'classify':
+            self.args.nms = self.args.agnostic_nms = False
         if self.args.optimize:
             assert self.device.type == 'cpu', '--optimize not compatible with cuda devices, i.e. use --device cpu'
+        if edgetpu and not LINUX:
+            raise SystemError('Edge TPU export only supported on Linux. See https://coral.ai/docs/edgetpu/compiler/')
 
         # Input
         im = torch.zeros(self.args.batch, 3, *self.imgsz).to(self.device)
@@ -208,7 +216,7 @@ class Exporter:
         self.file = file
         self.output_shape = tuple(y.shape) if isinstance(y, torch.Tensor) else tuple(tuple(x.shape) for x in y)
         self.pretty_name = self.file.stem.replace('yolo', 'YOLO')
-        description = f'Ultralytics {self.pretty_name} model' + f'trained on {Path(self.args.data).name}' \
+        description = f'Ultralytics {self.pretty_name} model ' + f'trained on {Path(self.args.data).name}' \
             if self.args.data else '(untrained)'
         self.metadata = {
             'description': description,
@@ -239,8 +247,7 @@ class Exporter:
                                'Please consider contributing to the effort if you have TF expertise. Thank you!')
                 nms = False
                 self.args.int8 |= edgetpu
-            f[5], s_model = self._export_saved_model(nms=nms or self.args.agnostic_nms or tfjs,
-                                                     agnostic_nms=self.args.agnostic_nms or tfjs)
+            f[5], s_model = self._export_saved_model()
             if pb or tfjs:  # pb prerequisite to tfjs
                 f[6], _ = self._export_pb(s_model)
             if tflite:
@@ -386,7 +393,7 @@ class Exporter:
         check_requirements('coremltools>=6.0')
         import coremltools as ct  # noqa
 
-        class iOSModel(torch.nn.Module):
+        class iOSDetectModel(torch.nn.Module):
             # Wrap an Ultralytics YOLO model for iOS export
             def __init__(self, model, im):
                 super().__init__()
@@ -405,29 +412,36 @@ class Exporter:
         LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
         f = self.file.with_suffix('.mlmodel')
 
-        bias = [0.0, 0.0, 0.0]
-        scale = 1 / 255
-        classifier_config = None
+        if self.model.task == 'classify':
+            bias = [-x for x in IMAGENET_MEAN]
+            scale = 1 / 255 / (sum(IMAGENET_STD) / 3)
+            classifier_config = ct.ClassifierConfig(list(self.model.names.values())) if self.args.nms else None
+        else:
+            bias = [0.0, 0.0, 0.0]
+            scale = 1 / 255
+            classifier_config = None
-        model = iOSModel(self.model, self.im).eval() if self.args.nms else self.model
-        ts = torch.jit.trace(model, self.im, strict=False)  # TorchScript model
+        if self.model.task == 'classify':
+            model = self.model
+        elif self.model.task == 'detect':
+            model = iOSDetectModel(self.model, self.im) if self.args.nms else self.model
+        elif self.model.task == 'segment':
+            # TODO CoreML Segmentation model pipelining
+            model = self.model
+
+        ts = torch.jit.trace(model.eval(), self.im, strict=False)  # TorchScript model
         ct_model = ct.convert(ts,
                               inputs=[ct.ImageType('image', shape=self.im.shape, scale=scale, bias=bias)],
                               classifier_config=classifier_config)
         bits, mode = (8, 'kmeans_lut') if self.args.int8 else (16, 'linear') if self.args.half else (32, None)
         if bits < 32:
             ct_model = ct.models.neural_network.quantization_utils.quantize_weights(ct_model, bits, mode)
-        if self.args.nms:
+        if self.args.nms and self.model.task == 'detect':
             ct_model = self._pipeline_coreml(ct_model)
 
-        ct_model.short_description = self.metadata['description']
-        ct_model.author = self.metadata['author']
-        ct_model.license = self.metadata['license']
-        ct_model.version = self.metadata['version']
+        m = self.metadata  # metadata dict
+        ct_model.short_description = m['description']
+        ct_model.author = m['author']
+        ct_model.license = m['license']
+        ct_model.version = m['version']
+        ct_model.user_defined_metadata.update({k: str(v) for k, v in m.items() if k in ('stride', 'task', 'names')})
         ct_model.save(str(f))
         return f, ct_model
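Under the new dispatch, NMS pipelining only applies to detection exports, while classification exports get the `ClassifierConfig` and the de-normalizing `bias`/`scale` on the image input. A hedged usage sketch (model filenames assumed):

```python
from ultralytics import YOLO

# Detection: wrapped in iOSDetectModel and pipelined when nms=True
YOLO('yolov8n.pt').export(format='coreml', nms=True)

# Classification: nms is forced off in __call__, bias/scale invert the ImageNet normalization
YOLO('yolov8n-cls.pt').export(format='coreml')
```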
@@ -497,14 +511,7 @@ class Exporter:
         return f, None
 
     @try_export
-    def _export_saved_model(self,
-                            nms=False,
-                            agnostic_nms=False,
-                            topk_per_class=100,
-                            topk_all=100,
-                            iou_thres=0.45,
-                            conf_thres=0.25,
-                            prefix=colorstr('TensorFlow SavedModel:')):
+    def _export_saved_model(self, prefix=colorstr('TensorFlow SavedModel:')):
 
         # YOLOv8 TensorFlow SavedModel export
         try:
@@ -562,6 +569,9 @@ class Exporter:
     @try_export
     def _export_tflite(self, keras_model, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
         # YOLOv8 TensorFlow Lite export
+        import tensorflow as tf  # noqa
+
+        LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
         saved_model = Path(str(self.file).replace(self.file.suffix, '_saved_model'))
         if self.args.int8:
             f = saved_model / (self.file.stem + 'yolov8n_integer_quant.tflite')  # fp32 in/out
@@ -572,9 +582,6 @@ class Exporter:
         return str(f), None  # noqa
 
-        # OLD VERSION BELOW ---------------------------------------------------------------
-        import tensorflow as tf  # noqa
-
-        LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
         batch_size, ch, *imgsz = list(self.im.shape)  # BCHW
         f = str(self.file).replace(self.file.suffix, '-fp16.tflite')
|
||||
LOGGER.info(f'\n{prefix} export requires Edge TPU compiler. Attempting install from {help_url}')
|
||||
sudo = subprocess.run('sudo --version >/dev/null', shell=True).returncode == 0 # sudo installed on system
|
||||
for c in (
|
||||
'curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -',
|
||||
# 'curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -', # errors
|
||||
'wget --no-check-certificate -q -O - https://packages.cloud.google.com/apt/doc/apt-key.gpg | '
|
||||
'sudo apt-key add -',
|
||||
'echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | ' # no comma
|
||||
'sudo tee /etc/apt/sources.list.d/coral-edgetpu.list',
|
||||
'sudo apt-get update',
|
||||
@@ -639,30 +648,36 @@ class Exporter:
     def _export_tfjs(self, prefix=colorstr('TensorFlow.js:')):
         # YOLOv8 TensorFlow.js export
         check_requirements('tensorflowjs')
+        import tensorflow as tf
         import tensorflowjs as tfjs  # noqa
 
         LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
         f = str(self.file).replace(self.file.suffix, '_web_model')  # js dir
         f_pb = self.file.with_suffix('.pb')  # *.pb path
-        f_json = Path(f) / 'model.json'  # *.json path
 
-        cmd = f'tensorflowjs_converter --input_format=tf_frozen_model ' \
-              f'--output_node_names=Identity,Identity_1,Identity_2,Identity_3 {f_pb} {f}'
+        gd = tf.Graph().as_graph_def()  # TF GraphDef
+        with open(f_pb, 'rb') as file:
+            gd.ParseFromString(file.read())
+        outputs = ','.join(gd_outputs(gd))
+        LOGGER.info(f'\n{prefix} output node names: {outputs}')
+
+        cmd = f'tensorflowjs_converter --input_format=tf_frozen_model --output_node_names={outputs} {f_pb} {f}'
         subprocess.run(cmd.split(), check=True)
 
-        with open(f_json, 'w') as j:  # sort JSON Identity_* in ascending order
-            subst = re.sub(
-                r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
-                r'"Identity.?.?": {"name": "Identity.?.?"}, '
-                r'"Identity.?.?": {"name": "Identity.?.?"}, '
-                r'"Identity.?.?": {"name": "Identity.?.?"}}}',
-                r'{"outputs": {"Identity": {"name": "Identity"}, '
-                r'"Identity_1": {"name": "Identity_1"}, '
-                r'"Identity_2": {"name": "Identity_2"}, '
-                r'"Identity_3": {"name": "Identity_3"}}}',
-                f_json.read_text(),
-            )
-            j.write(subst)
+        # f_json = Path(f) / 'model.json'  # *.json path
+        # with open(f_json, 'w') as j:  # sort JSON Identity_* in ascending order
+        #     subst = re.sub(
+        #         r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
+        #         r'"Identity.?.?": {"name": "Identity.?.?"}, '
+        #         r'"Identity.?.?": {"name": "Identity.?.?"}, '
+        #         r'"Identity.?.?": {"name": "Identity.?.?"}}}',
+        #         r'{"outputs": {"Identity": {"name": "Identity"}, '
+        #         r'"Identity_1": {"name": "Identity_1"}, '
+        #         r'"Identity_2": {"name": "Identity_2"}, '
+        #         r'"Identity_3": {"name": "Identity_3"}}}',
+        #         f_json.read_text(),
+        #     )
+        #     j.write(subst)
         yaml_save(Path(f) / 'metadata.yaml', self.metadata)  # add metadata.yaml
         return f, None
|
||||
model_meta.license = self.metadata['license']
|
||||
|
||||
# Label file
|
||||
tmp_file = file.parent / 'temp_meta.txt'
|
||||
tmp_file = Path(file).parent / 'temp_meta.txt'
|
||||
with open(tmp_file, 'w') as f:
|
||||
f.write(str(self.metadata))
|
||||
|
||||
@ -718,7 +733,7 @@ class Exporter:
|
||||
b.Finish(model_meta.Pack(b), _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
|
||||
metadata_buf = b.Output()
|
||||
|
||||
populator = _metadata.MetadataPopulator.with_model_file(file)
|
||||
populator = _metadata.MetadataPopulator.with_model_file(str(file))
|
||||
populator.load_metadata_buffer(metadata_buf)
|
||||
populator.load_associated_files([str(tmp_file)])
|
||||
populator.populate()
|
||||
|
@@ -2,7 +2,6 @@
 
 import sys
 from pathlib import Path
-from typing import List
 
 from ultralytics import yolo  # noqa
 from ultralytics.nn.tasks import (ClassificationModel, DetectionModel, SegmentationModel, attempt_load_one_weight,
@@ -68,7 +67,7 @@ class YOLO:
         list(ultralytics.yolo.engine.results.Results): The prediction results.
     """
 
-    def __init__(self, model='yolov8n.pt') -> None:
+    def __init__(self, model='yolov8n.pt', task=None) -> None:
         """
         Initializes the YOLO model.
 
@@ -91,9 +90,9 @@ class YOLO:
         if not suffix and Path(model).stem in GITHUB_ASSET_STEMS:
             model, suffix = Path(model).with_suffix('.pt'), '.pt'  # add suffix, i.e. yolov8n -> yolov8n.pt
         if suffix == '.yaml':
-            self._new(model)
+            self._new(model, task)
         else:
-            self._load(model)
+            self._load(model, task)
 
     def __call__(self, source=None, stream=False, **kwargs):
         return self.predict(source, stream, **kwargs)
@@ -102,17 +101,18 @@ class YOLO:
         name = self.__class__.__name__
         raise AttributeError(f"'{name}' object has no attribute '{attr}'. See valid attributes below.\n{self.__doc__}")
 
-    def _new(self, cfg: str, verbose=True):
+    def _new(self, cfg: str, task=None, verbose=True):
         """
         Initializes a new model and infers the task type from the model definitions.
 
         Args:
             cfg (str): model configuration file
+            task (str) or (None): model task
             verbose (bool): display model info on load
         """
         self.cfg = check_yaml(cfg)  # check YAML
         cfg_dict = yaml_load(self.cfg, append_filename=True)  # model dict
-        self.task = guess_model_task(cfg_dict)
+        self.task = task or guess_model_task(cfg_dict)
         self.model = TASK_MAP[self.task][0](cfg_dict, verbose=verbose and RANK == -1)  # build model
         self.overrides['model'] = self.cfg
 
@@ -121,12 +121,13 @@ class YOLO:
         self.model.args = {k: v for k, v in args.items() if k in DEFAULT_CFG_KEYS}  # attach args to model
         self.model.task = self.task
 
-    def _load(self, weights: str, task=''):
+    def _load(self, weights: str, task=None):
         """
         Initializes a new model and infers the task type from the model head.
 
         Args:
             weights (str): model checkpoint to be loaded
+            task (str) or (None): model task
         """
         suffix = Path(weights).suffix
         if suffix == '.pt':
@@ -137,7 +138,7 @@ class YOLO:
         else:
             weights = check_file(weights)
             self.model, self.ckpt = weights, None
-            self.task = guess_model_task(weights)
+            self.task = task or guess_model_task(weights)
         self.ckpt_path = weights
         self.overrides['model'] = weights
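Threading `task` through `__init__`, `_new` and `_load` lets callers pin the task explicitly instead of relying on `guess_model_task`; passing nothing keeps the old guessing behavior. For example:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.yaml', task='detect')  # explicit task, guess_model_task is skipped
assert model.task == 'detect'

model2 = YOLO('yolov8n.yaml')  # task=None falls back to guessing from the YAML head
```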
@@ -32,7 +32,6 @@ from collections import defaultdict
 from pathlib import Path
 
 import cv2
-
 import torch
 
 from ultralytics.nn.autobackend import AutoBackend
 from ultralytics.yolo.cfg import get_cfg
@@ -242,7 +242,7 @@ class BaseTrainer:
         metric_keys = self.validator.metrics.keys + self.label_loss_items(prefix='val')
         self.metrics = dict(zip(metric_keys, [0] * len(metric_keys)))  # TODO: init metrics for plot_results()?
         self.ema = ModelEMA(self.model)
-        if self.args.plots:
+        if self.args.plots and not self.args.v5loader:
             self.plot_training_labels()
         self.resume_training(ckpt)
         self.scheduler.last_epoch = self.start_epoch - 1  # do not move
@@ -18,7 +18,6 @@ from typing import Union
 import cv2
 import numpy as np
-import pandas as pd
 import requests
 import torch
 import yaml
@@ -517,10 +516,7 @@ def set_sentry():
             ((is_pip_package() and not is_git_dir()) or
              (get_git_origin_url() == 'https://github.com/ultralytics/ultralytics.git' and get_git_branch() == 'main')):
 
-        import hashlib
-
         import sentry_sdk  # noqa
 
         sentry_sdk.init(
             dsn='https://f805855f03bb4363bc1e16cb7d87b654@o4504521589325824.ingest.sentry.io/4504521592406016',
             debug=False,
@@ -30,14 +30,14 @@ import pandas as pd
 
 from ultralytics import YOLO
 from ultralytics.yolo.engine.exporter import export_formats
-from ultralytics.yolo.utils import LOGGER, ROOT, SETTINGS
+from ultralytics.yolo.utils import LINUX, LOGGER, ROOT, SETTINGS
 from ultralytics.yolo.utils.checks import check_yolo
 from ultralytics.yolo.utils.downloads import download
 from ultralytics.yolo.utils.files import file_size
 from ultralytics.yolo.utils.torch_utils import select_device
 
 
-def benchmark(model=Path(SETTINGS['weights_dir']) / 'yolov8n.pt', imgsz=160, half=False, device='cpu', hard_fail=0.30):
+def benchmark(model=Path(SETTINGS['weights_dir']) / 'yolov8n.pt', imgsz=160, half=False, device='cpu', hard_fail=False):
     device = select_device(device, verbose=False)
     if isinstance(model, (str, Path)):
         model = YOLO(model)
@@ -45,11 +45,10 @@ def benchmark(model=Path(SETTINGS['weights_dir']) / 'yolov8n.pt', imgsz=160, hal
     y = []
     t0 = time.time()
     for i, (name, format, suffix, cpu, gpu) in export_formats().iterrows():  # index, (name, format, suffix, CPU, GPU)
+        emoji = '❌'  # indicates export failure
         try:
-            assert i not in (9, 10), 'inference not supported'  # Edge TPU and TF.js are unsupported
-            assert i != 5 or platform.system() == 'Darwin', 'inference only supported on macOS>=10.13'  # CoreML
-            assert i != 11, 'paddle exports coming soon'
+            assert i != 11 or model.task != 'classify', 'paddle-classify bug'
+            assert i != 9 or LINUX, 'Edge TPU export only supported on Linux'
             if 'cpu' in device.type:
                 assert cpu, 'inference not supported on CPU'
             if 'cuda' in device.type:
@@ -61,13 +60,16 @@ def benchmark(model=Path(SETTINGS['weights_dir']) / 'yolov8n.pt', imgsz=160, hal
                 export = model  # PyTorch format
             else:
                 filename = model.export(imgsz=imgsz, format=format, half=half, device=device)  # all others
-                export = YOLO(filename)
+                export = YOLO(filename, task=model.task)
             assert suffix in str(filename), 'export failed'
+            emoji = '❎'  # indicates export succeeded
 
             # Predict
+            assert i not in (9, 10), 'inference not supported'  # Edge TPU and TF.js are unsupported
+            assert i != 5 or platform.system() == 'Darwin', 'inference only supported on macOS>=10.13'  # CoreML
             if not (ROOT / 'assets/bus.jpg').exists():
                 download(url='https://ultralytics.com/images/bus.jpg', dir=ROOT / 'assets')
-            export.predict(ROOT / 'assets/bus.jpg', imgsz=imgsz, device=device, half=half)  # test
+            export.predict(ROOT / 'assets/bus.jpg', imgsz=imgsz, device=device, half=half)
 
             # Validate
             if model.task == 'detect':
@@ -84,17 +86,16 @@ def benchmark(model=Path(SETTINGS['weights_dir']) / 'yolov8n.pt', imgsz=160, hal
             if hard_fail:
                 assert type(e) is AssertionError, f'Benchmark hard_fail for {name}: {e}'
             LOGGER.warning(f'ERROR ❌️ Benchmark failure for {name}: {e}')
-            y.append([name, '❌', None, None, None])  # mAP, t_inference
+            y.append([name, emoji, None, None, None])  # mAP, t_inference
 
     # Print results
     check_yolo(device=device)  # print system info
-    c = ['Format', 'Status❔', 'Size (MB)', key, 'Inference time (ms/im)']
-    df = pd.DataFrame(y, columns=c)
+    df = pd.DataFrame(y, columns=['Format', 'Status❔', 'Size (MB)', key, 'Inference time (ms/im)'])
 
     name = Path(model.ckpt_path).name
     s = f'\nBenchmarks complete for {name} on {data} at imgsz={imgsz} ({time.time() - t0:.2f}s)\n{df}\n'
     LOGGER.info(s)
-    with open('benchmarks.log', 'a') as f:
+    with open('benchmarks.log', 'a', errors='ignore', encoding='utf-8') as f:
         f.write(s)
 
     if hard_fail and isinstance(hard_fail, float):
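`hard_fail` now defaults to `False` and doubles as a threshold: a float makes the final check assert a minimum metric, as the updated CI workflow does with `hard_fail=0.60` for the classification models. Usage sketch:

```python
from ultralytics.yolo.utils.benchmarks import benchmark

benchmark(model='yolov8n-cls.pt', imgsz=160, half=False, hard_fail=0.60)  # assert metric floor of 0.60
benchmark(model='yolov8n.pt', imgsz=160)  # hard_fail=False: log failures but never raise
```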
@@ -1,5 +1,3 @@
 from .base import add_integration_callbacks, default_callbacks
 
-__all__ = [
-    'add_integration_callbacks',
-    'default_callbacks',]
+__all__ = 'add_integration_callbacks', 'default_callbacks'
@@ -137,7 +137,6 @@ def check_latest_pypi_version(package_name='ultralytics'):
 def check_pip_update():
     from ultralytics import __version__
     latest = check_latest_pypi_version()
-    latest = '9.0.0'
     if pkg.parse_version(__version__) < pkg.parse_version(latest):
         LOGGER.info(f'New https://pypi.org/project/ultralytics/{latest} available 😃 '
                     f"Update with 'pip install -U ultralytics'")
@@ -239,7 +238,7 @@ def check_requirements(requirements=ROOT.parent / 'requirements.txt', exclude=()
         LOGGER.warning(f'{prefix} ❌ {e}')
 
 
-def check_suffix(file='yolov8n.pt', suffix=('.pt',), msg=''):
+def check_suffix(file='yolov8n.pt', suffix='.pt', msg=''):
     # Check file(s) for acceptable suffix
     if file and suffix:
         if isinstance(suffix, str):
@@ -10,9 +10,8 @@ import numpy as np
 from .ops import ltwh2xywh, ltwh2xyxy, resample_segments, xywh2ltwh, xywh2xyxy, xyxy2ltwh, xyxy2xywh
 
 
+# From PyTorch internals
 def _ntuple(n):
 
-    # From PyTorch internals
     def parse(x):
         return x if isinstance(x, abc.Iterable) else tuple(repeat(x, n))
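`_ntuple` (borrowed from PyTorch internals, as the relocated comment says) builds converters that broadcast a scalar to an n-tuple while letting iterables pass through untouched; this diff only moves the comment. The unchanged behavior:

```python
from collections import abc
from itertools import repeat

# From PyTorch internals
def _ntuple(n):

    def parse(x):
        return x if isinstance(x, abc.Iterable) else tuple(repeat(x, n))

    return parse

to_4tuple = _ntuple(4)
print(to_4tuple(2))       # (2, 2, 2, 2) - scalar broadcast
print(to_4tuple((1, 2)))  # (1, 2) - iterables pass through unchanged
```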
@@ -26,7 +25,7 @@ to_4tuple = _ntuple(4)
 # `ltwh` means left top and width, height(coco format)
 _formats = ['xyxy', 'xywh', 'ltwh']
 
-__all__ = ['Bboxes']
+__all__ = 'Bboxes',  # tuple or list
 
 
 class Bboxes:
@@ -207,8 +207,7 @@ def plot_labels(boxes, cls, names=(), save_dir=Path('')):
 
 def save_one_box(xyxy, im, file=Path('im.jpg'), gain=1.02, pad=10, square=False, BGR=False, save=True):
     # Save image crop as {file} with crop size multiple {gain} and {pad} pixels. Save and/or return crop
-    xyxy = torch.Tensor(xyxy).view(-1, 4)
-    b = xyxy2xywh(xyxy)  # boxes
+    b = xyxy2xywh(xyxy.view(-1, 4))  # boxes
     if square:
         b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1)  # attempt rectangle to square
     b[:, 2:] = b[:, 2:] * gain + pad  # box wh * gain + pad
@@ -195,7 +195,7 @@ def get_flops(model, imgsz=640):
         p = next(model.parameters())
         stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32  # max stride
         im = torch.empty((1, p.shape[1], stride, stride), device=p.device)  # input image in BCHW format
-        flops = thop.profile(deepcopy(model), inputs=(im,), verbose=False)[0] / 1E9 * 2  # stride GFLOPs
+        flops = thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1E9 * 2  # stride GFLOPs
         imgsz = imgsz if isinstance(imgsz, list) else [imgsz, imgsz]  # expand if int/float
         flops = flops * imgsz[0] / stride * imgsz[1] / stride  # 640x640 GFLOPs
         return flops
@@ -374,7 +374,7 @@ def profile(input, ops, n=10, device=None):
             m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m
             tf, tb, t = 0, 0, [0, 0, 0]  # dt forward, backward
             try:
-                flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2  # GFLOPs
+                flops = thop.profile(m, inputs=[x], verbose=False)[0] / 1E9 * 2  # GFLOPs
             except Exception:
                 flops = 0
@@ -2,4 +2,4 @@
 
 from ultralytics.yolo.v8 import classify, detect, segment
 
-__all__ = ['classify', 'segment', 'detect']
+__all__ = 'classify', 'segment', 'detect'
@@ -4,4 +4,4 @@ from ultralytics.yolo.v8.classify.predict import ClassificationPredictor, predict
 from ultralytics.yolo.v8.classify.train import ClassificationTrainer, train
 from ultralytics.yolo.v8.classify.val import ClassificationValidator, val
 
-__all__ = ['ClassificationPredictor', 'predict', 'ClassificationTrainer', 'train', 'ClassificationValidator', 'val']
+__all__ = 'ClassificationPredictor', 'predict', 'ClassificationTrainer', 'train', 'ClassificationValidator', 'val'
@@ -4,4 +4,4 @@ from .predict import DetectionPredictor, predict
 from .train import DetectionTrainer, train
 from .val import DetectionValidator, val
 
-__all__ = ['DetectionPredictor', 'predict', 'DetectionTrainer', 'train', 'DetectionValidator', 'val']
+__all__ = 'DetectionPredictor', 'predict', 'DetectionTrainer', 'train', 'DetectionValidator', 'val'
@@ -4,4 +4,4 @@ from .predict import SegmentationPredictor, predict
 from .train import SegmentationTrainer, train
 from .val import SegmentationValidator, val
 
-__all__ = ['SegmentationPredictor', 'predict', 'SegmentationTrainer', 'train', 'SegmentationValidator', 'val']
+__all__ = 'SegmentationPredictor', 'predict', 'SegmentationTrainer', 'train', 'SegmentationValidator', 'val'