From 42bcf8c47f0bd9eef6da58409f036009e525c7ee Mon Sep 17 00:00:00 2001
From: Glenn Jocher
Date: Mon, 27 Nov 2023 17:46:29 +0100
Subject: [PATCH] Add missing HTML image alt tags (#6611)
Signed-off-by: Glenn Jocher
---
README.md | 48 +++++++++----------
README.zh-CN.md | 48 +++++++++----------
docs/ar/index.md | 14 +++---
docs/de/index.md | 14 +++---
docs/de/tasks/pose.md | 16 +++----
docs/en/hub/app/android.md | 14 +++---
docs/en/hub/app/index.md | 16 +++----
docs/en/hub/app/ios.md | 14 +++---
docs/en/hub/datasets.md | 2 +-
docs/en/hub/index.md | 12 ++---
docs/en/index.md | 14 +++---
docs/en/integrations/openvino.md | 8 ++--
docs/en/integrations/roboflow.md | 46 +++++++++---------
.../docker_image_quickstart_tutorial.md | 2 +-
docs/en/yolov5/index.md | 12 ++---
.../tutorials/architecture_description.md | 12 ++---
.../tutorials/clearml_logging_integration.md | 16 +++----
.../tutorials/comet_logging_integration.md | 2 +-
docs/en/yolov5/tutorials/model_ensembling.md | 2 +-
docs/en/yolov5/tutorials/model_export.md | 4 +-
.../neural_magic_pruning_quantization.md | 6 +--
.../tutorials/pytorch_hub_model_loading.md | 3 +-
.../roboflow_datasets_integration.md | 2 +-
.../tutorials/running_on_jetson_nano.md | 4 +-
.../tutorials/test_time_augmentation.md | 2 +-
docs/en/yolov5/tutorials/train_custom_data.md | 26 +++++-----
.../transfer_learning_with_frozen_layers.md | 10 ++--
docs/es/index.md | 14 +++---
docs/fr/index.md | 12 ++---
docs/hi/index.md | 14 +++---
docs/ja/index.md | 14 +++---
docs/ko/index.md | 14 +++---
docs/pt/index.md | 14 +++---
docs/ru/index.md | 14 +++---
docs/update_translations.py | 22 +++++++++
docs/zh/index.md | 14 +++---
examples/YOLOv8-Region-Counter/readme.md | 7 ++-
.../README.md | 2 +-
ultralytics/trackers/README.md | 2 +-
39 files changed, 267 insertions(+), 245 deletions(-)
diff --git a/README.md b/README.md
index d7c2b959..819f9823 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
@@ -25,21 +25,21 @@ We hope that the resources here will help you get the most out of YOLOv8. Please
To request an Enterprise License please complete the form at [Ultralytics Licensing](https://ultralytics.com/license).
-
+
@@ -209,22 +209,22 @@ Our key integrations with leading AI platforms extend the functionality of Ultra
-
+
- 
-

+

+
- 
-

+

+
- 
-

+

+
- 
+
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
@@ -245,7 +245,7 @@ We love your input! YOLOv5 and YOLOv8 would not be possible without help from ou
-
+
## License
@@ -261,16 +261,16 @@ For Ultralytics bug reports and feature requests please visit [GitHub Issues](ht
diff --git a/README.zh-CN.md b/README.zh-CN.md
index 386d63c7..84f264ce 100644
--- a/README.zh-CN.md
+++ b/README.zh-CN.md
@@ -1,7 +1,7 @@
@@ -25,21 +25,21 @@
如需申请企业许可,请在 [Ultralytics Licensing](https://ultralytics.com/license) 处填写表格
-
+
@@ -208,22 +208,22 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
-
+
- 
-

+

+
- 
-

+

+
- 
-

+

+
- 
+
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
@@ -244,7 +244,7 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
-
+
## 许可证
@@ -260,16 +260,16 @@ Ultralytics 提供两种许可证选项以适应各种使用场景:
diff --git a/docs/ar/index.md b/docs/ar/index.md
index 97e6332f..211f6673 100644
--- a/docs/ar/index.md
+++ b/docs/ar/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics، YOLOv8، كشف الكائنات، تجزئة الصور
-
+
-
+
-
+
-
+
-
+
-
+
@@ -29,7 +29,7 @@ keywords: Ultralytics، YOLOv8، كشف الكائنات، تجزئة الصور
-
+
diff --git a/docs/de/index.md b/docs/de/index.md
index 367a7fc9..1216d92e 100644
--- a/docs/de/index.md
+++ b/docs/de/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, Objekterkennung, Bildsegmentierung, maschinelles
-
+
-
+
-
+
-
+
-
+
-
+
@@ -29,7 +29,7 @@ keywords: Ultralytics, YOLOv8, Objekterkennung, Bildsegmentierung, maschinelles
-
+
diff --git a/docs/de/tasks/pose.md b/docs/de/tasks/pose.md
index 4e2ad3c5..14d0f25b 100644
--- a/docs/de/tasks/pose.md
+++ b/docs/de/tasks/pose.md
@@ -33,14 +33,14 @@ Hier werden vortrainierte YOLOv8 Pose-Modelle gezeigt. Erkennungs-, Segmentierun
[Modelle](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) werden automatisch aus der neuesten Ultralytics-[Veröffentlichung](https://github.com/ultralytics/assets/releases) bei erstmaliger Verwendung heruntergeladen.
-| Modell | Größe<br/><sup>(Pixel) | mAP<sup>pose<br/>50-95 | mAP<sup>pose<br/>50 | Geschwindigkeit<br/><sup>CPU ONNX<br/>(ms) | Geschwindigkeit<br/><sup>A100 TensorRT<br/>(ms) | Parameter<br/><sup>(M) | FLOPs<br/><sup>(B) |
-|------------------------------------------------------------------------------------------------------|------------------------|------------------------|---------------------|--------------------------------------------|-------------------------------------------------|------------------------|--------------------|
-| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 50,4 | 80,1 | 131,8 | 1,18 | 3,3 | 9,2 |
-| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 60,0 | 86,2 | 233,2 | 1,42 | 11,6 | 30,2 |
-| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65,0 | 88,8 | 456,3 | 2,00 | 26,4 | 81,0 |
-| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67,6 | 90,0 | 784,5 | 2,59 | 44,4 | 168,6 |
-| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69,2 | 90,2 | 1607,1 | 3,73 | 69,4 | 263,2 |
-| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71,6 | 91,2 | 4088,7 | 10,04 | 99,1 | 1066,4 |
+| Modell | Größe<br><sup>(Pixel) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Geschwindigkeit<br><sup>CPU ONNX<br>(ms) | Geschwindigkeit<br><sup>A100 TensorRT<br>(ms) | Parameter<br><sup>(M) | FLOPs<br><sup>(B) |
+|------------------------------------------------------------------------------------------------------|-----------------------|-----------------------|--------------------|------------------------------------------|-----------------------------------------------|-----------------------|-------------------|
+| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 50,4 | 80,1 | 131,8 | 1,18 | 3,3 | 9,2 |
+| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 60,0 | 86,2 | 233,2 | 1,42 | 11,6 | 30,2 |
+| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65,0 | 88,8 | 456,3 | 2,00 | 26,4 | 81,0 |
+| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67,6 | 90,0 | 784,5 | 2,59 | 44,4 | 168,6 |
+| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69,2 | 90,2 | 1607,1 | 3,73 | 69,4 | 263,2 |
+| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71,6 | 91,2 | 4088,7 | 10,04 | 99,1 | 1066,4 |
- **mAP<sup>val</sup>** Werte gelten für ein einzelnes Modell mit einfacher Skala auf dem [COCO Keypoints val2017](http://cocodataset.org)-Datensatz.
Zu reproduzieren mit `yolo val pose data=coco-pose.yaml device=0`.
diff --git a/docs/en/hub/app/android.md b/docs/en/hub/app/android.md
index fa7cd855..0bff31c1 100644
--- a/docs/en/hub/app/android.md
+++ b/docs/en/hub/app/android.md
@@ -11,22 +11,22 @@ keywords: Ultralytics, Android App, real-time object detection, YOLO models, Ten
The Ultralytics Android App is a powerful tool that allows you to run YOLO models directly on your Android device for real-time object detection. This app utilizes TensorFlow Lite for model optimization and various hardware delegates for acceleration, enabling fast and efficient object detection.
diff --git a/docs/en/hub/app/index.md b/docs/en/hub/app/index.md
index 464c5ffc..ef962e83 100644
--- a/docs/en/hub/app/index.md
+++ b/docs/en/hub/app/index.md
@@ -11,24 +11,24 @@ keywords: Ultralytics, HUB App, YOLOv5, YOLOv8, mobile AI, real-time object dete
Welcome to the Ultralytics HUB App! We are excited to introduce this powerful mobile app that allows you to run YOLOv5 and YOLOv8 models directly on your [iOS](https://apps.apple.com/xk/app/ultralytics/id1583935240) and [Android](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app) devices. With the HUB App, you can utilize hardware acceleration features like Apple's Neural Engine (ANE) or Android GPU and Neural Network API (NNAPI) delegates to achieve impressive performance on your mobile device.
diff --git a/docs/en/hub/app/ios.md b/docs/en/hub/app/ios.md
index 82a4e956..ac939c90 100644
--- a/docs/en/hub/app/ios.md
+++ b/docs/en/hub/app/ios.md
@@ -11,22 +11,22 @@ keywords: Ultralytics, iOS app, object detection, YOLO models, real time, Apple
The Ultralytics iOS App is a powerful tool that allows you to run YOLO models directly on your iPhone or iPad for real-time object detection. This app utilizes the Apple Neural Engine and Core ML for model optimization and acceleration, enabling fast and efficient object detection.
diff --git a/docs/en/hub/datasets.md b/docs/en/hub/datasets.md
index dfc3338d..1ab7c45f 100644
--- a/docs/en/hub/datasets.md
+++ b/docs/en/hub/datasets.md
@@ -25,7 +25,7 @@ zip -r coco8.zip coco8
You can download our [COCO8](https://github.com/ultralytics/hub/blob/main/example_datasets/coco8.zip) example dataset and unzip it to see exactly how to structure your dataset.
-
+
The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format.
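Before uploading, you can sanity-check the zip locally. A minimal sketch, assuming the `check_dataset` utility in `ultralytics.hub`:

```python
from ultralytics.hub import check_dataset

# Validate the zipped dataset structure (images, labels, data YAML) before upload
check_dataset('path/to/coco8.zip')
```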
diff --git a/docs/en/hub/index.md b/docs/en/hub/index.md
index 9eedc724..69209539 100644
--- a/docs/en/hub/index.md
+++ b/docs/en/hub/index.md
@@ -11,17 +11,17 @@ keywords: Ultralytics HUB, YOLOv5, YOLOv8, model training, model deployment, pre

-

+

-

+

-

+

-

+

-

+

-

+
diff --git a/docs/en/index.md b/docs/en/index.md
index 5126e78a..c69388b9 100644
--- a/docs/en/index.md
+++ b/docs/en/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, object detection, image segmentation, machine lea

-

+

-

+

-

+

-

+

-

+

-

+
@@ -29,7 +29,7 @@ keywords: Ultralytics, YOLOv8, object detection, image segmentation, machine lea
-

+
diff --git a/docs/en/integrations/openvino.md b/docs/en/integrations/openvino.md
index c5f509b0..6f552ff1 100644
--- a/docs/en/integrations/openvino.md
+++ b/docs/en/integrations/openvino.md
@@ -114,7 +114,7 @@ The Intel® Data Center GPU Flex Series is a versatile and robust solution desig
Benchmarks below run on Intel® Data Center GPU Flex 170 at FP32 precision.
-

+
| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
@@ -153,7 +153,7 @@ Early reviews have praised the Arc™ series, particularly the integrated A770M
Benchmarks below run on Intel® Arc 770 GPU at FP32 precision.
-

+
| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
@@ -188,7 +188,7 @@ Notably, Xeon® CPUs deliver high compute density and scalability, making them i
Benchmarks below run on 4th Gen Intel® Xeon® Scalable CPU at FP32 precision.
-

+
| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
@@ -221,7 +221,7 @@ The Intel® Core® series is a range of high-performance processors by Intel. Th
Benchmarks below run on 13th Gen Intel® Core® i7-13700H CPU at FP32 precision.
-

+
| Model | Format | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
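For reference, a sketch of how such a benchmark entry can be reproduced with the documented export flow (model and image paths are illustrative):

```python
from ultralytics import YOLO

# Export YOLOv8n to OpenVINO, then run the exported model
model = YOLO('yolov8n.pt')
model.export(format='openvino')  # creates 'yolov8n_openvino_model/'

ov_model = YOLO('yolov8n_openvino_model/')
results = ov_model('https://ultralytics.com/images/bus.jpg')
```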
diff --git a/docs/en/integrations/roboflow.md b/docs/en/integrations/roboflow.md
index 2f015900..f640918a 100644
--- a/docs/en/integrations/roboflow.md
+++ b/docs/en/integrations/roboflow.md
@@ -27,20 +27,20 @@ Roboflow provides two services that can help you collect data for YOLOv8 models:
Universe is an online repository with over 250,000 vision datasets totalling over 100 million images.
-
+
With a [free Roboflow account](https://app.roboflow.com/?ref=ultralytics), you can export any dataset available on Universe. To export a dataset, click the "Download this Dataset" button on any dataset.
-
+
For YOLOv8, select "YOLOv8" as the export format:
-
+
Universe also has a page that aggregates all [public fine-tuned YOLOv8 models uploaded to Roboflow](https://universe.roboflow.com/search?q=model:yolov8). You can use this page to explore pre-trained models for testing, [for automated data labeling](https://docs.roboflow.com/annotate/use-roboflow-annotate/model-assisted-labeling), or for prototyping with [Roboflow inference](https://roboflow.com/inference?ref=ultralytics).
@@ -54,13 +54,13 @@ If you want to gather images yourself, try [Collect](https://github.com/roboflow
To label data for a YOLOv8 object detection, instance segmentation, or classification model, first create a project in Roboflow.
-
+
Next, upload your images, and any pre-existing annotations you have from other tools ([using one of the 40+ supported import formats](https://roboflow.com/formats?ref=ultralytics)), into Roboflow.
-
+
On the Annotate page, to which you are taken after uploading, select the batch of images you uploaded. Then, click "Start Annotating" to label images.
@@ -68,7 +68,7 @@ Select the batch of images you have uploaded on the Annotate page to which you a
To label with bounding boxes, press the `B` key on your keyboard or click the box icon in the sidebar. Click on a point where you want to start your bounding box, then drag to create the box:
-
+
Once you have created an annotation, a pop-up will appear asking you to select a class for it.
@@ -80,7 +80,7 @@ Roboflow offers a SAM-based label assistant with which you can label images fast
To use the label assistant, click the cursor icon in the sidebar; SAM will be loaded for use in your project.
-
+
Hover over any object in the image and SAM will recommend an annotation. You can hover to find the right place to annotate, then click to create your annotation. To amend your annotation to be more or less specific, you can click inside or outside of the annotation SAM has created on the document.
@@ -88,7 +88,7 @@ Hover over any object in the image and SAM will recommend an annotation. You can
You can also add tags to images from the Tags panel in the sidebar. You can apply tags to data from a particular area, taken from a specific camera, and more. You can then use these tags to search through data for images matching a tag and generate versions of a dataset with images that contain a particular tag or set of tags.
-
+
Models hosted on Roboflow can be used with Label Assist, an automated annotation tool that uses your YOLOv8 model to recommend annotations. To use Label Assist, first upload a YOLOv8 model to Roboflow (see instructions later in the guide). Then, click the magic wand icon in the left sidebar and select your model for use in Label Assist.
@@ -96,13 +96,13 @@ Models hosted on Roboflow can be used with Label Assist, an automated annotation
Choose a model, then click "Continue" to enable Label Assist:
-
+
When you open new images for annotation, Label Assist will trigger and recommend annotations.
-
+
## Dataset Management for YOLOv8
@@ -114,13 +114,13 @@ First, you can use dataset search to find images that meet a semantic text descr
For example, the following text query finds images that contain people in a dataset:
-
+
You can narrow your search to images with a particular tag using the "Tags" selector:
-
+
Before you start training a model with your dataset, we recommend using Roboflow [Health Check](https://docs.roboflow.com/datasets/dataset-health-check), a web tool that provides an insight into your dataset and how you can improve the dataset prior to training a vision model.
@@ -128,7 +128,7 @@ Before you start training a model with your dataset, we recommend using Roboflow
To use Health Check, click the "Health Check" sidebar link. A list of statistics will appear that show the average size of images in your dataset, class balance, a heatmap of where annotations are in your images, and more.
-
+
Health Check may recommend changes to help enhance dataset performance. For example, the class balance feature may show that there is an imbalance in labels that, if addressed, may boost the performance of your model.
@@ -138,19 +138,19 @@ Health Check may recommend changes to help enhance dataset performance. For exam
To export your data, you will need a dataset version. A version is a state of your dataset frozen-in-time. To create a version, first click "Versions" in the sidebar. Then, click the "Create New Version" button. On this page, you will be able to choose augmentations and preprocessing steps to apply to your dataset:
-
+
For each augmentation you select, a pop-up will appear allowing you to tune the augmentation to your needs. Here is an example of tuning a brightness augmentation within specified parameters:
-
+
When your dataset version has been generated, you can export your data into a range of formats. Click the "Export Dataset" button on your dataset version page to export your data:
-
+
You are now ready to train YOLOv8 on a custom dataset. Follow this [written guide](https://blog.roboflow.com/how-to-train-yolov8-on-a-custom-dataset/) and [YouTube video](https://www.youtube.com/watch?v=wuZtUMEiKWY) for step-by-step instructions or refer to the [Ultralytics documentation](https://docs.ultralytics.com/modes/train/).
@@ -181,7 +181,7 @@ When you run the code above, you will be asked to authenticate. Then, your model
To test your model and find deployment instructions for supported SDKs, go to the "Deploy" tab in the Roboflow sidebar. At the top of this page, a widget will appear with which you can test your model. You can use your webcam for live testing or upload images or videos.
-
+
You can also use your uploaded model as a [labeling assistant](https://docs.roboflow.com/annotate/use-roboflow-annotate/model-assisted-labeling). This feature uses your trained model to recommend annotations on images uploaded to Roboflow.
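For reference, the upload step mentioned above looks roughly like this (a sketch assuming the `roboflow` Python package; all identifiers are placeholders):

```python
import roboflow

# Authenticate and upload trained YOLOv8 weights to a project version
rf = roboflow.Roboflow(api_key='YOUR_API_KEY')
project = rf.workspace().project('your-project-id')
project.version(1).deploy(model_type='yolov8', model_path='runs/detect/train/')
```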
@@ -195,13 +195,13 @@ Once you have uploaded a model to Roboflow, you can access our model evaluation
To access a confusion matrix, go to your model page on the Roboflow dashboard, then click "View Detailed Evaluation":
-
+
A pop-up will appear showing a confusion matrix:
-
+
Hover over a box on the confusion matrix to see the value associated with the box. Click on a box to see images in the respective category. Click on an image to view the model predictions and ground truth data associated with that image.
@@ -209,7 +209,7 @@ Hover over a box on the confusion matrix to see the value associated with the bo
For more insights, click Vector Analysis. This will show a scatter plot of the images in your dataset, calculated using CLIP. The closer images are in the plot, the more semantically similar they are. Each image is represented as a dot colored between white and red; the redder the dot, the worse the model performed.
-
+
You can use Vector Analysis to:
@@ -233,7 +233,7 @@ Want to learn more about using Roboflow for creating YOLOv8 models? The followin
Below are a few of the many pieces of feedback we have received for using YOLOv8 and Roboflow together to create computer vision models.
-
-
-
+
+
+
diff --git a/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md b/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
index 44dcb174..5ff8797d 100644
--- a/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
+++ b/docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
@@ -61,4 +61,4 @@ python detect.py --weights yolov5s.pt --source path/to/images # run inference o
python export.py --weights yolov5s.pt --include onnx coreml tflite # export models to other formats
```
-
+
diff --git a/docs/en/yolov5/index.md b/docs/en/yolov5/index.md
index 329c3641..d9303fc7 100644
--- a/docs/en/yolov5/index.md
+++ b/docs/en/yolov5/index.md
@@ -68,16 +68,16 @@ This badge signifies that all [YOLOv5 GitHub Actions](https://github.com/ultraly
diff --git a/docs/en/yolov5/tutorials/architecture_description.md b/docs/en/yolov5/tutorials/architecture_description.md
index cdd79292..3c8c6612 100644
--- a/docs/en/yolov5/tutorials/architecture_description.md
+++ b/docs/en/yolov5/tutorials/architecture_description.md
@@ -165,7 +165,7 @@ The YOLOv5 architecture makes some important changes to the box prediction strat


-
+
However, in YOLOv5, the formula for predicting the box coordinates has been updated to reduce grid sensitivity and prevent the model from predicting unbounded box dimensions.
@@ -178,11 +178,11 @@ The revised formulas for calculating the predicted bounding box are as follows:
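In equation form, the revised predictions (the standard YOLOv5 formulation) are:

$$
\begin{aligned}
b_x &= (2 \cdot \sigma(t_x) - 0.5) + c_x \\
b_y &= (2 \cdot \sigma(t_y) - 0.5) + c_y \\
b_w &= p_w \cdot (2 \cdot \sigma(t_w))^2 \\
b_h &= p_h \cdot (2 \cdot \sigma(t_h))^2
\end{aligned}
$$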
Compare the center point offset before and after scaling. The center point offset range is adjusted from (0, 1) to (-0.5, 1.5), so the offset can easily reach 0 or 1.
-
+
Compare the height and width scaling ratio (relative to the anchor) before and after adjustment. The original YOLO/darknet box equations have a serious flaw: width and height are completely unbounded, since they are simply out = exp(in). This is dangerous, as it can lead to runaway gradients, instabilities, NaN losses and ultimately a complete loss of training. [See this issue](https://github.com/ultralytics/yolov5/issues/471#issuecomment-662009779) for details.
-
+
### 4.4 Build Targets
@@ -204,15 +204,15 @@ This process follows these steps:

-
+
- If the calculated ratio is within the threshold, match the ground truth box with the corresponding anchor.
-
+
- Assign the matched anchor to the appropriate cells, keeping in mind that due to the revised center point offset, a ground truth box can be assigned to more than one anchor. Because the center point offset range is adjusted from (0, 1) to (-0.5, 1.5), a ground truth box can be assigned to more anchors.
-
+
This way, the build targets process ensures that each ground truth object is properly assigned and matched during the training process, allowing YOLOv5 to learn the task of object detection more effectively.
diff --git a/docs/en/yolov5/tutorials/clearml_logging_integration.md b/docs/en/yolov5/tutorials/clearml_logging_integration.md
index 43c8395c..056f30c9 100644
--- a/docs/en/yolov5/tutorials/clearml_logging_integration.md
+++ b/docs/en/yolov5/tutorials/clearml_logging_integration.md
@@ -22,15 +22,15 @@ keywords: ClearML, YOLOv5, Ultralytics, AI toolbox, training data, remote traini
🔭 Turn your newly trained YOLOv5 model into an API with just a few commands using ClearML Serving
-
+
And so much more. It's up to you how many of these tools you want to use: you can stick to the experiment manager, or chain them all together into an impressive pipeline!
-
-
+
+

-
-
+
+
## 🦾 Setting Things Up
@@ -52,7 +52,7 @@ Either sign up for free to the [ClearML Hosted Service](https://cutt.ly/yolov5-t
That's it! You're done 😎
-
+
## 🚀 Training YOLOv5 With ClearML
@@ -95,7 +95,7 @@ That's a lot right? 🤯 Now, we can visualize all of this information in the Cl
There's even more we can do with all of this information, like hyperparameter optimization and remote execution, so keep reading if you want to see how that works!
-
+
## 🔗 Dataset Version Management
@@ -163,7 +163,7 @@ Now that you have a ClearML dataset, you can very simply use it to train custom
python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt --cache
```
-
+
## 👀 Hyperparameter Optimization
diff --git a/docs/en/yolov5/tutorials/comet_logging_integration.md b/docs/en/yolov5/tutorials/comet_logging_integration.md
index c70d2920..d66ee68e 100644
--- a/docs/en/yolov5/tutorials/comet_logging_integration.md
+++ b/docs/en/yolov5/tutorials/comet_logging_integration.md
@@ -4,7 +4,7 @@ description: Learn how to set up and use Comet to enhance your YOLOv5 model trai
keywords: YOLOv5, Comet, Machine Learning, Ultralytics, Real time metrics tracking, Hyperparameters, Model checkpoints, Model predictions, YOLOv5 training, Comet Credentials
---
-
+
# YOLOv5 with Comet
diff --git a/docs/en/yolov5/tutorials/model_ensembling.md b/docs/en/yolov5/tutorials/model_ensembling.md
index 3a3c2a7f..e7e12005 100644
--- a/docs/en/yolov5/tutorials/model_ensembling.md
+++ b/docs/en/yolov5/tutorials/model_ensembling.md
@@ -127,7 +127,7 @@ Results saved to runs/detect/exp2
Done. (0.223s)
```
-
+
## Environments
diff --git a/docs/en/yolov5/tutorials/model_export.md b/docs/en/yolov5/tutorials/model_export.md
index 192de827..05169f11 100644
--- a/docs/en/yolov5/tutorials/model_export.md
+++ b/docs/en/yolov5/tutorials/model_export.md
@@ -134,10 +134,10 @@ Visualize: https://netron.app/
```
The 3 exported models will be saved alongside the original PyTorch model:
-
+
[Netron Viewer](https://github.com/lutzroeder/netron) is recommended for visualizing exported models:
-
+
## Exported Model Usage Examples
diff --git a/docs/en/yolov5/tutorials/neural_magic_pruning_quantization.md b/docs/en/yolov5/tutorials/neural_magic_pruning_quantization.md
index a0754772..08b448c3 100644
--- a/docs/en/yolov5/tutorials/neural_magic_pruning_quantization.md
+++ b/docs/en/yolov5/tutorials/neural_magic_pruning_quantization.md
@@ -27,7 +27,7 @@ This guide explains how to deploy YOLOv5 with Neural Magic's DeepSparse.
DeepSparse is an inference runtime with exceptional performance on CPUs. For instance, compared to the ONNX Runtime baseline, DeepSparse offers a 5.8x speed-up for YOLOv5s, running on the same machine!
-
+
For the first time, your deep learning workloads can meet the performance demands of production without the complexity and costs of hardware accelerators. Put simply, DeepSparse gives you the performance of GPUs and the simplicity of software:
@@ -43,7 +43,7 @@ DeepSparse takes advantage of model sparsity to gain its performance speedup.
Sparsification through pruning and quantization is a broadly studied technique, allowing order-of-magnitude reductions in the size and compute needed to execute a network, while maintaining high accuracy. DeepSparse is sparsity-aware, meaning it skips the zeroed-out parameters, shrinking the amount of compute in a forward pass. Since the sparse computation is now memory bound, DeepSparse executes the network depth-wise, breaking the problem into Tensor Columns, vertical stripes of computation that fit in cache.
-
+
Sparse networks with compressed computation, executed depth-wise in cache, allow DeepSparse to deliver GPU-class performance on CPUs!
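As a minimal deployment sketch, assuming the DeepSparse `Pipeline` API and the pruned-quantized YOLOv5s SparseZoo stub used later in this guide:

```python
from deepsparse import Pipeline

# Build a sparsity-aware YOLOv5 pipeline from a SparseZoo stub
yolo_pipeline = Pipeline.create(
    task='yolo',
    model_path='zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned65_quant-none',
)

# Run inference on a local image; returns boxes, scores and labels
results = yolo_pipeline(images=['basilica.jpg'])
```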
@@ -162,7 +162,7 @@ deepsparse.object_detection.annotate --model_filepath zoo:cv/detection/yolov5-s/
Running the above command will create an `annotation-results` folder and save the annotated image inside.
-
+
## Benchmarking Performance
diff --git a/docs/en/yolov5/tutorials/pytorch_hub_model_loading.md b/docs/en/yolov5/tutorials/pytorch_hub_model_loading.md
index 5d9a10ad..31f0750c 100644
--- a/docs/en/yolov5/tutorials/pytorch_hub_model_loading.md
+++ b/docs/en/yolov5/tutorials/pytorch_hub_model_loading.md
@@ -76,7 +76,8 @@ results.pandas().xyxy[0] # im1 predictions (pandas)
# 3 986.00 304.00 1028.0 420.0 0.286865 27 tie
```
-
+
+
For all inference options see YOLOv5 `AutoShape()` forward [method](https://github.com/ultralytics/yolov5/blob/30e4c4f09297b67afedf8b2bcd851833ddc9dead/models/common.py#L243-L252).
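For context, the inference flow that produces the predictions above looks roughly like this (a sketch using the standard PyTorch Hub entry point):

```python
import torch

# Load a pretrained YOLOv5s model from PyTorch Hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Run inference and inspect predictions as a pandas DataFrame
results = model('https://ultralytics.com/images/zidane.jpg')
print(results.pandas().xyxy[0])
```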
diff --git a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
index 8f72af4e..80a28310 100644
--- a/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
+++ b/docs/en/yolov5/tutorials/roboflow_datasets_integration.md
@@ -49,4 +49,4 @@ We have released a custom training tutorial demonstrating all of the above capab
The real world is messy and your model will invariably encounter situations your dataset didn't anticipate. Using [active learning](https://blog.roboflow.com/what-is-active-learning/) is an important strategy to iteratively improve your dataset and model. With the Roboflow and YOLOv5 integration, you can quickly make improvements to your model deployments by using a battle-tested machine learning pipeline.
-
+
diff --git a/docs/en/yolov5/tutorials/running_on_jetson_nano.md b/docs/en/yolov5/tutorials/running_on_jetson_nano.md
index 1cb47454..86846b95 100644
--- a/docs/en/yolov5/tutorials/running_on_jetson_nano.md
+++ b/docs/en/yolov5/tutorials/running_on_jetson_nano.md
@@ -216,7 +216,7 @@ uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.m
deepstream-app -c deepstream_app_config.txt
```
-
+
The above result was obtained on **Jetson Xavier NX** with **FP32** and **YOLOv5s 640x640**. We can see that the **FPS** is around **30**.
@@ -299,7 +299,7 @@ network-mode=1
deepstream-app -c deepstream_app_config.txt
```
-
+
The above result was obtained on **Jetson Xavier NX** with **INT8** and **YOLOv5s 640x640**. We can see that the **FPS** is around **60**.
diff --git a/docs/en/yolov5/tutorials/test_time_augmentation.md b/docs/en/yolov5/tutorials/test_time_augmentation.md
index d9c00398..1ba33de6 100644
--- a/docs/en/yolov5/tutorials/test_time_augmentation.md
+++ b/docs/en/yolov5/tutorials/test_time_augmentation.md
@@ -121,7 +121,7 @@ Results saved to runs/detect/exp
Done. (0.156s)
```
-
+
### PyTorch Hub TTA
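A minimal sketch of TTA from PyTorch Hub: pass `augment=True` at inference time.

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# augment=True enables Test-Time Augmented inference (flips and scales)
results = model('https://ultralytics.com/images/zidane.jpg', augment=True)
results.print()
```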
diff --git a/docs/en/yolov5/tutorials/train_custom_data.md b/docs/en/yolov5/tutorials/train_custom_data.md
index 4713e05b..4fd52901 100644
--- a/docs/en/yolov5/tutorials/train_custom_data.md
+++ b/docs/en/yolov5/tutorials/train_custom_data.md
@@ -19,7 +19,7 @@ pip install -r requirements.txt # install
## Train On Custom Data
-
+
@@ -46,7 +46,7 @@ If this is not possible, you can start from [a public dataset](https://universe.
Once you have collected images, you will need to annotate the objects of interest to create a ground truth for your model to learn from.
-
+
[Roboflow Annotate](https://roboflow.com/annotate?ref=ultralytics) is a simple web-based tool for managing and labeling your images with your team and exporting them in [YOLOv5's annotation format](https://roboflow.com/formats/yolov5-pytorch-txt?ref=ultralytics).
@@ -59,18 +59,18 @@ and upload your dataset to a `Public` workspace, label any unannotated images, t
Note: YOLOv5 does online augmentation during training, so we do not recommend applying any augmentation steps in Roboflow when training with YOLOv5. However, we do recommend applying the following preprocessing steps:
-
+
* **Auto-Orient** - to strip EXIF orientation from your images.
* **Resize (Stretch)** - to the square input size of your model (640x640 is the YOLOv5 default).
Generating a version will give you a point-in-time snapshot of your dataset, so you can always go back and compare your future model training runs against it, even if you add more images or change its configuration later.
-
+
Export in `YOLOv5 Pytorch` format, then copy the snippet into your training script or notebook to download your dataset.
-
+
Now continue with `2. Select a Model`.
@@ -106,14 +106,14 @@ After using an annotation tool to label your images, export your labels to **YOL
- One row per object
- Each row is in `class x_center y_center width height` format.
-- Box coordinates must be in **normalized xywh** format (from 0 - 1). If your boxes are in pixels, divide `x_center` and `width` by image width, and `y_center` and `height` by image height.
+- Box coordinates must be in **normalized xywh** format (from 0 to 1). If your boxes are in pixels, divide `x_center` and `width` by image width, and `y_center` and `height` by image height.
- Class numbers are zero-indexed (start from 0).
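As an illustration of the pixel-to-normalized conversion described above (`to_yolo_format` is a hypothetical helper):

```python
def to_yolo_format(cls_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space box to a normalized YOLO label row."""
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f'{cls_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}'

# Example: a person (class 0) box in a 640x480 image
print(to_yolo_format(0, 98, 345, 420, 462, 640, 480))
```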
-
+
The label file corresponding to the above image contains 2 persons (class `0`) and a tie (class `27`):
-
+
### 1.3 Organize Directories
@@ -124,14 +124,14 @@ Organize your train and val images and labels according to the example below. YO
../datasets/coco128/labels/im0.txt # label
```
-
+
### 2. Select a Model
Select a pretrained model to start training from. Here we select [YOLOv5s](https://github.com/ultralytics/yolov5/blob/master/models/yolov5s.yaml), the second-smallest and fastest model available. See our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints) for a full comparison of all models.
-
+
### 3. Train
@@ -168,7 +168,7 @@ python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt #
To learn more about all the supported Comet features for this integration, check out the [Comet Tutorial](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration). If you'd like to learn more about Comet, head over to our [documentation](https://bit.ly/yolov5-colab-comet-docs). Get started by trying out the Comet Colab Notebook:
[](https://colab.research.google.com/drive/1RG0WOQyxlDlo5Km8GogJpIEJlg_5lyYO?usp=sharing)
-
+
#### ClearML Logging and Automation 🌟 NEW
@@ -182,7 +182,7 @@ You'll get all the great expected features from an experiment manager: live upda
You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply using its unique ID. This will help you keep track of your data without adding extra hassle. Explore the [ClearML Tutorial](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration) for details!
-
+
#### Local Logging
@@ -190,7 +190,7 @@ Training results are automatically logged with [Tensorboard](https://www.tensorf
This directory contains train and val statistics, mosaics, labels, predictions and augmented mosaics, as well as metrics and charts including precision-recall (PR) curves and confusion matrices.
-
+
Results file `results.csv` is updated after each epoch, and then plotted as `results.png` (below) after training completes. You can also plot any `results.csv` file manually:
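A sketch of the manual plotting call, assuming you are inside the YOLOv5 repository where `utils.plots` is importable:

```python
from utils.plots import plot_results

plot_results('path/to/results.csv')  # plots 'results.csv' as 'results.png'
```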
diff --git a/docs/en/yolov5/tutorials/transfer_learning_with_frozen_layers.md b/docs/en/yolov5/tutorials/transfer_learning_with_frozen_layers.md
index a40fa4ba..5fd3376d 100644
--- a/docs/en/yolov5/tutorials/transfer_learning_with_frozen_layers.md
+++ b/docs/en/yolov5/tutorials/transfer_learning_with_frozen_layers.md
@@ -124,19 +124,19 @@ train.py --batch 48 --weights yolov5m.pt --data voc.yaml --epochs 50 --cache --i
The results show that freezing speeds up training, but reduces final accuracy slightly.
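For context, a simplified sketch of what the `--freeze 10` flag above does inside `train.py`:

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # stand-in for the training model

freeze = [f'model.{x}.' for x in range(10)]  # parameter-name prefixes of the first 10 layers
for k, v in model.named_parameters():
    v.requires_grad = True  # train all layers by default
    if any(x in k for x in freeze):
        v.requires_grad = False  # frozen layers receive no gradient updates
```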
-
+
-
+
-
+
### GPU Utilization Comparison
Interestingly, the more modules are frozen, the less GPU memory is required to train and the lower the GPU utilization. This indicates that larger models, or models trained at a larger `--image-size`, may benefit from freezing in order to train faster.
-
+
-
+
## Environments
diff --git a/docs/es/index.md b/docs/es/index.md
index 81ea747d..163f4c7d 100644
--- a/docs/es/index.md
+++ b/docs/es/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, detección de objetos, segmentación de imágenes
-
+
-
+
-
+
-
+
-
+
-
+
@@ -29,7 +29,7 @@ keywords: Ultralytics, YOLOv8, detección de objetos, segmentación de imágenes
-
+
diff --git a/docs/fr/index.md b/docs/fr/index.md
index 52717cdb..be3e9477 100644
--- a/docs/fr/index.md
+++ b/docs/fr/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, détection d'objets, segmentation d'images, appre
-
+
-
+
-
+
-
+
-
+
-
+
diff --git a/docs/hi/index.md b/docs/hi/index.md
index 35359b71..f06b6b8e 100644
--- a/docs/hi/index.md
+++ b/docs/hi/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, वस्तु पता लगाना, छव
-
+
-
+
-
+
-
+
-
+
-
+
@@ -29,7 +29,7 @@ keywords: Ultralytics, YOLOv8, वस्तु पता लगाना, छव
-
+
diff --git a/docs/ja/index.md b/docs/ja/index.md
index 4eca5e41..97f5ec6e 100644
--- a/docs/ja/index.md
+++ b/docs/ja/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, オブジェクト検出, 画像セグメンテ
-
+
-
+
-
+
-
+
-
+
-
+
@@ -29,7 +29,7 @@ keywords: Ultralytics, YOLOv8, オブジェクト検出, 画像セグメンテ
-
+
diff --git a/docs/ko/index.md b/docs/ko/index.md
index 6706d45e..cf6acbe7 100644
--- a/docs/ko/index.md
+++ b/docs/ko/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, 객체 탐지, 이미지 분할, 기계 학습,
-
+
-
+
-
+
-
+
-
+
-
+
@@ -29,7 +29,7 @@ keywords: Ultralytics, YOLOv8, 객체 탐지, 이미지 분할, 기계 학습,
-
+
diff --git a/docs/pt/index.md b/docs/pt/index.md
index e709c04e..cc87e6ee 100644
--- a/docs/pt/index.md
+++ b/docs/pt/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, detecção de objetos, segmentação de imagens,
-
+
-
+
-
+
-
+
-
+
-
+
@@ -29,7 +29,7 @@ keywords: Ultralytics, YOLOv8, detecção de objetos, segmentação de imagens,
-
+
diff --git a/docs/ru/index.md b/docs/ru/index.md
index 1e07272a..aac44064 100644
--- a/docs/ru/index.md
+++ b/docs/ru/index.md
@@ -10,17 +10,17 @@ keywords: Ultralytics, YOLOv8, обнаружение объектов, сегм
-
+
-
+
-
+
-
+
-
+
-
+
@@ -29,7 +29,7 @@ keywords: Ultralytics, YOLOv8, обнаружение объектов, сегм
-
+
diff --git a/docs/update_translations.py b/docs/update_translations.py
index f9676eae..9c27c700 100644
--- a/docs/update_translations.py
+++ b/docs/update_translations.py
@@ -121,6 +121,27 @@ class MarkdownLinkFixer:
return match.group(0)
+ @staticmethod
+ def update_html_tags(content):
+ """Updates HTML tags in docs."""
+ alt_tag = 'MISSING'
+
+ # Remove closing slashes from self-closing HTML tags
+ pattern = re.compile(r'<([^>]+?)\s*/>')
+ content = re.sub(pattern, r'<\1>', content)
+
+ # Find all images without alt tags and add placeholder alt text
+ pattern = re.compile(r'!\[(.*?)\]\((.*?)\)')
+ content, num_replacements = re.subn(pattern, lambda match: f'![{match.group(1) if match.group(1) else alt_tag}]({match.group(2)})', content)
+
+ # Add missing alt tags to HTML images
+ pattern = re.compile(r'<img(?![^>]*alt=)[^>]*src=["\'](.*?)["\'][^>]*>')
+ content, num_replacements = re.subn(pattern, lambda match: match.group(0).replace('>', f' alt="{alt_tag}">', 1),
+ content)
+
+ return content
+
def process_markdown_file(self, md_file_path, lang_dir):
"""Process each markdown file in the language directory."""
print(f'Processing file: {md_file_path}')
@@ -134,6 +155,7 @@ class MarkdownLinkFixer:
content = self.replace_front_matter(content, lang_dir)
content = self.replace_admonitions(content, lang_dir)
content = self.update_iframe(content)
+ content = self.update_html_tags(content)
with open(md_file_path, 'w', encoding='utf-8') as file:
file.write(content)
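To illustrate the new method, here is a standalone sketch of its behavior on a sample snippet (the regexes mirror the reconstruction above, so treat the exact patterns as approximate):

```python
import re

def update_html_tags(content):
    """Sketch of the patch's alt-tag fixer."""
    alt_tag = 'MISSING'
    # Remove closing slashes from self-closing HTML tags
    content = re.sub(r'<([^>]+?)\s*/>', r'<\1>', content)
    # Add placeholder alt text to Markdown images that lack it
    content = re.sub(r'!\[(.*?)\]\((.*?)\)',
                     lambda m: f'![{m.group(1) if m.group(1) else alt_tag}]({m.group(2)})',
                     content)
    # Add missing alt attributes to HTML images
    content = re.sub(r'<img(?![^>]*alt=)[^>]*src=["\'](.*?)["\'][^>]*>',
                     lambda m: m.group(0).replace('>', f' alt="{alt_tag}">', 1),
                     content)
    return content

print(update_html_tags('<img src="banner.png" width="100%" /> '))
# -> <img src="banner.png" width="100%" alt="MISSING"> 
```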
diff --git a/docs/zh/index.md b/docs/zh/index.md
index a67abccc..b8c3ea80 100644
--- a/docs/zh/index.md
+++ b/docs/zh/index.md
@@ -12,17 +12,17 @@ keywords: Ultralytics, YOLOv8, 目标检测, 图像分割, 机器学习, 深度
-
+
-
+
-
+
-
+
-
+
-
+
@@ -31,7 +31,7 @@ keywords: Ultralytics, YOLOv8, 目标检测, 图像分割, 机器学习, 深度
-
+
diff --git a/examples/YOLOv8-Region-Counter/readme.md b/examples/YOLOv8-Region-Counter/readme.md
index 9c0ad168..2acf0a55 100644
--- a/examples/YOLOv8-Region-Counter/readme.md
+++ b/examples/YOLOv8-Region-Counter/readme.md
@@ -4,10 +4,9 @@
- Regions can be adjusted to suit the user's preferences and requirements.
diff --git a/examples/YOLOv8-Segmentation-ONNXRuntime-Python/README.md b/examples/YOLOv8-Segmentation-ONNXRuntime-Python/README.md
index 98e53ce3..9327f1fa 100644
--- a/examples/YOLOv8-Segmentation-ONNXRuntime-Python/README.md
+++ b/examples/YOLOv8-Segmentation-ONNXRuntime-Python/README.md
@@ -43,7 +43,7 @@ python main.py --model-path --source
After running the command, you should see segmentation results similar to this:
-
+
## Advanced Usage
diff --git a/ultralytics/trackers/README.md b/ultralytics/trackers/README.md
index 7bbbaded..2cab3c04 100644
--- a/ultralytics/trackers/README.md
+++ b/ultralytics/trackers/README.md
@@ -1,6 +1,6 @@
# Multi-Object Tracking with Ultralytics YOLO
-
+
Object tracking in the realm of video analytics is a critical task that not only identifies the location and class of objects within the frame but also maintains a unique ID for each detected object as the video progresses. The applications are limitless—ranging from surveillance and security to real-time sports analytics.
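A minimal tracking sketch with the Ultralytics API (paths illustrative):

```python
from ultralytics import YOLO

# Track objects in a video; each detection keeps a persistent ID across frames
model = YOLO('yolov8n.pt')
results = model.track(source='path/to/video.mp4', tracker='bytetrack.yaml', show=True)
```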