diff --git a/docs/en/guides/model-deployment-options.md b/docs/en/guides/model-deployment-options.md index d0d7b325..a487ea4d 100644 --- a/docs/en/guides/model-deployment-options.md +++ b/docs/en/guides/model-deployment-options.md @@ -240,9 +240,9 @@ PaddlePaddle is an open-source deep learning framework developed by Baidu. It is - **Hardware Acceleration**: Supports various hardware accelerations, including Baidu's own Kunlun chips. -#### ncnn +#### NCNN -ncnn is a high-performance neural network inference framework optimized for the mobile platform. It stands out for its lightweight nature and efficiency, making it particularly well-suited for mobile and embedded devices where resources are limited. +NCNN is a high-performance neural network inference framework optimized for the mobile platform. It stands out for its lightweight nature and efficiency, making it particularly well-suited for mobile and embedded devices where resources are limited. - **Performance Benchmarks**: Highly optimized for mobile platforms, offering efficient inference on ARM-based devices. @@ -276,7 +276,7 @@ The following table provides a snapshot of the various deployment options availa | TF Edge TPU | Optimized for Google's Edge TPU hardware | Exclusive to Edge TPU devices | Growing with Google and third-party resources | IoT devices requiring real-time processing | Improvements for new Edge TPU hardware | Google's robust IoT security | Custom-designed for Google Coral | | TF.js | Reasonable in-browser performance | High with web technologies | Web and Node.js developers support | Interactive web applications | TensorFlow team and community contributions | Web platform security model | Enhanced with WebGL and other APIs | | PaddlePaddle | Competitive, easy to use and scalable | Baidu ecosystem, wide application support | Rapidly growing, especially in China | Chinese market and language processing | Focus on Chinese AI applications | Emphasizes data privacy and security | Including Baidu's Kunlun chips | -| ncnn | Optimized for mobile ARM-based devices | Mobile and embedded ARM systems | Niche but active mobile/embedded ML community | Android and ARM systems efficiency | High performance maintenance on ARM | On-device security advantages | ARM CPUs and GPUs optimizations | +| NCNN | Optimized for mobile ARM-based devices | Mobile and embedded ARM systems | Niche but active mobile/embedded ML community | Android and ARM systems efficiency | High performance maintenance on ARM | On-device security advantages | ARM CPUs and GPUs optimizations | This comparative analysis gives you a high-level overview. For deployment, it's essential to consider the specific requirements and constraints of your project, and consult the detailed documentation and resources available for each option. 
diff --git a/docs/en/modes/benchmark.md b/docs/en/modes/benchmark.md index 4f22ee8d..7f8e4573 100644 --- a/docs/en/modes/benchmark.md +++ b/docs/en/modes/benchmark.md @@ -101,6 +101,6 @@ Benchmarks will attempt to run automatically on all possible export formats belo | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz`, `half`, `int8` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` | -| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` | +| [NCNN](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` | See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. diff --git a/docs/en/modes/export.md b/docs/en/modes/export.md index 61576cf9..5859b18b 100644 --- a/docs/en/modes/export.md +++ b/docs/en/modes/export.md @@ -108,4 +108,4 @@ Available YOLOv8 export formats are in the table below. You can export to any fo | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz`, `half`, `int8` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` | -| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` | +| [NCNN](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` | diff --git a/docs/en/modes/predict.md b/docs/en/modes/predict.md index 64a3aa42..d3a9d47d 100644 --- a/docs/en/modes/predict.md +++ b/docs/en/modes/predict.md @@ -683,7 +683,7 @@ The `plot()` method in `Results` objects facilitates visualization of prediction for i, r in enumerate(results): # Plot results image im_bgr = r.plot() # BGR-order numpy array - im_rgb = Image.fromarray(im_array[..., ::-1]) # RGB-order PIL image + im_rgb = Image.fromarray(im_bgr[..., ::-1]) # RGB-order PIL image # Show results to screen (in supported environments) r.show() diff --git a/docs/en/tasks/classify.md b/docs/en/tasks/classify.md index 5608c985..02b9a4bc 100644 --- a/docs/en/tasks/classify.md +++ b/docs/en/tasks/classify.md @@ -176,6 +176,6 @@ Available YOLOv8-cls export formats are in the table below. You can predict or v | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` | ✅ | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` | ✅ | `imgsz`, `half`, `int8` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` | ✅ | `imgsz` | -| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-cls_ncnn_model/` | ✅ | `imgsz`, `half` | +| [NCNN](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-cls_ncnn_model/` | ✅ | `imgsz`, `half` | See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. diff --git a/docs/en/tasks/detect.md b/docs/en/tasks/detect.md index a53877d5..b5d27009 100644 --- a/docs/en/tasks/detect.md +++ b/docs/en/tasks/detect.md @@ -177,6 +177,6 @@ Available YOLOv8 export formats are in the table below. 
You can predict or valid | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz`, `half`, `int8` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` | -| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` | +| [NCNN](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` | See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. diff --git a/docs/en/tasks/obb.md b/docs/en/tasks/obb.md index 47f713bc..c9169098 100644 --- a/docs/en/tasks/obb.md +++ b/docs/en/tasks/obb.md @@ -186,6 +186,6 @@ Available YOLOv8-obb export formats are in the table below. You can predict or v | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-obb_edgetpu.tflite` | ✅ | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-obb_web_model/` | ✅ | `imgsz`, `half`, `int8` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-obb_paddle_model/` | ✅ | `imgsz` | -| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-obb_ncnn_model/` | ✅ | `imgsz`, `half` | +| [NCNN](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-obb_ncnn_model/` | ✅ | `imgsz`, `half` | See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. diff --git a/docs/en/tasks/pose.md b/docs/en/tasks/pose.md index 2c3584c4..431164b1 100644 --- a/docs/en/tasks/pose.md +++ b/docs/en/tasks/pose.md @@ -180,6 +180,6 @@ Available YOLOv8-pose export formats are in the table below. You can predict or | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-pose_edgetpu.tflite` | ✅ | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-pose_web_model/` | ✅ | `imgsz`, `half`, `int8` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-pose_paddle_model/` | ✅ | `imgsz` | -| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-pose_ncnn_model/` | ✅ | `imgsz`, `half` | +| [NCNN](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-pose_ncnn_model/` | ✅ | `imgsz`, `half` | See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. diff --git a/docs/en/tasks/segment.md b/docs/en/tasks/segment.md index b06bc94f..921378a4 100644 --- a/docs/en/tasks/segment.md +++ b/docs/en/tasks/segment.md @@ -182,6 +182,6 @@ Available YOLOv8-seg export formats are in the table below. You can predict or v | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` | ✅ | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` | ✅ | `imgsz`, `half`, `int8` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` | ✅ | `imgsz` | -| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-seg_ncnn_model/` | ✅ | `imgsz`, `half` | +| [NCNN](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n-seg_ncnn_model/` | ✅ | `imgsz`, `half` | See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. diff --git a/docs/en/usage/cli.md b/docs/en/usage/cli.md index 55d01e0f..c71d7d06 100644 --- a/docs/en/usage/cli.md +++ b/docs/en/usage/cli.md @@ -184,7 +184,7 @@ Available YOLOv8 export formats are in the table below. 
You can export to any fo | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz`, `half`, `int8` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` | -| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` | +| [NCNN](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` | ## Overriding default arguments diff --git a/ultralytics/engine/exporter.py b/ultralytics/engine/exporter.py index b68595f5..afe7d3e6 100644 --- a/ultralytics/engine/exporter.py +++ b/ultralytics/engine/exporter.py @@ -16,7 +16,7 @@ TensorFlow Lite | `tflite` | yolov8n.tflite TensorFlow Edge TPU | `edgetpu` | yolov8n_edgetpu.tflite TensorFlow.js | `tfjs` | yolov8n_web_model/ PaddlePaddle | `paddle` | yolov8n_paddle_model/ -ncnn | `ncnn` | yolov8n_ncnn_model/ +NCNN | `ncnn` | yolov8n_ncnn_model/ Requirements: $ pip install "ultralytics[export]" @@ -293,7 +293,7 @@ class Exporter: f[9], _ = self.export_tfjs() if paddle: # PaddlePaddle f[10], _ = self.export_paddle() - if ncnn: # ncnn + if ncnn: # NCNN f[11], _ = self.export_ncnn() # Finish @@ -496,14 +496,14 @@ class Exporter: return f, None @try_export - def export_ncnn(self, prefix=colorstr("ncnn:")): + def export_ncnn(self, prefix=colorstr("NCNN:")): """ - YOLOv8 ncnn export using PNNX https://github.com/pnnx/pnnx. + YOLOv8 NCNN export using PNNX https://github.com/pnnx/pnnx. """ check_requirements("ncnn") import ncnn # noqa - LOGGER.info(f"\n{prefix} starting export with ncnn {ncnn.__version__}...") + LOGGER.info(f"\n{prefix} starting export with NCNN {ncnn.__version__}...") f = Path(str(self.file).replace(self.file.suffix, f"_ncnn_model{os.sep}")) f_ts = self.file.with_suffix(".torchscript") diff --git a/ultralytics/nn/autobackend.py b/ultralytics/nn/autobackend.py index 3fafbbd9..4d8c69c5 100644 --- a/ultralytics/nn/autobackend.py +++ b/ultralytics/nn/autobackend.py @@ -72,7 +72,7 @@ class AutoBackend(nn.Module): | TensorFlow Lite | *.tflite | | TensorFlow Edge TPU | *_edgetpu.tflite | | PaddlePaddle | *_paddle_model | - | ncnn | *_ncnn_model | + | NCNN | *_ncnn_model | This class offers dynamic backend switching capabilities based on the input model format, making it easier to deploy models across various platforms. 
@@ -304,9 +304,9 @@ class AutoBackend(nn.Module): input_handle = predictor.get_input_handle(predictor.get_input_names()[0]) output_names = predictor.get_output_names() metadata = w.parents[1] / "metadata.yaml" - elif ncnn: # ncnn - LOGGER.info(f"Loading {w} for ncnn inference...") - check_requirements("git+https://github.com/Tencent/ncnn.git" if ARM64 else "ncnn") # requires ncnn + elif ncnn: # NCNN + LOGGER.info(f"Loading {w} for NCNN inference...") + check_requirements("git+https://github.com/Tencent/ncnn.git" if ARM64 else "ncnn") # requires NCNN import ncnn as pyncnn net = pyncnn.Net() @@ -431,7 +431,7 @@ class AutoBackend(nn.Module): self.input_handle.copy_from_cpu(im) self.predictor.run() y = [self.predictor.get_output_handle(x).copy_to_cpu() for x in self.output_names] - elif self.ncnn: # ncnn + elif self.ncnn: # NCNN mat_in = self.pyncnn.Mat(im[0].cpu().numpy()) ex = self.net.create_extractor() input_names, output_names = self.net.input_names(), self.net.output_names() diff --git a/ultralytics/utils/benchmarks.py b/ultralytics/utils/benchmarks.py index f925b2d4..b98448da 100644 --- a/ultralytics/utils/benchmarks.py +++ b/ultralytics/utils/benchmarks.py @@ -21,7 +21,7 @@ TensorFlow Lite | `tflite` | yolov8n.tflite TensorFlow Edge TPU | `edgetpu` | yolov8n_edgetpu.tflite TensorFlow.js | `tfjs` | yolov8n_web_model/ PaddlePaddle | `paddle` | yolov8n_paddle_model/ -ncnn | `ncnn` | yolov8n_ncnn_model/ +NCNN | `ncnn` | yolov8n_ncnn_model/ """ import glob
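For reference, a minimal usage sketch of the `ncnn` export path covered by the format tables above, assuming the standard Ultralytics Python API; the weights file `yolov8n.pt` and the sample image URL are illustrative:

```python
from ultralytics import YOLO

# Export a pretrained model to NCNN format; this writes a 'yolov8n_ncnn_model/' directory
model = YOLO("yolov8n.pt")
model.export(format="ncnn", imgsz=640, half=True)  # 'imgsz' and 'half' are the arguments listed for NCNN

# Load the exported model (AutoBackend resolves the '*_ncnn_model' directory) and run inference
ncnn_model = YOLO("yolov8n_ncnn_model")
results = ncnn_model.predict("https://ultralytics.com/images/bus.jpg")  # illustrative image
```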