diff --git a/.jenkins/check/config/filter_linklint.txt b/.jenkins/check/config/filter_linklint.txt
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..93b2f97ac538b9872e979490a1645f65037c2385 100644
--- a/.jenkins/check/config/filter_linklint.txt
+++ b/.jenkins/check/config/filter_linklint.txt
@@ -0,0 +1 @@
+https://www.mindspore.cn/sig/MindSpore%20Lite
diff --git a/README.md b/README.md
index a9101d90b33decc0a4248c5166999c8ccf1d5bf7..475c5c8ca9174e44b3d935875a42b159c8f05136 100644
--- a/README.md
+++ b/README.md
@@ -4,52 +4,124 @@
MindSpore Lite provides lightweight AI inference acceleration capabilities for different hardware devices, enabling intelligent applications and providing end-to-end solutions for developers. It offers development friendly, efficient, and flexible deployment experiences for algorithm engineers and data scientists, helping the AI software and hardware application ecosystem thrive. In the future, MindSpore Lite will work with the MindSpore AI community to enrich the AI software and hardware application ecosystem.
-
-
For more details please check out our [MindSpore Lite Architecture Guide](https://www.mindspore.cn/lite/docs/en/master/reference/architecture_lite.html).
-### MindSpore Lite features
+## Example
+
+MindSpore Lite doubles inference performance for AIGC, speech, and CV models, and has been deployed commercially in multiple Huawei flagship smartphones. As shown in the figure below, MindSpore Lite supports CV tasks such as image style transfer and image segmentation.
+
+
+
+## Quick Start
+
+1. Compile
+
+    MindSpore Lite supports multiple hardware backends:
+
+    - For server-side devices, users can compile dynamic libraries and Python wheel packages by setting compilation options such as `MSLITE_ENABLE_CLOUD_INFERENCE`, for inference on Ascend and CPU hardware. For a detailed compilation tutorial, please refer to [the official website of MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/mindir/build.html).
+
+    - For device-side and edge devices, different dynamic libraries can be built with different cross-compilation toolchains. For a detailed compilation tutorial, please refer to [the official website of MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/build/build.html).
+
+2. Model conversion
+
+    MindSpore Lite can convert models exported from AI frameworks such as MindSpore, ONNX, and TensorFlow into MindSpore Lite IR. To achieve more efficient model inference, MindSpore Lite supports converting models into either the `.mindir` or the `.ms` format, where:
+
+    - The `.mindir` model is used for inference on server-side devices and is more compatible with the model structure exported by the MindSpore training framework. It is mainly suitable for Ascend cards and x86/Arm CPU hardware. For detailed conversion methods, please refer to [the Conversion Tool Tutorial](https://www.mindspore.cn/lite/docs/en/master/mindir/converter.html).
+
+    - The `.ms` model is mainly used for inference on device-side and edge devices, and is mainly suitable for terminal hardware such as the Kirin NPU and Arm CPUs. To reduce the model file size, the `.ms` model is serialized and deserialized with Protobuf. For detailed instructions on how to use the conversion tool, please refer to [the Conversion Tool Tutorial](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html).
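+
+    For example, assuming the Python wheel package built above is installed and includes the converter module, an ONNX model could be converted through the Python converter API roughly as follows (the file names are placeholders, and the exact interface may differ between MindSpore Lite versions):
+
+    ```python
+    import mindspore_lite as mslite
+
+    # Convert an ONNX model for MindSpore Lite inference; the paths are placeholders.
+    # Whether the output is a .mindir or a .ms file depends on the build and converter options.
+    converter = mslite.Converter()
+    converter.convert(fmk_type=mslite.FmkType.ONNX,
+                      model_file="model.onnx",
+                      output_file="model")
+    ```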
+
+3. Model inference
+
+    MindSpore Lite provides Python, C++, and Java APIs, along with complete usage examples for each:
+
+    - Python API use case
+
+      [Inference of `.mindir` models with the Python API](https://www.mindspore.cn/lite/docs/en/master/mindir/runtime_python.html)
+
+    - C/C++ API use cases
+
+      [Inference of `.mindir` models with the C/C++ API](https://www.mindspore.cn/lite/docs/en/master/mindir/runtime_cpp.html)
+
+      [Inference of `.ms` models with the C/C++ API](https://developer.huawei.com/consumer/en/doc/harmonyos-guides/mindspore-guidelines-based-native)
+
+    - Java API use case
+
+      [Inference of `.mindir` models with the Java API](https://www.mindspore.cn/lite/docs/en/master/mindir/runtime_java.html)
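+
+    As a minimal illustration of the Python API above, the following sketch loads a converted `.mindir` model and runs a single inference pass on the CPU. The file name and the random input are placeholders, and the exact interface may differ between MindSpore Lite versions; the linked tutorials are the authoritative reference.
+
+    ```python
+    import numpy as np
+    import mindspore_lite as mslite
+
+    # Build a CPU context and load a converted .mindir model (the path is a placeholder).
+    context = mslite.Context()
+    context.target = ["cpu"]
+    model = mslite.Model()
+    model.build_from_file("model.mindir", mslite.ModelType.MINDIR, context)
+
+    # Fill the first input with random data matching its shape, then run inference.
+    inputs = model.get_inputs()
+    inputs[0].set_data_from_numpy(np.random.rand(*inputs[0].shape).astype(np.float32))
+    outputs = model.predict(inputs)
+    print(outputs[0].get_data_to_numpy().shape)
+    ```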
+
+## Technical Solution
+
+### MindSpore Lite Features
+
+
1. Terminal and Cloud one-stop inference deployment
- - Provide end-to-end processes for model transformation optimization, deployment, and inference.
- - The unified IR realizes the device-cloud AI application integration.
+
+    - Provides an end-to-end process covering model conversion, optimization, deployment, and inference.
+
+    - A unified IR enables integrated device-cloud AI applications.
2. Lightweight
- - Provides model compress, which could help to improve performance as well.
- - Provides the ultra-lightweight reasoning solution MindSpore Lite Micro to meet the deployment requirements in extreme environments such as smart watches and headphones.
+
+    - Provides model compression, which reduces model size and can also improve performance.
+
+    - Provides the ultra-lightweight inference solution MindSpore Lite Micro to meet deployment requirements in highly constrained environments such as smart watches and headphones.
3. High-performance
- - The built-in high-performance kernel computing library NNACL supports high-performance inference for dedicated chips such as CPU, NNRt, and Ascend, maximizing hardware computing power while minimizing inference latency and power consumption.
- - Assembly code to improve performance of kernel operators. Supports CPU, GPU, and NPU.
+
+    - The built-in high-performance kernel library NNACL supports efficient inference on backends such as CPU, NNRt, and Ascend, maximizing hardware computing power while minimizing inference latency and power consumption.
+
+    - Assembly-level optimizations improve the performance of kernel operators, with heterogeneous scheduling across CPU, GPU, and NPU.
4. Versatility
- - Support deployment of multiple hardware such as server-side Ascend and CPU.
- - Supports HarmonyOS and Android mobile operating systems.
-## MindSpore Lite AI deployment procedure
+    - Supports deployment on multiple hardware platforms, such as server-side Ascend and CPU.
+
+    - Supports HarmonyOS and Android mobile operating systems.
+
+## Learning More About MindSpore Lite
+
+If you want to learn more and start using MindSpore Lite, refer to the following resources:
+
+### API and Documentation
+
+1. API documentation:
+
+ - [C++ API documentation](https://www.mindspore.cn/lite/api/en/master/api_cpp/mindspore.html)
+
+ - [Java API documentation](https://www.mindspore.cn/lite/api/en/master/api_java/class_list.html)
+
+ - [Python API documentation](https://www.mindspore.cn/lite/api/en/master/mindspore_lite.html)
+
+    - [HarmonyOS API documentation](https://developer.huawei.com/consumer/en/doc/harmonyos-references/development-intro-api)
+
+2. [MindSpore Lite official website documentation](https://www.mindspore.cn/lite/docs/en/master/index.html)
+
+### Key Features and Capabilities
+
+- [Inference on Ascend hardware](https://www.mindspore.cn/lite/docs/en/master/mindir/runtime_python.html)
-1. Model selection and personalized training
+- [HarmonyOS support](https://developer.huawei.com/consumer/cn/sdk/mindspore-lite-kit)
- Select a new model or use an existing model for incremental training using labeled data. When designing a model for mobile device, it is necessary to consider the model size, accuracy and calculation amount.
+- [Post-training quantization](https://www.mindspore.cn/lite/docs/en/master/advanced/quantization.html)
- The MindSpore Lite team provides a series of pre-training models used for image classification, object detection. You can use these pre-trained models in your application.
+- [Lightweight Micro inference deployment](https://www.mindspore.cn/lite/docs/en/master/advanced/micro.html)
- The pre-trained model provided by MindSpore: [Image Classification](https://download.mindspore.cn/model_zoo/official/lite/). More models will be provided in the feature.
+- [Benchmark debugging tool](https://www.mindspore.cn/lite/docs/en/master/tools/benchmark.html)
- MindSpore allows you to retrain pre-trained models to perform other tasks.
+## Communication and Feedback
-2. Model converter and optimization
+- You are welcome to submit questions, bug reports, and suggestions through [Gitee Issues](https://gitee.com/mindspore/mindspore-lite/issues).
- If you use MindSpore or a third-party model, you need to use [MindSpore Lite Model Converter Tool](https://www.mindspore.cn/lite/docs/en/master/converter/converter_tool.html) to convert the model into MindSpore Lite model. The MindSpore Lite model converter tool provides the converter of TensorFlow Lite, Caffe, ONNX to MindSpore Lite model, fusion and quantization could be introduced during convert procedure.
+- Join the [Community Forum](https://discuss.mindspore.cn/c/mindspore-lite/38) for technical discussions and troubleshooting.
- MindSpore Lite also provides a tool to convert models running on IoT devices .
+- Join the MindSpore Lite [SIG](https://www.mindspore.cn/sig/MindSpore%20Lite) to help manage and improve the workflow, and to participate in discussions and exchanges.
-3. Model deployment
+## Related Communities
- This stage mainly realizes model deployment, including model management, deployment, operation and maintenance monitoring, etc.
+- [MindSpore](https://gitee.com/mindspore/mindspore)
-4. Inference
+- [MindOne](https://github.com/mindspore-lab/mindone)
- Load the model and perform inference. [Inference](https://www.mindspore.cn/lite/docs/en/master/infer/runtime_cpp.html) is the process of running input data through the model to get output.
+- [MindYOLO](https://github.com/mindspore-lab/mindyolo)
- MindSpore Lite provides pre-trained model that can be deployed on mobile device [example](https://www.mindspore.cn/lite/examples/en).
+- [OpenHarmony](https://gitcode.com/openharmony/third_party_mindspore)
diff --git a/README_CN.md b/README_CN.md
index 32fd121d49d090483df773742d9697e49aafdb3a..69cf2eb87d66351f0257599961730fa3c5982cde 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -1,62 +1,126 @@
[View English](./README.md)
-## MindSpore Lite介绍
+## MindSpore Lite简介
MindSpore Lite面向不同硬件设备提供轻量化AI推理加速能力,使能智能应用,为开发者提供端到端的解决方案,为算法工程师和数据科学家提供开发友好、运行高效、部署灵活的体验,帮助人工智能软硬件应用生态繁荣发展,未来MindSpore Lite将与MindSpore AI社区一起,致力于丰富AI软硬件应用生态。
-
+## 效果示例
-欲了解更多详情,请查看我们的[MindSpore Lite 总体架构](https://www.mindspore.cn/lite/docs/zh-CN/master/reference/architecture_lite.html)。
+MindSpore Lite针对AIGC、语音类算法以及CV类模型推理,实现推理性能倍增,已在华为多款旗舰手机上商用落地。如下图所示,MindSpore Lite支持CV算法的图像风格迁移与图像分割。
-## MindSpore Lite技术特点
+
+
+## 快速入门
+
+1. 编译
+
+ MindSpore Lite支持多种不同的硬件后端,其中:
+
+ - 针对服务侧设备,用户可以通过设置`MSLITE_ENABLE_CLOUD_INFERENCE`等编译选项,编译出动态库以及Python wheel包,用于昇腾、CPU硬件的推理,详细编译教程,可以参考[MindSpore Lite官网](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/build.html)。
+
+ - 针对端、边设备,可以通过不同的交叉编译工具链编译出不同的动态库,详细编译教程,可以参考[MindSpore Lite官网](https://www.mindspore.cn/lite/docs/zh-CN/master/build/build.html)。
+
+2. 模型转换
+
+ MindSpore Lite支持将MindSpore、ONNX、TF等多种AI框架序列化出的模型转换成MindSpore Lite格式的IR,为了更高效地实现模型推理,MindSpore Lite支持将模型转换成`.mindir`格式或`.ms`格式,其中:
+
+ - `.mindir`模型:用于服务侧设备的推理,可以更好地兼容MindSpore训练框架导出的模型结构,主要适用于昇腾卡,以及X86/Arm架构的CPU硬件,详细的转换方法可以参考[转换工具教程](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/converter.html)。
+
+    - `.ms`模型:主要用于端、边设备的推理,主要适用于麒麟NPU、Arm架构CPU等终端硬件。为了更好地降低模型文件大小,`.ms`模型通过Protobuf进行序列化与反序列化,详细的转换工具使用方式可以参考[转换工具教程](https://www.mindspore.cn/lite/docs/zh-CN/master/converter/converter_tool.html)。
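+
+    例如,假设已安装上文编译出的、带转换模块的Python whl包,可以通过Python转换API对ONNX模型进行转换,示意如下(文件名为占位符,具体接口以对应版本的官方文档为准):
+
+    ```python
+    import mindspore_lite as mslite
+
+    # 将ONNX模型转换后用于MindSpore Lite推理,路径均为占位符。
+    # 输出为.mindir还是.ms文件,取决于编译方式与转换选项。
+    converter = mslite.Converter()
+    converter.convert(fmk_type=mslite.FmkType.ONNX,
+                      model_file="model.onnx",
+                      output_file="model")
+    ```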
+
+3. 模型推理
+
+ MindSpore Lite提供了Python、C++、Java三种API,并提供了对应API的完整使用用例:
+
+ - Python API接口用例
+
+ [`.mindir`基于Python接口推理用例](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/runtime_python.html)
+
+ - C/C++完整用例
+
+ [`.mindir`模型基于C/C++接口推理用例](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/runtime_cpp.html)
+
+ [`.ms`模型基于C/C++接口推理用例](https://developer.huawei.com/consumer/cn/doc/harmonyos-guides/mindspore-guidelines-based-native)
+
+ - Java完整用例
+
+ [`.mindir`模型基于Java接口推理用例](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/runtime_java.html)
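+
+    作为上述Python API的一个最小示意,下面的代码在CPU上加载转换后的`.mindir`模型并执行一次推理(文件名与随机输入均为占位符,具体接口以对应版本的官方教程为准):
+
+    ```python
+    import numpy as np
+    import mindspore_lite as mslite
+
+    # 构建CPU上下文并加载转换后的.mindir模型(路径为占位符)。
+    context = mslite.Context()
+    context.target = ["cpu"]
+    model = mslite.Model()
+    model.build_from_file("model.mindir", mslite.ModelType.MINDIR, context)
+
+    # 按第一个输入的shape填充随机数据并执行推理。
+    inputs = model.get_inputs()
+    inputs[0].set_data_from_numpy(np.random.rand(*inputs[0].shape).astype(np.float32))
+    outputs = model.predict(inputs)
+    print(outputs[0].get_data_to_numpy().shape)
+    ```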
+
+## 技术方案
+
+
+
+### MindSpore Lite技术特点
1. 端云一站式推理部署
- - 提供模型转换优化、部署和推理端到端流程。
- - 统一的IR实现端云AI应用一体化。
+ - 提供模型转换优化、部署和推理端到端流程。
+
+ - 统一的IR实现端云AI应用一体化。
2. 超轻量
- - 支持模型量化压缩,模型更小跑得更快。
- - 提供超轻量的推理解决方案MindSpore Lite Micro,满足智能手表、耳机等极限环境下的部署要求。
+ - 支持模型量化压缩,模型更小跑得更快。
+
+ - 提供超轻量的推理解决方案MindSpore Lite Micro,满足智能手表、耳机等极限环境下的部署要求。
3. 高性能推理
- - 自带的高性能内核计算库NNACL,支持CPU、NNRt、Ascend等专用芯片高性能推理,最大化发挥硬件算力,最小化推理时延和功耗。
- - 汇编级优化,支持CPU、GPU、NPU异构调度,最大化发挥硬件算力,最小化推理时延和功耗。
+ - 自带的高性能内核计算库NNACL,支持`CPU`、`NNRt`、`Ascend`等专用芯片高性能推理,最大化发挥硬件算力,最小化推理时延和功耗。
+
+ - 汇编级优化,支持`CPU`、`GPU`、`NPU`异构调度,最大化发挥硬件算力,最小化推理时延和功耗。
4. 广覆盖
- - 支持服务端Ascend、CPU等多硬件部署。
- - 支持鸿蒙、Android手机操作系统。
+ - 支持服务端`Ascend`、`CPU`等多硬件部署。
+
+ - 支持`鸿蒙`、`Android`手机操作系统。
+
+## 进一步了解MindSpore Lite
+
+如果您想要进一步学习和使用 MindSpore Lite,可以参考以下内容:
+
+### API与文档
+
+1. API文档:
+
+ - [C++ API文档](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html)
+
+ - [Java API文档](https://www.mindspore.cn/lite/api/zh-CN/master/api_java/class_list.html)
+
+ - [Python API文档](https://www.mindspore.cn/lite/api/zh-CN/master/mindspore_lite.html)
+
+ - [鸿蒙API文档](https://developer.huawei.com/consumer/cn/doc/harmonyos-references/capi-mindspore)
+
+2. [MindSpore Lite官网文档](https://www.mindspore.cn/lite/docs/zh-CN/master/index.html)
-## MindSpore Lite AI部署流程
+### 关键特性能力
-1. 模型选择和个性化训练
+- [支持昇腾硬件推理](https://www.mindspore.cn/lite/docs/zh-CN/master/mindir/runtime_python.html)
- 包括选择新模型或对已有模型,利用标注数据进行增量训练。面向端侧设计模型时,需要考虑模型大小、精度和计算量。
+- [支持鸿蒙](https://developer.huawei.com/consumer/cn/sdk/mindspore-lite-kit)
- MindSpore Lite团队提供了一系列预训练模型,用于解决图像分类、目标检测等场景的学习问题。可以在您的应用程序中使用这些预训练模型对应的终端模型。
+- [训练后量化](https://www.mindspore.cn/lite/docs/zh-CN/master/advanced/quantization.html)
- MindSpore提供的预训练模型:[图像分类(Image Classification)](https://download.mindspore.cn/model_zoo/official/lite/)。后续MindSpore团队会增加更多的预置模型。
+- [轻量化Micro推理部署](https://www.mindspore.cn/lite/docs/zh-CN/master/advanced/micro.html#模型推理代码生成)
- MindSpore允许您重新训练预训练模型,以执行其他任务。比如:使用预训练的图像分类模型,可以重新训练来识别新的图像类型。
+- [基准调试工具](https://www.mindspore.cn/lite/docs/zh-CN/master/tools/benchmark.html)
-2. 模型转换/优化
+## 交流与反馈
- 如果您使用MindSpore或第三方训练的模型,需要使用[MindSpore Lite模型转换工具](https://www.mindspore.cn/lite/docs/zh-CN/master/converter/converter_tool.html)转换成MindSpore Lite模型格式。MindSpore Lite模型转换工具不仅提供了将TensorFlow Lite、Caffe、ONNX等模型格式转换为MindSpore Lite模型格式,还提供了算子融合、量化等功能。
+- 欢迎您通过[Gitee Issues](https://gitee.com/mindspore/mindspore-lite/issues)来提交问题、报告与建议。
- MindSpore Lite还提供了将IoT设备上运行的模型转换成.C代码的生成工具。
+- 欢迎您通过[社区论坛](https://discuss.mindspore.cn/c/mindspore-lite/38)进行技术、问题交流。
- 经过上述两个部署,您已经得到端侧可以部署的模型。
+- 欢迎您通过MindSpore Lite [SIG](https://www.mindspore.cn/sig/MindSpore%20Lite)来管理和改善工作流程,参与讨论交流。
-3. 模型部署
+## 周边社区
- 这个阶段主要实现模型部署,包括模型管理、部署和运维监控等。
+- [MindSpore](https://gitee.com/mindspore/mindspore)
-4. 模型推理
+- [MindOne](https://github.com/mindspore-lab/mindone)
- 主要完成模型推理工作,即加载模型,完成模型相关的所有计算。[推理](https://www.mindspore.cn/lite/docs/zh-CN/master/infer/runtime_cpp.html)是通过模型运行输入数据,获取预测的过程。
+- [MindYOLO](https://github.com/mindspore-lab/mindyolo)
- MindSpore Lite提供了预训练模型部署在智能终端的[样例](https://www.mindspore.cn/lite/examples)。
+- [OpenHarmony](https://gitcode.com/openharmony/third_party_mindspore)
diff --git a/docs/screenshot_001.png b/docs/screenshot_001.png
new file mode 100644
index 0000000000000000000000000000000000000000..6e42ed93a0501159424d9fae2c97e933c5d3db02
Binary files /dev/null and b/docs/screenshot_001.png differ
diff --git a/docs/screenshot_002.png b/docs/screenshot_002.png
new file mode 100644
index 0000000000000000000000000000000000000000..d78f4bdc3fe81a111e7fff44f98d9e241b2d6cc3
Binary files /dev/null and b/docs/screenshot_002.png differ
diff --git a/docs/screenshot_003.png b/docs/screenshot_003.png
new file mode 100644
index 0000000000000000000000000000000000000000..b5dcd1022da84ac36f4765f08e5d011c44447a33
Binary files /dev/null and b/docs/screenshot_003.png differ
diff --git a/docs/screenshot_004.png b/docs/screenshot_004.png
new file mode 100644
index 0000000000000000000000000000000000000000..4698da3b3c049561d7c6a1a5bf5a5c7af661ffb1
Binary files /dev/null and b/docs/screenshot_004.png differ