diff --git a/README.md b/README.md
index 78f6fd41e7dd8d41c0064271849c65b0abbdd2d6..4cdffba16d772b68a0b361a152705e6898a9c53b 100644
--- a/README.md
+++ b/README.md
@@ -1,124 +1,122 @@
# EfficientNet PyTorch
-### Quickstart
+### Quickstart
-Install with `pip install efficientnet_pytorch` and load a pretrained EfficientNet with:
+Install with `pip install efficientnet_pytorch` and load a pretrained EfficientNet with:
```python
from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained('efficientnet-b0')
```
-### Updates
+### Updates
-#### Update (April 2, 2021)
+#### Update (April 2, 2021)
-The [EfficientNetV2 paper](https://arxiv.org/abs/2104.00298) has been released! I am working on implementing it as you read this :)
+The [EfficientNetV2 paper](https://arxiv.org/abs/2104.00298) has been released! I am working on implementing it as you read this :)
-About EfficientNetV2:
-> EfficientNetV2 is a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. To develop this family of models, we use a combination of training-aware neural architecture search and scaling, to jointly optimize training speed and parameter efficiency. The models were searched from the search space enriched with new ops such as Fused-MBConv.
+About EfficientNetV2:
+> EfficientNetV2 is a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. To develop this family of models, we use a combination of training-aware neural architecture search and scaling, to jointly optimize training speed and parameter efficiency. The models were searched from the search space enriched with new ops such as Fused-MBConv.
-Here is a comparison:
+Here is a comparison:
>
+#### Update (Aug 25, 2020)
-#### Update (Aug 25, 2020)
+This update adds:
+ * A new `include_top` (default: `True`) option ([#208](https://github.com/lukemelas/EfficientNet-PyTorch/pull/208)); see the sketch after this list
+ * Continuous testing with [sotabench](https://sotabench.com/)
+ * Code quality improvements and fixes ([#215](https://github.com/lukemelas/EfficientNet-PyTorch/pull/215) [#223](https://github.com/lukemelas/EfficientNet-PyTorch/pull/223))
-This update adds:
- * A new `include_top` (default: `True`) option ([#208](https://github.com/lukemelas/EfficientNet-PyTorch/pull/208))
- * Continuous testing with [sotabench](https://sotabench.com/)
- * Code quality improvements and fixes ([#215](https://github.com/lukemelas/EfficientNet-PyTorch/pull/215) [#223](https://github.com/lukemelas/EfficientNet-PyTorch/pull/223))
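+For illustration, a minimal sketch of the `include_top` option, assuming it can be passed straight through `from_pretrained` as an override parameter; with `include_top=False` the classification head is skipped, so the forward pass returns features rather than class logits:
+```python
+import torch
+from efficientnet_pytorch import EfficientNet
+
+# Assumption: include_top is accepted as an override parameter here.
+model = EfficientNet.from_pretrained('efficientnet-b0', include_top=False)
+model.eval()
+
+with torch.no_grad():
+    feats = model(torch.randn(1, 3, 224, 224))
+print(feats.shape)  # pooled features rather than 1000-way logits
+```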
+#### Update (May 14, 2020)
-#### Update (May 14, 2020)
+This update adds comprehensive comments and documentation (thanks to @workingcoder).
-This update adds comprehensive comments and documentation (thanks to @workingcoder).
+#### Update (January 23, 2020)
-#### Update (January 23, 2020)
-
-This update adds a new category of pre-trained model based on adversarial training, called _advprop_. It is important to note that the preprocessing required for the advprop pretrained models is slightly different from normal ImageNet preprocessing. As a result, by default, advprop models are not used. To load a model with advprop, use:
+This update adds a new category of pre-trained model based on adversarial training, called _advprop_. It is important to note that the preprocessing required for the advprop pretrained models is slightly different from normal ImageNet preprocessing. As a result, advprop models are not used by default. To load a model with advprop, use:
```python
model = EfficientNet.from_pretrained("efficientnet-b0", advprop=True)
```
-There is also a new, large `efficientnet-b8` pretrained model that is only available in advprop form. When using these models, replace ImageNet preprocessing code as follows:
+There is also a new, large `efficientnet-b8` pretrained model that is only available in advprop form. When using these models, replace the ImageNet preprocessing code as follows:
```python
-if advprop: # for models using advprop pretrained weights
+if advprop: # for models using advprop pretrained weights
normalize = transforms.Lambda(lambda img: img * 2.0 - 1.0)
else:
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
```
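+For example, the advprop-aware `normalize` above can slot into a standard torchvision evaluation pipeline along these lines (the 224-pixel crop is the b0 default and is an assumption here; larger variants expect larger inputs):
+```python
+from PIL import Image
+from torchvision import transforms
+
+# advprop normalization, as in the snippet above
+normalize = transforms.Lambda(lambda img: img * 2.0 - 1.0)
+
+preprocess = transforms.Compose([
+    transforms.Resize(256),
+    transforms.CenterCrop(224),  # b0 input size; larger variants expect larger crops
+    transforms.ToTensor(),
+    normalize,
+])
+img = preprocess(Image.open('img.jpg').convert('RGB')).unsqueeze(0)  # 'img.jpg' is a hypothetical path
+```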
-This update also addresses multiple other issues ([#115](https://github.com/lukemelas/EfficientNet-PyTorch/issues/115), [#128](https://github.com/lukemelas/EfficientNet-PyTorch/issues/128)).
+This update also addresses multiple other issues ([#115](https://github.com/lukemelas/EfficientNet-PyTorch/issues/115), [#128](https://github.com/lukemelas/EfficientNet-PyTorch/issues/128)).
-#### Update (October 15, 2019)
+#### Update (October 15, 2019)
-This update allows you to choose whether to use a memory-efficient Swish activation. The memory-efficient version is chosen by default, but it cannot be used when exporting using PyTorch JIT. For this purpose, we have also included a standard (export-friendly) swish activation function. To switch to the export-friendly version, simply call `model.set_swish(memory_efficient=False)` after loading your desired model. This update addresses issues [#88](https://github.com/lukemelas/EfficientNet-PyTorch/pull/88) and [#89](https://github.com/lukemelas/EfficientNet-PyTorch/pull/89).
+This update allows you to choose whether to use a memory-efficient Swish activation. The memory-efficient version is chosen by default, but it cannot be used when exporting with PyTorch JIT. For this purpose, we have also included a standard (export-friendly) Swish activation function. To switch to the export-friendly version, simply call `model.set_swish(memory_efficient=False)` after loading your desired model. This update addresses issues [#88](https://github.com/lukemelas/EfficientNet-PyTorch/pull/88) and [#89](https://github.com/lukemelas/EfficientNet-PyTorch/pull/89).
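+For illustration, a minimal sketch of switching to the export-friendly Swish before tracing with PyTorch JIT (the input size and output path here are assumptions):
+```python
+import torch
+from efficientnet_pytorch import EfficientNet
+
+model = EfficientNet.from_pretrained('efficientnet-b0')
+model.set_swish(memory_efficient=False)  # standard, export-friendly Swish
+model.eval()
+
+traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224))
+traced.save('efficientnet-b0.pt')
+```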
-#### Update (October 12, 2019)
+#### Update (October 12, 2019)
-This update makes the Swish activation function more memory-efficient. It also addresses pull requests [#72](https://github.com/lukemelas/EfficientNet-PyTorch/pull/72), [#73](https://github.com/lukemelas/EfficientNet-PyTorch/pull/73), [#85](https://github.com/lukemelas/EfficientNet-PyTorch/pull/85), and [#86](https://github.com/lukemelas/EfficientNet-PyTorch/pull/86). Thanks to the authors of all the pull requests!
+This update makes the Swish activation function more memory-efficient. It also addresses pull requests [#72](https://github.com/lukemelas/EfficientNet-PyTorch/pull/72), [#73](https://github.com/lukemelas/EfficientNet-PyTorch/pull/73), [#85](https://github.com/lukemelas/EfficientNet-PyTorch/pull/85), and [#86](https://github.com/lukemelas/EfficientNet-PyTorch/pull/86). Thanks to the authors of all the pull requests!
-#### Update (July 31, 2019)
+#### Update (July 31, 2019)
-_Upgrade the pip package with_ `pip install --upgrade efficientnet-pytorch`
+_Upgrade the pip package with_ `pip install --upgrade efficientnet-pytorch`
-The B6 and B7 models are now available. Additionally, _all_ pretrained models have been updated to use AutoAugment preprocessing, which translates to better performance across the board. Usage is the same as before:
+The B6 and B7 models are now available. Additionally, _all_ pretrained models have been updated to use AutoAugment preprocessing, which translates to better performance across the board. Usage is the same as before:
```python
from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained('efficientnet-b7')
```
-#### Update (June 29, 2019)
+#### Update (June 29, 2019)
-This update adds easy model exporting ([#20](https://github.com/lukemelas/EfficientNet-PyTorch/issues/20)) and feature extraction ([#38](https://github.com/lukemelas/EfficientNet-PyTorch/issues/38)).
+This update adds easy model exporting ([#20](https://github.com/lukemelas/EfficientNet-PyTorch/issues/20)) and feature extraction ([#38](https://github.com/lukemelas/EfficientNet-PyTorch/issues/38)); a short sketch follows the list below.
- * [Example: Export to ONNX](#example-export)
- * [Example: Extract features](#example-feature-extraction)
- * Also: fixed a CUDA/CPU bug ([#32](https://github.com/lukemelas/EfficientNet-PyTorch/issues/32))
+ * [Example: Export to ONNX](#example-export)
+ * [Example: Extract features](#example-feature-extraction)
+ * Also: fixed a CUDA/CPU bug ([#32](https://github.com/lukemelas/EfficientNet-PyTorch/issues/32))
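+A rough sketch of both, assuming the `extract_features` method and a plain `torch.onnx.export` call (the file name and opset version are placeholders):
+```python
+import torch
+from efficientnet_pytorch import EfficientNet
+
+model = EfficientNet.from_pretrained('efficientnet-b0')
+model.set_swish(memory_efficient=False)  # export-friendly Swish, see the Oct 15, 2019 update above
+model.eval()
+
+dummy = torch.randn(1, 3, 224, 224)
+
+# Feature extraction: convolutional features before the classification head
+features = model.extract_features(dummy)
+
+# Export to ONNX
+torch.onnx.export(model, dummy, 'efficientnet-b0.onnx', opset_version=11)
+```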
-It is also now incredibly simple to load a pretrained model with a new number of classes for transfer learning:
+It is also now incredibly simple to load a pretrained model with a new number of classes for transfer learning:
```python
model = EfficientNet.from_pretrained('efficientnet-b1', num_classes=23)
```
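+One common finetuning recipe, sketched here as an assumption about typical usage (the `_fc` attribute holds the classifier in this implementation; the optimizer settings are placeholders), is to freeze the backbone and train only the new head:
+```python
+import torch
+from efficientnet_pytorch import EfficientNet
+
+model = EfficientNet.from_pretrained('efficientnet-b1', num_classes=23)
+
+# Freeze the backbone, then train only the reinitialized classifier head.
+for p in model.parameters():
+    p.requires_grad = False
+for p in model._fc.parameters():
+    p.requires_grad = True
+
+optimizer = torch.optim.Adam(model._fc.parameters(), lr=1e-3)
+```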
+#### Update (June 23, 2019)
-#### Update (June 23, 2019)
-
-The B4 and B5 models are now available. Their usage is identical to the other models:
+The B4 and B5 models are now available. Their usage is identical to the other models:
```python
from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained('efficientnet-b4')
```
-### Overview
-This repository contains an op-for-op PyTorch reimplementation of [EfficientNet](https://arxiv.org/abs/1905.11946), along with pre-trained models and examples.
+### Overview
+This repository contains an op-for-op PyTorch reimplementation of [EfficientNet](https://arxiv.org/abs/1905.11946), along with pre-trained models and examples.
-The goal of this implementation is to be simple, highly extensible, and easy to integrate into your own projects. This implementation is a work in progress -- new features are currently being implemented.
+The goal of this implementation is to be simple, highly extensible, and easy to integrate into your own projects. This implementation is a work in progress -- new features are currently being implemented.
-At the moment, you can easily:
- * Load pretrained EfficientNet models
- * Use EfficientNet models for classification or feature extraction
- * Evaluate EfficientNet models on ImageNet or your own images
+At the moment, you can easily:
+ * Load pretrained EfficientNet models
+ * Use EfficientNet models for classification or feature extraction
+ * Evaluate EfficientNet models on ImageNet or your own images
-_Upcoming features_: In the next few days, you will be able to:
- * Train new models from scratch on ImageNet with a simple command
- * Quickly finetune an EfficientNet on your own dataset
- * Export EfficientNet models for production
+_Upcoming features_: In the next few days, you will be able to:
+ * Train new models from scratch on ImageNet with a simple command
+ * Quickly finetune an EfficientNet on your own dataset
+ * Export EfficientNet models for production
-### Table of contents
-1. [About EfficientNet](#about-efficientnet)
-2. [About EfficientNet-PyTorch](#about-efficientnet-pytorch)
-3. [Installation](#installation)
-4. [Usage](#usage)
- * [Load pretrained models](#loading-pretrained-models)
- * [Example: Classify](#example-classification)
- * [Example: Extract features](#example-feature-extraction)
- * [Example: Export to ONNX](#example-export)
-6. [Contributing](#contributing)
+### Table of contents
+1. [About EfficientNet](#about-efficientnet)
+2. [About EfficientNet-PyTorch](#about-efficientnet-pytorch)
+3. [Installation](#installation)
+4. [Usage](#usage)
+ * [Load pretrained models](#loading-pretrained-models)
+ * [Example: Classify](#example-classification)
+ * [Example: Extract features](#example-feature-extraction)
+ * [Example: Export to ONNX](#example-export)
+5. [Contributing](#contributing)
-### About EfficientNet
+### About EfficientNet
-If you're new to EfficientNets, here is an explanation straight from the official TensorFlow implementation:
+If you're new to EfficientNets, here is an explanation straight from the official TensorFlow implementation:
-EfficientNets are a family of image classification models, which achieve state-of-the-art accuracy, yet being an order-of-magnitude smaller and faster than previous models. We develop EfficientNets based on AutoML and Compound Scaling. In particular, we first use [AutoML Mobile framework](https://ai.googleblog.com/2018/08/mnasnet-towards-automating-design-of.html) to develop a mobile-size baseline network, named as EfficientNet-B0; Then, we use the compound scaling method to scale up this baseline to obtain EfficientNet-B1 to B7.
+EfficientNets are a family of image classification models which achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models. We develop EfficientNets based on AutoML and Compound Scaling. In particular, we first use the [AutoML Mobile framework](https://ai.googleblog.com/2018/08/mnasnet-towards-automating-design-of.html) to develop a mobile-size baseline network, named EfficientNet-B0; then we use the compound scaling method to scale up this baseline to obtain EfficientNet-B1 through B7.
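+For a concrete feel of compound scaling, here is a small sketch using the coefficients reported in the [EfficientNet paper](https://arxiv.org/abs/1905.11946) (alpha = 1.2, beta = 1.1, gamma = 1.15); the released B1-B7 variants use hand-rounded coefficients, so treat this as the idealized rule rather than the exact values in the code:
+```python
+# Compound scaling (from the paper): for a compound coefficient phi,
+# depth, width and resolution scale as alpha**phi, beta**phi, gamma**phi,
+# with alpha * beta**2 * gamma**2 ~= 2, so FLOPs grow by roughly 2**phi.
+alpha, beta, gamma = 1.2, 1.1, 1.15
+
+for phi in range(8):  # phi = 0 is roughly B0; larger phi, larger models
+    print(f"phi={phi}: depth x{alpha ** phi:.2f}, "
+          f"width x{beta ** phi:.2f}, resolution x{gamma ** phi:.2f}")
+```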