# Tensorflow_Keras_MNIST_Test

**Repository Path**: edwinjiang703/Tensorflow_Keras_MNIST_Test

## Basic Information

- **Project Name**: Tensorflow_Keras_MNIST_Test
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2018-07-01
- **Last Updated**: 2020-12-19

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

#### Project Introduction

First example: "Conv2d_manual.ipynb"
Manually implement a forward convolution operation:

First, determine how many rows and columns of padding are needed. With SAME padding, whenever the kernel, sliding at the given stride, cannot line up with the input's border, the input is padded with zeros. A rough way to get the count: take the input dimension modulo the stride, then subtract that remainder from the kernel size; the difference is the number of zero rows/columns to add.
Then create a new all-zero tensor whose height and width equal the original plus the rows/columns to be added, copy the original input into it, and run the convolution over this padded tensor to simulate the forward pass.

Second example: "L2_1_1_convolutional_nn_keras.ipynb"
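The first example's padding-and-convolve procedure can be sketched in NumPy roughly as follows (single-channel input and illustrative names; not the notebook's actual code):

```python
import numpy as np

def conv2d_same(x, k, stride=1):
    """Manual forward convolution with SAME-style zero padding (one channel)."""
    h, w = x.shape
    kh, kw = k.shape
    # 1) Padding count: input size modulo stride, subtracted from kernel size.
    rem_h, rem_w = h % stride, w % stride
    pad_h = max(kh - (rem_h if rem_h else stride), 0)
    pad_w = max(kw - (rem_w if rem_w else stride), 0)
    # 2) New all-zero tensor, original input copied into it.
    padded = np.zeros((h + pad_h, w + pad_w), dtype=np.float64)
    top, left = pad_h // 2, pad_w // 2
    padded[top:top + h, left:left + w] = x
    # 3) Slide the kernel over the padded input.
    out_h = (h + pad_h - kh) // stride + 1
    out_w = (w + pad_w - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = padded[i*stride:i*stride + kh, j*stride:j*stride + kw]
            out[i, j] = np.sum(patch * k)
    return out

# With stride 1, SAME padding preserves the spatial size: 5x5 in -> 5x5 out.
y = conv2d_same(np.ones((5, 5)), np.ones((3, 3)))
print(y.shape)  # -> (5, 5)
```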
This notebook trains a TensorFlow model on the dataset under different parameter settings and compares the results. Observations:
1. When first starting to train, keep the kernel and pooling sizes and filter counts small, e.g. 3×3 kernels with 64 filters. Use ReLU as the activation function, set a small learning rate, and use the Adam optimizer. Run one training pass with this setup and look at the result; such a baseline usually performs reasonably well, and the parameters can then be tuned step by step from there.
2. The more kernels there are, and the larger the kernel and pooling sizes, the longer training takes.
3. On this dataset, adjusting the regularization weight seems to have little effect.
4. Adam is indeed more efficient than GradientDescentOptimizer.
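A baseline along the lines of point 1 might look like the following Keras sketch (the single conv block, layer sizes, and the 1e-3 learning rate are illustrative assumptions, not the notebook's exact model):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Rule-of-thumb baseline: small 3x3 kernels, 64 filters, ReLU,
# 2x2 pooling, a small learning rate, Adam. (Illustrative sketch.)
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),          # MNIST images
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),   # 10 digit classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Train once, inspect the curves, then tune from this baseline:
# model.fit(x_train, y_train, epochs=1, batch_size=64)
```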
##### 1. Kernel size 3×3, filter counts 32 and 64, pooling 2×2, learning rate 0.01, ReLU activation, Adam optimizer. Accuracy reaches 0.99 at around step 300. Results:
```
step 100, entropy loss: 0.109056, l2_loss: 1371.652954, total loss: 0.205072 accuracy:0.97
step 200, entropy loss: 0.141164, l2_loss: 966.308411, total loss: 0.208806 accuracy:0.96
step 300, entropy loss: 0.037941, l2_loss: 894.731750, total loss: 0.100572 accuracy:0.99
step 400, entropy loss: 0.039140, l2_loss: 880.414673, total loss: 0.100769 accuracy:1.0
step 500, entropy loss: 0.059169, l2_loss: 865.508728, total loss: 0.119755 accuracy:0.99
step 600, entropy loss: 0.037221, l2_loss: 843.533569, total loss: 0.096268 accuracy:0.99
step 700, entropy loss: 0.056091, l2_loss: 866.858276, total loss: 0.116771 accuracy:0.99
step 800, entropy loss: 0.054191, l2_loss: 905.898926, total loss: 0.117604 accuracy:1.0
```
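The logged numbers are consistent with `total loss = entropy loss + λ · l2_loss`, where λ is the L2 regularization weight of 7e-5 mentioned under setting 4 below. Checking the step-100 line:

```python
# Step 100 of setting 1: total = entropy + lambda * l2, lambda = 7e-5
entropy, l2, lam = 0.109056, 1371.652954, 7e-5
total = entropy + lam * l2
print(round(total, 6))  # -> 0.205072, matching the logged total loss
```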
##### 2. Kernel size 2×2, filter counts 32 and 64, pooling 2×2, learning rate 0.001, ReLU activation, Adam optimizer. Accuracy already reaches 0.98 at step 100, somewhat better than the first setting:
```
step 100, entropy loss: 0.163552, l2_loss: 639.138245, total loss: 0.208292 accuracy:0.98
step 200, entropy loss: 0.095588, l2_loss: 611.325317, total loss: 0.138381 accuracy:0.98
step 300, entropy loss: 0.032762, l2_loss: 589.939392, total loss: 0.074058 accuracy:1.0
step 400, entropy loss: 0.133827, l2_loss: 579.362488, total loss: 0.174382 accuracy:0.98
step 500, entropy loss: 0.057785, l2_loss: 572.366394, total loss: 0.09785 accuracy:0.99
step 600, entropy loss: 0.014195, l2_loss: 559.957031, total loss: 0.053392 accuracy:1.0
step 700, entropy loss: 0.018136, l2_loss: 546.154602, total loss: 0.056367 accuracy:1.0
```
##### 3. Kernel size 4×4, filter counts 64 and 128, pooling 4×4 but with pooling strides of 10×10, learning rate 0.001, ReLU activation, Adam optimizer. Accuracy is poor: with such large pooling strides, much of the information extracted by the preceding convolutions is never sampled, so training accuracy drops. Training also takes longer than before:
```
step 100, entropy loss: 0.566385, l2_loss: 186.996628, total loss: 0.579475 accuracy:0.82
step 200, entropy loss: 0.293314, l2_loss: 204.239655, total loss: 0.307611 accuracy:0.93
step 300, entropy loss: 0.452982, l2_loss: 215.264740, total loss: 0.468051 accuracy:0.86
step 400, entropy loss: 0.282853, l2_loss: 226.158127, total loss: 0.298684 accuracy:0.93
step 500, entropy loss: 0.282417, l2_loss: 236.620087, total loss: 0.298981 accuracy:0.92
step 600, entropy loss: 0.242230, l2_loss: 245.945801, total loss: 0.259447 accuracy:0.94
```
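The drop in setting 3 is plausible from the output geometry alone: a 4×4 pooling window with stride 10 on a 28×28 feature map keeps only a 3×3 grid of pooled activations and never touches most input positions. A rough size calculation (assuming VALID pooling, and that the 2×2 pools in the other settings use the usual stride equal to the pool size):

```python
def pool_out_size(n, pool, stride):
    """Spatial output size of VALID pooling on an n x n feature map."""
    return (n - pool) // stride + 1

# 28x28 MNIST feature map, 4x4 pool with stride 10 (setting 3 above)
print(pool_out_size(28, 4, 10))  # -> 3
# versus a 2x2 pool with stride 2 as in settings 1, 2 and 4
print(pool_out_size(28, 2, 2))   # -> 14
```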
##### 4. Regularization weight increased from 7e-5 to 3e-3; kernel size 3×3, filter counts 32 and 64, pooling 2×2, learning rate 0.001, ReLU activation, Adam optimizer. Accuracy is about the same as in setting 1:
```
step 100, entropy loss: 0.136291, l2_loss: 102.963348, total loss: 0.445181 accuracy:0.98
step 200, entropy loss: 0.172736, l2_loss: 66.793083, total loss: 0.373115 accuracy:0.96
step 300, entropy loss: 0.064189, l2_loss: 61.409866, total loss: 0.248418 accuracy:0.99
step 400, entropy loss: 0.064903, l2_loss: 57.678463, total loss: 0.237938 accuracy:0.99
step 500, entropy loss: 0.162079, l2_loss: 50.004963, total loss: 0.312093 accuracy:0.96
step 600, entropy loss: 0.112546, l2_loss: 49.293453, total loss: 0.260427 accuracy:0.99
step 700, entropy loss: 0.087496, l2_loss: 49.059624, total loss: 0.234675 accuracy:0.99
step 800, entropy loss: 0.075206, l2_loss: 46.665062, total loss: 0.215201 accuracy:1.0
```
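The same formula `total = entropy + λ · l2_loss` also reproduces setting 4's logs with λ = 3e-3, and makes point 3 of the summary concrete: the regularization term is now a much larger share of the total loss, yet accuracy barely changes:

```python
# Step-100 lines: setting 4 (lambda = 3e-3) vs. setting 1 (lambda = 7e-5)
entropy4, l2_4 = 0.136291, 102.963348
entropy1, l2_1 = 0.109056, 1371.652954
reg4 = 3e-3 * l2_4  # regularization contribution in setting 4
reg1 = 7e-5 * l2_1  # regularization contribution in setting 1
print(round(entropy4 + reg4, 6))       # -> 0.445181, matching the log
print(round(reg4, 3), round(reg1, 3))  # -> 0.309 0.096
```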