# Week8-densenet

**Repository Path**: liam1030/Week8-densenet

## Basic Information

- **Project Name**: Week8-densenet
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2018-11-06
- **Last Updated**: 2020-12-19

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

### Trying DenseNet-121 on CIFAR-10

Screenshots of the run are attached. Evaluation output:

INFO:tensorflow:Evaluation [20/200]
INFO:tensorflow:Evaluation [40/200]
INFO:tensorflow:Evaluation [60/200]
INFO:tensorflow:Evaluation [80/200]
INFO:tensorflow:Evaluation [100/200]
INFO:tensorflow:Evaluation [120/200]
INFO:tensorflow:Evaluation [140/200]
INFO:tensorflow:Evaluation [160/200]
INFO:tensorflow:Evaluation [180/200]
INFO:tensorflow:Evaluation [200/200]

**eval/Recall_5[0.9796] eval/Accuracy[0.7069]**

1. Question 1: In the initial code, every bottleneck layer adds dropout on top of the paper's BN-ReLU-Conv structure. Does this discard too much information?
2. On the growth rate: the input to layer l has k0 + k × (l − 1) feature maps, where k is the growth rate, i.e. each layer produces k new feature maps that are passed on to layer l and all later layers.
3. On dense connectivity: every layer receives the outputs of all preceding layers, so an L-layer block has L(L + 1) / 2 connections (counting the block input). Because earlier features are reused everywhere, each layer only needs to produce a small number k of feature maps, which greatly reduces the parameter count while making full use of every layer's features. A minimal sketch of the corresponding bottleneck layer and dense block is given after this list.
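
For reference, here is a minimal sketch of the bottleneck layer and dense block described in notes 1–3. It is written with tf.keras layers purely as an illustration, not the repository's actual code (which appears to be TensorFlow-Slim based, judging by the evaluation log); the names `bottleneck_layer`, `dense_block`, `growth_rate` and the hyperparameter values are assumptions for this sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers


def bottleneck_layer(x, growth_rate, dropout_rate=0.2):
    """BN-ReLU-Conv(1x1) -> BN-ReLU-Conv(3x3), each conv followed by dropout.

    The dropout after every conv mirrors the behaviour questioned in note 1;
    set dropout_rate=0.0 for the plain BN-ReLU-Conv variant from the paper.
    """
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    y = layers.Conv2D(4 * growth_rate, 1, use_bias=False)(y)  # 1x1 bottleneck conv
    y = layers.Dropout(dropout_rate)(y)

    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(growth_rate, 3, padding="same", use_bias=False)(y)  # 3x3 conv, outputs k maps
    y = layers.Dropout(dropout_rate)(y)
    return y


def dense_block(x, num_layers, growth_rate):
    """Layer l sees the concatenation of all previous outputs,
    i.e. k0 + k * (l - 1) input feature maps."""
    for _ in range(num_layers):
        y = bottleneck_layer(x, growth_rate)
        x = layers.Concatenate()([x, y])  # dense connection: reuse all earlier features
    return x


# Usage: a 6-layer dense block with growth rate k = 32 on a CIFAR-sized input.
inputs = tf.keras.Input(shape=(32, 32, 24))  # k0 = 24 input feature maps
outputs = dense_block(inputs, num_layers=6, growth_rate=32)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 32, 32, 24 + 6 * 32) = (None, 32, 32, 216)
```

Note that each 3x3 convolution only produces k feature maps regardless of how wide the concatenated input has grown, which is where the parameter savings described in note 3 come from.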