# torchTutorial

A PyTorch learning tutorial with practice code.

- **Primary Language**: Python
- **Default Branch**: master

## README

#### Conv2d

- Parameters

```python
Args:
    in_channels (int): Number of channels in the input image
    out_channels (int): Number of channels produced by the convolution
    kernel_size (int or tuple): Size of the convolving kernel
    stride (int or tuple, optional): Stride of the convolution. Default: 1
    padding (int, tuple or str, optional): Padding added to all four sides of
        the input. Default: 0
    padding_mode (str, optional): ``'zeros'``, ``'reflect'``, ``'replicate'``
        or ``'circular'``. Default: ``'zeros'``
    dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
    groups (int, optional): Number of blocked connections from input channels
        to output channels. Default: 1
    bias (bool, optional): If ``True``, adds a learnable bias to the output.
        Default: ``True``
```

- Input/output shapes

  Input: $(N, C_{in}, H_{in}, W_{in})$ or $(C_{in}, H_{in}, W_{in})$

  Output: $(N, C_{out}, H_{out}, W_{out})$ or $(C_{out}, H_{out}, W_{out})$

$$
H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor
$$

$$
W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor
$$

- Attributes

  **weight (Tensor):** the learnable weights of the module, of shape $(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size}[0], \text{kernel\_size}[1])$. The values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_\text{in} \cdot \prod_{i=0}^{1}\text{kernel\_size}[i]}$

  **bias (Tensor):** the learnable bias of the module, of shape $(\text{out\_channels})$. If `bias` is ``True``, the values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_\text{in} \cdot \prod_{i=0}^{1}\text{kernel\_size}[i]}$

- Examples

```python
>>> import torch
>>> import torch.nn as nn
>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
```
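The output formulas above can be checked directly against a real layer. The sketch below (the names `m`, `x`, and `conv_out` are chosen here only for illustration) computes $H_{out}$ and $W_{out}$ for the dilated example and compares them with the shape PyTorch actually produces:

```python
import math

import torch
import torch.nn as nn

# Layer from the last example above: non-square kernel, unequal stride,
# padding, and dilation.
m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
x = torch.randn(20, 16, 50, 100)  # (N, C_in, H_in, W_in)

def conv_out(size, kernel, stride, padding, dilation):
    # Matches the floor formula: (size + 2p - d*(k-1) - 1) / s + 1
    return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

h_out = conv_out(50, 3, 2, 4, 3)   # height uses index [0] of each tuple
w_out = conv_out(100, 5, 1, 2, 1)  # width uses index [1]

y = m(x)
print(y.shape)  # torch.Size([20, 33, 26, 100])
assert y.shape == (20, 33, h_out, w_out)
```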

#### Dropout2d

- Parameters

```python
Args:
    p (float, optional): probability of an element to be zeroed. Default: 0.5
    inplace (bool, optional): If set to ``True``, will do this operation in-place
```

- Input/output shapes

  Input: $(N, C, H, W)$ or $(N, C, L)$

  Output: $(N, C, H, W)$ or $(N, C, L)$ (same shape as input)

- Examples

```python
>>> import torch
>>> import torch.nn as nn
>>> m = nn.Dropout2d(p=0.2)
>>> input = torch.randn(20, 16, 32, 32)
>>> output = m(input)
```
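Unlike element-wise `Dropout`, `Dropout2d` zeroes entire channels, and it is only active in training mode. A small sketch (variable names are illustrative) makes both points visible:

```python
import torch
import torch.nn as nn

m = nn.Dropout2d(p=0.5)
x = torch.ones(1, 8, 4, 4)  # (N, C, H, W); all ones so zeroing is obvious

m.train()
out = m(x)
# Each channel is either zeroed out entirely or scaled by 1 / (1 - p) = 2.0,
# so every per-channel sum is either 0 or 32 (= 16 elements * 2.0).
print(out.sum(dim=(2, 3)))

m.eval()
# In eval mode dropout is the identity.
assert torch.equal(m(x), x)
```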

#### Linear

- Parameters

```python
Args:
    in_features: size of each input sample
    out_features: size of each output sample
    bias: If set to ``False``, the layer will not learn an additive bias.
        Default: ``True``
```

- Input/output shapes

  Input: $(*, H_{in})$ where $*$ means any number of dimensions including **none** and $H_{in} = \text{in\_features}$

  Output: $(*, H_{out})$ where all but the last dimension are the same shape as the input and $H_{out} = \text{out\_features}$

- Attributes

  **weight:** the learnable weights of the module, of shape $(\text{out\_features}, \text{in\_features})$. The values are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k = \frac{1}{\text{in\_features}}$

  **bias:** the learnable bias of the module, of shape $(\text{out\_features})$. If `bias` is ``True``, the values are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k = \frac{1}{\text{in\_features}}$

- Examples

```python
>>> import torch
>>> import torch.nn as nn
>>> m = nn.Linear(10, 2)
>>> input = torch.randn(128, 10)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 2])
```
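The three layers above are commonly chained as conv → dropout → flatten → linear. Below is a minimal sketch of such a module; `TinyNet` and all layer sizes are hypothetical, chosen only to make the flatten arithmetic concrete:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical example wiring Conv2d, Dropout2d, and Linear together."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3)  # 28x28 -> 26x26 (no padding)
        self.drop = nn.Dropout2d(p=0.2)
        # Linear expects the flattened per-sample feature count:
        # 8 channels * 26 * 26 spatial positions.
        self.fc = nn.Linear(8 * 26 * 26, 10)

    def forward(self, x):
        x = self.drop(torch.relu(self.conv(x)))
        x = torch.flatten(x, 1)  # keep the batch dimension
        return self.fc(x)

net = TinyNet()
out = net(torch.randn(4, 1, 28, 28))
print(out.shape)  # torch.Size([4, 10])
```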