Neural Networks

The EBP Training Algorithm for an MLP Encoder

The project is to implement the Error Back-Propagation (EBP) training algorithm for a multi-layer perceptron (MLP) 4-2-4 encoder using MATLAB. Intuitively, the structure of the encoder is as follows:

  • An input layer with 4 units.
  • A single hidden layer with 2 units.
  • An output layer with 4 units.

Each unit has a sigmoid activation function. The task of the encoder is to map the following inputs onto outputs:

Input Pattern    Output Pattern
1, 0, 0, 0       1, 0, 0, 0
0, 1, 0, 0       0, 1, 0, 0
0, 0, 1, 0       0, 0, 1, 0
0, 0, 0, 1       0, 0, 0, 1

Activation Functions

Activation functions allow a neural network to learn complicated, non-linear functional mappings between its inputs and the response variables. Several activation functions are in common use, such as sigmoid, tanh, and ReLU, each suited to different kinds of data. In this project the sigmoid function is applied.
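As an illustration, a minimal MATLAB sketch of the sigmoid and its derivative could look like the following (the function names are illustrative, not necessarily those used in the repository):

```matlab
function y = sigmoid(x)
    % Maps any real input element-wise into the range (0, 1).
    y = 1 ./ (1 + exp(-x));
end

function d = sigmoid_derivative(y)
    % Derivative expressed in terms of the sigmoid output y = sigmoid(x);
    % this form is convenient during back-propagation.
    d = y .* (1 - y);
end
```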

Total Error Calculation

A training set consists of

  • A set of input vectors 𝑖1, ..., 𝑖N, where the dimension of 𝑖n is equal to the number of MLP input units.
  • For each 𝑛, a target vector 𝑡n, where the dimension of 𝑡n is equal to the number of output units.

The error 𝐸 is defined by:
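Assuming the usual sum-of-squared-errors definition, 𝐸 = ½ Σn ‖𝑜n − 𝑡n‖², where 𝑜n is the MLP output for input 𝑖n, a minimal MATLAB sketch of the calculation (with O and T as N × 4 matrices of outputs and targets) is:

```matlab
% Minimal sketch, assuming O and T are N-by-4 matrices holding the MLP
% outputs and the target vectors for the N training patterns.
E = 0.5 * sum(sum((O - T).^2));   % total sum-of-squared errors over the training set
```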

Weights Modification

Let the weights between the input and hidden layer, and between the hidden and output layer, be two matrices 𝑊1 and 𝑊2 of sizes 4 × 2 and 2 × 4, respectively. The values in these two matrices are generated automatically (randomly initialised). Every value in 𝑊1 and 𝑊2 is updated after each iteration of forward propagation.

Update W2 (the weights between hidden and output layer)

The new weights between hidden and output layer are calculated by:
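Assuming the standard gradient-descent rule 𝑊2 ← 𝑊2 − η ∂𝐸/∂𝑊2, a minimal MATLAB sketch of this update for a single pattern (variable names are illustrative) is:

```matlab
% h is the 1x2 hidden-layer output, o the 1x4 network output, t the 1x4 target,
% and eta the learning rate.
delta_out = (o - t) .* o .* (1 - o);   % output-layer error term (uses the sigmoid derivative)
W2 = W2 - eta * (h' * delta_out);      % gradient-descent step on the 2x4 matrix W2
```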

Update W1 (the weights between input and hidden layer)

The new weights between input and hidden layer are calculated by:
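Likewise, assuming the rule 𝑊1 ← 𝑊1 − η ∂𝐸/∂𝑊1 and continuing the notation of the sketch above, the update could look like:

```matlab
% x is the 1x4 input vector. W2 here must be the pre-update matrix, i.e. both
% error terms are computed before either weight matrix is changed.
delta_hidden = (delta_out * W2') .* h .* (1 - h);   % error term back-propagated to the hidden layer
W1 = W1 - eta * (x' * delta_hidden);                % gradient-descent step on the 4x2 matrix W1
```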

An Improved EBP Training Algorithm

A bias is a constant term that helps the model fit the given data better. A bias unit is an ‘extra’ neuron, with no incoming connections, added to each layer preceding the output layer.
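A minimal sketch of the forward pass with bias units, assuming the bias is implemented by appending a constant 1 to each layer's input (the names W1b and W2b are illustrative), is:

```matlab
% W1b (5x2) and W2b (3x4) carry the bias weights in their last rows.
h = 1 ./ (1 + exp(-[x, 1] * W1b));   % hidden-layer output with an input-layer bias unit
o = 1 ./ (1 + exp(-[h, 1] * W2b));   % network output with a hidden-layer bias unit
```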

Evaluation: Bias vs Non-bias

The MLP parameters are as follows:

  • Learning rate: 6.0
  • Number of iterations: 1000
  • The initial weights of the two systems are equal
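A minimal sketch of how such a comparison run could be set up, assuming the update rules sketched above (variable names and the random initialisation are illustrative):

```matlab
X = eye(4);  T = eye(4);                  % the four 4-2-4 encoder patterns
eta = 6.0;  n_iter = 1000;                % learning rate and number of iterations
rng(0);                                   % fixed seed so both systems start from the same weights
W1 = rand(4, 2) - 0.5;                    % input-to-hidden weights
W2 = rand(2, 4) - 0.5;                    % hidden-to-output weights
for it = 1:n_iter
    for n = 1:4
        x = X(n, :);  t = T(n, :);
        h = 1 ./ (1 + exp(-x * W1));      % forward pass: hidden layer
        o = 1 ./ (1 + exp(-h * W2));      % forward pass: output layer
        d_out = (o - t) .* o .* (1 - o);            % output-layer error term
        d_hid = (d_out * W2') .* h .* (1 - h);      % hidden-layer error term
        W2 = W2 - eta * (h' * d_out);     % weight updates
        W1 = W1 - eta * (x' * d_hid);
    end
end
```

The bias system would run the same loop with a constant 1 appended to x and h, and with the enlarged W1b and W2b matrices in place of W1 and W2.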
