The project is to implement the Error Back-Propagation (EBP) training algorithm for a multi-layer perceptron (MLP) 4-2-4 encoder using MATLAB. The structure of the encoder is shown below:
Each unit has a sigmoid activation function. The task of the encoder is to map the following inputs onto outputs:
| Input Pattern | Output Pattern |
| --- | --- |
| 1, 0, 0, 0 | 1, 0, 0, 0 |
| 0, 1, 0, 0 | 0, 1, 0, 0 |
| 0, 0, 1, 0 | 0, 0, 1, 0 |
| 0, 0, 0, 1 | 0, 0, 0, 1 |
Activation functions allow a neural network to learn complex, non-linear mappings between the inputs and the response variables. Several activation functions are in common use, each suited to different kinds of data, such as the sigmoid, tanh, and ReLU. In this case, the sigmoid function is applied.
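Although the project itself is to be written in MATLAB, the sigmoid and its derivative can be sketched in Python as follows (the names `sigmoid` and `sigmoid_deriv` are my own, not from the project):

```python
import math

def sigmoid(a):
    """Logistic sigmoid activation: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a))

def sigmoid_deriv(y):
    """Derivative of the sigmoid, written in terms of its output y = sigmoid(a):
    d/da sigmoid(a) = y * (1 - y). This form is what EBP uses."""
    return y * (1.0 - y)
```

The derivative being expressible from the unit's own output is what makes the sigmoid convenient for back-propagation: no extra quantities need to be stored during the forward pass.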
The training set consists of the four input–output pattern pairs listed above.
The error 𝐸 is defined as the sum of squared differences between target and actual outputs:

𝐸 = ½ Σₖ (𝑡ₖ − 𝑜ₖ)²

where 𝑡ₖ and 𝑜ₖ are the target and actual values of output unit 𝑘, summed over the output units (and over the training patterns).
Let the weights between the input and hidden layer, and between the hidden and output layer, be two matrices 𝑊1 and 𝑊2, of sizes 4 × 2 and 2 × 4 respectively. The values in these two matrices are randomly initialized. Each value in 𝑊1 and 𝑊2 is updated after each forward-and-backward pass through the network.
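As a sketch of the initialization and forward pass (in Python rather than MATLAB; the names `W1`, `W2`, and `forward` are illustrative, not prescribed by the project):

```python
import math
import random

def sigmoid(a):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-a))

random.seed(0)  # fixed seed so the run is repeatable

# Random initial weights with the sizes stated above:
# W1 is 4x2 (input -> hidden), W2 is 2x4 (hidden -> output).
W1 = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(4)]
W2 = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(2)]

def forward(x, W1, W2):
    """One forward pass: input x (length 4) -> hidden h (length 2) -> output o (length 4)."""
    h = [sigmoid(sum(x[i] * W1[i][j] for i in range(4))) for j in range(2)]
    o = [sigmoid(sum(h[j] * W2[j][k] for j in range(2))) for k in range(4)]
    return h, o

h, o = forward([1, 0, 0, 0], W1, W2)
```

In MATLAB the same initialization would typically use `rand` and the forward pass a matrix product, but the shapes and data flow are the same.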
The new weights between the hidden and output layer are calculated by:

𝑊2ⱼₖ ← 𝑊2ⱼₖ + 𝜂 𝛿ₖ ℎⱼ,  with 𝛿ₖ = (𝑡ₖ − 𝑜ₖ) 𝑜ₖ (1 − 𝑜ₖ)

where ℎⱼ is the output of hidden unit 𝑗 and 𝜂 is the learning rate.
The new weights between the input and hidden layer are calculated by:

𝑊1ᵢⱼ ← 𝑊1ᵢⱼ + 𝜂 𝛿ⱼ 𝑥ᵢ,  with 𝛿ⱼ = ℎⱼ (1 − ℎⱼ) Σₖ (𝑡ₖ − 𝑜ₖ) 𝑜ₖ (1 − 𝑜ₖ) 𝑊2ⱼₖ

where 𝑥ᵢ is the value of input unit 𝑖: the output-layer errors are propagated back through 𝑊2 and scaled by the hidden unit's sigmoid derivative.
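The two update rules can be sketched together as a single EBP training step (again in Python for illustration; `ebp_step`, `d_out`, and `d_hid` are hypothetical names — this is a standard sigmoid/squared-error back-propagation step, not the project's reference code):

```python
import math
import random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def forward(x, W1, W2):
    h = [sigmoid(sum(x[i] * W1[i][j] for i in range(4))) for j in range(2)]
    o = [sigmoid(sum(h[j] * W2[j][k] for j in range(2))) for k in range(4)]
    return h, o

def error(t, o):
    """Sum-of-squares error E = 1/2 * sum_k (t_k - o_k)^2."""
    return 0.5 * sum((t[k] - o[k]) ** 2 for k in range(4))

def ebp_step(x, t, W1, W2, eta=0.25):
    """One EBP update for a single pattern, applying both rules in place."""
    h, o = forward(x, W1, W2)
    # Output-layer deltas: delta_k = (t_k - o_k) * o_k * (1 - o_k)
    d_out = [(t[k] - o[k]) * o[k] * (1.0 - o[k]) for k in range(4)]
    # Hidden-layer deltas: delta_j = h_j * (1 - h_j) * sum_k delta_k * W2[j][k]
    d_hid = [h[j] * (1.0 - h[j]) * sum(d_out[k] * W2[j][k] for k in range(4))
             for j in range(2)]
    # Hidden -> output update: W2[j][k] += eta * delta_k * h_j
    for j in range(2):
        for k in range(4):
            W2[j][k] += eta * d_out[k] * h[j]
    # Input -> hidden update: W1[i][j] += eta * delta_j * x_i
    for i in range(4):
        for j in range(2):
            W1[i][j] += eta * d_hid[j] * x[i]

random.seed(1)
W1 = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(4)]
W2 = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(2)]
x = t = [1, 0, 0, 0]
e_before = error(t, forward(x, W1, W2)[1])
ebp_step(x, t, W1, W2)
e_after = error(t, forward(x, W1, W2)[1])
```

A single step with a modest learning rate should reduce the error on the pattern it was computed from, which gives a quick sanity check of the implementation.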
A bias is a constant term that helps the model fit the given data better. A bias unit is an ‘extra’ neuron with no incoming connections and a constant output, added to each layer before the output layer.
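One common way to implement the bias (an assumption here — the project text does not fix the implementation) is to append a constant 1 to the layer's input, so the bias is learned as an ordinary row of weights:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Augmenting the input with a constant 1 lets the bias be learned like any
# other weight: W1 becomes 5x2, and its last row holds the hidden-layer biases.
x = [1, 0, 0, 0]
x_aug = x + [1.0]  # 5-dimensional augmented input
W1_aug = [[0.1, -0.2],
          [0.0,  0.0],
          [0.0,  0.0],
          [0.0,  0.0],
          [0.3,  0.4]]  # illustrative values; last row is the bias row
h = [sigmoid(sum(x_aug[i] * W1_aug[i][j] for i in range(5))) for j in range(2)]
```

With this trick the EBP update rules need no special case for the bias: it is updated exactly like the other weights, with its "input" fixed at 1.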
The MLP parameters are below: