Figure 1.
A two-layer feed-forward Multilayer Perceptron (MLP). The lines represent the two layers of free parameters in the network, the weights $w_{ij}^{(1)}$ and $w_{jk}^{(2)}$ and the biases $b_{j}^{(1)}$ and $b_{k}^{(2)}$. The $I$ input neurons $x_i$ feed into the $J$ hidden neurons $h_j$, which in turn form the input to the $K$ output units $z_k$. An additional input of constant value 1 feeds into the hidden and output layers; its weights are the biases. Information flows only from the input to the output neurons (feed-forward).
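The forward pass of the network in Figure 1 can be sketched in a few lines. This is a minimal illustration only; the caption does not specify the activation functions, so a sigmoid hidden layer and a linear output layer are assumed here.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a two-layer MLP: I inputs x_i, J hidden
    neurons h_j, K output units z_k (notation as in Figure 1).
    Activations are assumptions, not given in the caption."""
    # Hidden layer: h_j = sigma( sum_i w_ij^(1) x_i + b_j^(1) )
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    # Output layer: z_k = sum_j w_jk^(2) h_j + b_k^(2)
    return W2 @ h + b2

# Example with hypothetical sizes I=4, J=3, K=2.
rng = np.random.default_rng(0)
I, J, K = 4, 3, 2
x = rng.normal(size=I)
W1, b1 = rng.normal(size=(J, I)), np.zeros(J)
W2, b2 = rng.normal(size=(K, J)), np.zeros(K)
z = mlp_forward(x, W1, b1, W2, b2)
print(z.shape)  # one output value per output unit: (K,)
```

Because information flows only forward, the whole network is just this composition of two affine maps with an elementwise nonlinearity between them.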