Figure 1.
A two-layer feed-forward multilayer perceptron (MLP). The lines represent the two layers of free parameters in the network: the weights $w_{ij}^{(1)}$ and $w_{jk}^{(2)}$ and the biases $b_j^{(1)}$ and $b_k^{(2)}$. The $I$ input neurons $x_i$ feed into the $J$ hidden neurons $h_j$, which in turn form the input to the $K$ output units $z_k$. An additional input with constant value 1 feeds into the hidden and output layers; it is associated with the biases. Information flows only from the input neurons to the output neurons (feed-forward).
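The forward pass through the network in Figure 1 can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tanh hidden activation and the linear output layer are assumptions, since the figure specifies the connectivity but not the activation functions.

```python
import math
import random

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass through a two-layer feed-forward MLP.

    x:  list of I inputs x_i
    W1: J x I matrix of first-layer weights w_ij^(1); b1: J biases b_j^(1)
    W2: K x J matrix of second-layer weights w_jk^(2); b2: K biases b_k^(2)
    """
    # Hidden layer: h_j = tanh( sum_i w_ij^(1) x_i + b_j^(1) )
    # (tanh is an assumed activation; the figure does not fix it)
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output layer: z_k = sum_j w_jk^(2) h_j + b_k^(2)  (linear output assumed)
    z = [sum(w * hj for w, hj in zip(row, h)) + b
         for row, b in zip(W2, b2)]
    return z

# Example with I = 3 inputs, J = 2 hidden neurons, K = 1 output
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [[random.uniform(-1, 1) for _ in range(2)]]
b2 = [0.0]
print(mlp_forward([1.0, 0.5, -0.5], W1, b1, W2, b2))
```

The bias inputs of value 1 in the figure correspond to the `b1` and `b2` terms here: feeding a constant 1 through an extra weight is equivalent to adding a bias.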
