Simplest multilayer perceptron*.
A neural network can be made with only one hidden layer and still, as is mathematically proven (the universal approximation theorem), approximate any continuous function — it's just not as easily trained, and it can take a much higher number of neurons.
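That one-hidden-layer claim is easy to see in practice. Here's a minimal sketch (my own toy setup, not from anyone in the thread — the width, learning rate, and target function f(x) = x² are all arbitrary choices): a single tanh hidden layer trained with plain full-batch gradient descent and hand-written backprop, in numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: approximate f(x) = x^2 on [-1, 1]
X = np.linspace(-1, 1, 200).reshape(-1, 1)   # inputs, shape (200, 1)
Y = X ** 2                                   # targets

H = 32                                       # hidden width (arbitrary)
W1 = rng.normal(0, 1.0, (1, H))              # input -> hidden weights
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))              # hidden -> output weights
b2 = np.zeros(1)

lr = 0.01
for step in range(20000):
    # forward pass: input -> tanh hidden layer -> linear output
    A = np.tanh(X @ W1 + b1)                 # hidden activations
    P = A @ W2 + b2                          # predictions

    # backward pass for mean squared error
    dP = 2 * (P - Y) / len(X)
    dW2 = A.T @ dP
    db2 = dP.sum(axis=0)
    dA = dP @ W2.T
    dZ = dA * (1 - A ** 2)                   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dZ
    db1 = dZ.sum(axis=0)

    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"final MSE: {mse:.5f}")
```

With only one hidden layer the fit gets quite close on this easy target; the theorem says the same holds for any continuous function if you allow enough hidden neurons, it just says nothing about how hard training gets.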
The one shown is actually a single-layer network: input, FC hidden layer, output. Edit: can't count to fucking two, can I now. You are right.
It’s good. Thanks for correcting yourself. :3
The graphs struck me as weird when I was learning, since I expected the input and output nodes to count as neuron layers as well… which they are, but not in the same way. So I frequently miscounted, sleep-deprived in the back of the classroom. ^^;;