OK but what actually is this image?
Basic model of a neural net. The post is implying that you’re arguing with bots.
https://en.wikipedia.org/wiki/Neural_network_(machine_learning)
Wouldn’t a bot recognize this though?
A bot might, but this post is pointing out how common it is for people who consider themselves AI experts not to recognize this diagram, which is basically part of AI 101
They’re not saying that the bots are asking what the image is; it’s the users (who may or may not be bots) selling themselves as AI/ML experts who don’t recognize it.
https://youtu.be/7YqEZrP5t1g
Would you recognize it if someone made a block diagram of your brain?
Maybe, but it also might be suggesting that people are not fundamentally different.
Illustration of a neural network.
The simplest neural network (simplified). You input a set of properties (first column). Then you take a number of different weighted sums of all of them, each with DIFFERENT weights (first set of lines). Then you apply a non-linearity to each result, e.g. 0 if negative, keep it the same otherwise (not shown).
You repeat this, with potentially different numbers of outputs, any number of times.
Then you do this once more, but so that your number of outputs is the dimension of your desired output, e.g. 2 if you want the sum of the inputs and their product computed (which is a fun exercise!). You may want to skip the non-linearity here, or do something special™
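To make that concrete, here's a minimal sketch of that forward pass in Python/NumPy. The layer sizes, weights, and numbers are placeholders I made up (nothing here is trained), just to show "weighted sums, non-linearity, repeat, then an output layer of the right size":

```python
# Minimal sketch of the forward pass described above; weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

x = np.array([1.0, 2.0, 3.0, 4.0])   # input: four properties (first column of the diagram)

W1 = rng.normal(size=(6, 4))          # each row is one hidden neuron's own set of weights
b1 = rng.normal(size=6)
h = np.maximum(0.0, W1 @ x + b1)      # weighted sums, then the non-linearity (0 if negative)

W2 = rng.normal(size=(2, 6))          # final layer: 2 outputs, e.g. aiming at sum and product
b2 = rng.normal(size=2)
y = W2 @ h + b2                       # no non-linearity on the output layer here

print(y)                              # garbage until the weights are actually trained
```

Training is the part that turns those random weights into ones that actually compute the sum and product; the diagram (and this sketch) only shows the wiring.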
Simplest multilayer perceptron*.
A neural network can be made with only one hidden layer and still (it’s mathematically proven, the universal approximation theorem) approximate essentially any function — it’s just not as easy to train and may need a much higher number of neurons.
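For the curious, the "mathematically proven" bit is presumably the universal approximation theorem; one common form of it (Cybenko/Hornik style) roughly says:

```latex
% One common form of the universal approximation theorem.
% f is any continuous function on a compact set K \subset \mathbb{R}^n,
% \sigma is a suitable non-constant activation, \varepsilon is any tolerance.
\[
F(x) \;=\; \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right),
\qquad
\sup_{x \in K} \left| F(x) - f(x) \right| \;<\; \varepsilon
\]
% i.e. a single hidden layer of N neurons can get within \varepsilon of f everywhere on K,
% for some (possibly very large) N. The theorem promises nothing about how easy it is
% to find those weights by training.
```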
The one shown is actually single layer. Input, FC hidden layer, output. Edit: can’t count to fucking two, can I now. You are right.
It’s good. Thanks for correcting yourself. :3
These graphs struck me as weird when I was learning, since I expected the input and output nodes to be neuron layers as well… which they are, but not in the same way. So I frequently miscounted, sleep-deprived in the back of the classroom. ^^;;
Multilayer perceptron
To elaborate: the dots are the simulated neurons and the lines are the links between them. The pictured neural net has four inputs (on the left) leading to the first layer, where each neuron makes a decision based on the input it receives and a predefined threshold, then passes its answer on to the second layer, which connects to the two outputs on the right
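If it helps, here's a toy sketch of the "decision" one of those dots makes, written as a classic threshold neuron (the function name and numbers below are made up, and modern nets usually use smoother activations and learned weights rather than a fixed threshold):

```python
# One simulated neuron: weighted sum of its inputs, compared against a predefined threshold.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0   # fire or don't, then pass the answer on

# hypothetical numbers, just to show the flow from the four inputs into one hidden neuron
print(neuron([0.5, 1.0, 0.0, 2.0], [0.2, -0.4, 0.7, 0.1], threshold=0.0))
```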
Logic.
Many player cat’s cradle