• Gladaed@feddit.org · 3 days ago

      The simplest neural network (simplified). You input a set of properties (first column). Then you take several weighted sums of all of them, each with DIFFERENT weights (first set of lines). Then you apply a non-linearity to each result, e.g. set it to 0 if negative, keep it the same otherwise (not shown).

      You repeat this, with potentially different numbers of outputs, any number of times.

      Then do this one last time, but so that your number of outputs is the dimension of your desired output, e.g. 2 if you want the sum of the inputs and their product computed (which is a fun exercise!). You may want to skip the non-linearity here, or do something special™.
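The recipe above can be sketched in a few lines of plain Python. The weights here are made up purely for illustration; a real network would learn them by training:

```python
# Minimal sketch of the forward pass described above.
# All weights/biases are made-up example numbers, not trained values.

def relu(s):
    # the non-linearity: 0 if negative, keep the same otherwise
    return s if s > 0 else 0.0

def layer(inputs, weights, biases, nonlinear=True):
    # one "set of lines": each output is a weighted sum of ALL inputs
    outs = []
    for w_row, b in zip(weights, biases):
        s = sum(w * x for w, x in zip(w_row, inputs)) + b
        outs.append(relu(s) if nonlinear else s)
    return outs

# 4 input properties -> 3 hidden units -> 2 outputs
x = [1.0, -2.0, 0.5, 3.0]
W1 = [[0.2, -0.1, 0.4, 0.0],
      [0.5, 0.3, -0.2, 0.1],
      [-0.3, 0.2, 0.1, 0.6]]
b1 = [0.0, 0.1, -0.2]
h = layer(x, W1, b1)                   # hidden layer, with the non-linearity

W2 = [[1.0, -1.0, 0.5],
      [0.2, 0.4, -0.6]]
b2 = [0.0, 0.0]
y = layer(h, W2, b2, nonlinear=False)  # final layer: skip the non-linearity
print(y)  # two outputs, as desired
```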

      • Poik@pawb.social · 2 days ago

        Simplest multilayer perceptron*.

        A neural network with only one hidden layer can still (mathematically proven, via the universal approximation theorem) approximate any continuous function; it's just not as easily trained, and may need a much larger number of neurons.
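For a concrete taste of the one-hidden-layer point: abs(x) is a non-linear function, and it can be represented *exactly* by a single hidden layer of two ReLU neurons, since abs(x) = relu(x) + relu(-x). A minimal sketch (hand-picked weights, not trained):

```python
# A single-hidden-layer ReLU network that computes abs(x) exactly.

def relu(s):
    return s if s > 0 else 0.0

def one_hidden_layer_net(x):
    # hidden layer: two neurons with weights +1 and -1, no biases
    h = [relu(1.0 * x), relu(-1.0 * x)]
    # output layer: weighted sum with weights [1, 1], no non-linearity
    return 1.0 * h[0] + 1.0 * h[1]

print(one_hidden_layer_net(-3.0))  # 3.0
```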

        • Gladaed@feddit.org · 2 days ago (edited)

          The one shown is actually single layer: input, FC hidden layer, output. Edit: can’t count to fucking two, can I now. You are right.

          • Poik@pawb.social · 2 days ago (edited)

            It’s good. Thanks for correcting yourself. :3

            The graphs struck me as weird when learning, as I expected the input and output nodes to be neuron layers as well… Which they are, but not in the same way. So I frequently miscounted myself while learning, sleep-deprived in the back of the classroom. ^^;;

    • Zwiebel@feddit.org · 3 days ago (edited)

      To elaborate: the dots are the simulated neurons, and the lines are the links between them. The pictured neural net has four inputs (on the left) leading to the first layer, where each neuron makes a decision based on the input it receives and a predefined threshold, and then passes its answer on to the second layer, which then connects to the two outputs on the right.
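One such threshold neuron is tiny in code. The weights and threshold below are made-up illustration values (training is what actually picks them):

```python
# One simulated neuron: weighted sum of its inputs compared
# against a predefined threshold. Fires (1) or doesn't (0).

def neuron(inputs, weights, threshold):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# four inputs, like the pictured net
print(neuron([0.9, 0.1, 0.4, 0.7], [0.5, -0.2, 0.3, 0.4], threshold=0.5))  # 1
print(neuron([0.1, 0.9, 0.0, 0.1], [0.5, -0.2, 0.3, 0.4], threshold=0.5))  # 0
```

(Modern nets mostly replace the hard threshold with a smooth or piecewise-linear non-linearity like ReLU, since hard thresholds can't be trained by gradient descent, but the picture is the same.)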