• FauxLiving@lemmy.world
    3 days ago

    Hell, I’ve been training models and using ML directly for a decade and I barely know what’s going on in there.

    Outside of low-dimensional toy models, I don’t think we’re capable of understanding what’s happening. Even in academia, work on reliably interpreting trained networks is still in its infancy.

    • sobchak@programming.dev
      3 days ago

      I remember studying “Probably Approximately Correct” (PAC) learning and such, and it was a pretty cool way of building axioms, theorems, and proofs to bound and reason about ML models. To my knowledge, there isn’t really anything like it for large networks; maybe someday.
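      For readers who haven’t seen it: the classic PAC result for a finite hypothesis class in the realizable case gives a concrete sample-size guarantee, m ≥ (1/ε)(ln|H| + ln(1/δ)). A minimal sketch of that bound (the function name and the example numbers are just illustrations, not from the comment above):

      ```python
      # Standard PAC sample-complexity bound for a *finite* hypothesis class H,
      # realizable case: with m >= (1/eps) * (ln|H| + ln(1/delta)) examples,
      # any hypothesis consistent with the data has true error <= eps
      # with probability >= 1 - delta.
      import math

      def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
          """Smallest m satisfying the finite-class realizable PAC bound."""
          return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

      # e.g. |H| = 2**20 hypotheses, want error <= 0.1 with 95% confidence:
      print(pac_sample_bound(2**20, epsilon=0.1, delta=0.05))  # → 169
      ```

      The point the comment makes is exactly that this kind of clean, worst-case bound exists for small, well-structured hypothesis classes, but nothing comparably tight is known for modern deep networks.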

      • Poik@pawb.social
        2 days ago

        … 1957

        Perceptrons. The math dates back to the 1940s, but ’57 marks the first artificial neural network.

        Also, 35 years is infancy in science, or at least its teenage years, as we see from deep learning’s growing pains right now. Visualizations of neural network responses, and reverse-engineering networks to understand how they tick, predate 2010 at least. Deep Dream was actually built on the idea of network-inversion visualizations, and that’s ten years old now.
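        The core trick behind Deep Dream and those inversion visualizations is activation maximization: hold the weights fixed and gradient-ascend the *input* until a chosen unit fires strongly. A toy sketch of that idea (the tiny random one-layer network, sizes, and step count here are all made up for illustration; real Deep Dream does this on convnet feature maps):

        ```python
        # Activation maximization: ascend the input x to maximize tanh(W @ x)[unit].
        # One random linear+tanh layer stands in for a trained network.
        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(8, 16))      # 8 hidden units, 16-dim "image" input

        def activation(x, unit):
            return np.tanh(W @ x)[unit]

        def ascend(x, unit, lr=0.1, steps=100):
            for _ in range(steps):
                pre = W @ x
                # analytic gradient: d tanh(w·x)/dx = (1 - tanh(w·x)^2) * w
                grad = (1.0 - np.tanh(pre[unit]) ** 2) * W[unit]
                x = x + lr * grad         # ascend, not descend
            return x

        x0 = rng.normal(size=16) * 0.01   # start from near-zero noise
        x1 = ascend(x0, unit=3)
        print(activation(x0, 3), "->", activation(x1, 3))  # unit 3 driven toward 1
        ```

        Run on an actual trained network, the optimized input becomes a picture of “what this unit responds to,” which is where the dream-like imagery comes from.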