• 2 Posts
  • 116 Comments
Joined 1 year ago
Cake day: September 2nd, 2024

  • I disagree that writing by hand magically improves information absorption/retention. Source: I did it through all of school and all of uni. Being half-asleep, pondering something completely irrelevant, and having the course material fly completely over my head while I wrote it down was the norm most of the time. Lecturers dictating their material at high speed didn’t help either. Maybe there is some temporary novelty effect after you switch from one way of writing to another, but I wouldn’t expect that to last long.




  • That it’s good at following requirements and conforming, and being a mechanical, logical robot, because that’s what computers are like and that’s how it is in sci-fi.

    They’re good at that because they are ANNs.

    In reality, it seems like that’s what they’re worst at. They’re great at seeing patterns and creating ideas, but terrible at following instructions or staying on task. As soon as something is a bit bigger than they can track context for, they’ll get “creative,” and if they see a pattern they can complete, they will, even if it’s not correct. I’ve had Copilot start writing poetry in my code because there was a string it could complete.

    Get it to make a pretty-looking static web page with fancy CSS where it gets to make all the decisions? It does it fast.

    Give it an actual, specific programming task in a full-sized application with multiple interconnected pieces and strict requirements? It confidently breaks most of the requirements and spits out garbage. If it can’t hold the entire thing in its context, or if there are a lot of strict rules to follow, it’ll struggle and forget what it’s doing or why. Like a particularly bad human programmer would.

    This is why AI is automating art and music and writing and not more mundane/logical/engineering tasks. Great at being creative and balls at following instructions for more than a few steps.

    My experience is the opposite.




  • It depends. If it’s difficult to maintain because it’s terrible, careless spaghetti written by someone who didn’t care enough, then it’s definitely not a sign of intelligence or power level. But if it’s difficult to maintain because the rest of the team can’t wrap their heads around the type-level metaprogramming or the eDSL you came up with, then that’s a different case.




  • hisao@ani.social to Technology@lemmy.world · Why LLMs can’t really build software · 9 days ago

    Okay, to be fair, my knowledge of the current culture in the industry is very limited. It’s mostly an impression formed by online conversations, not limited to Lemmy. On the last project I worked on, it was forbidden to use public LLMs because of intellectual property (and maybe even GDPR) concerns. We did have a local, scope-limited LLM integration that was allowed, but literally one person across multiple departments used it: a mid-level frontend dev, and only for autocomplete. The backenders wouldn’t even consider it.


  • You’re right, of course, and engineering as a whole is first in line for AI. Everything that has strict specs, standards, and invariants will benefit massively from it, and conforming is what AI inherently excels at, as opposed to humans. Complaints like the one this subthread started with are usually a matter of people being bad at writing requirements rather than AI being bad at following them. If you approach requirements like in actual engineering fields, you will get corresponding results, while humans will struggle to fully conform, or will even try to find tricks and loopholes in your requirements to sidestep them and assert their will while technically remaining in “barely legal” territory.


  • I saw an LLM override the casting operator in C#. An evangelist would say, “Genius! What a novel solution!” I said, “Nobody at this company is going to know what this code is doing 6 months from now.”

    Before LLMs, people often said this about people smarter than the rest of the group: “Yeah, he was too smart and overengineered solutions that no one could understand after he left.” This is, btw, one of the reasons I’ve come to dislike programming as a field over the years, and why I happily delegate the coding part to AI nowadays. This field celebrates conformism, and that’s why humans shouldn’t write code manually. A perfect field to automate away via LLMs. (A sketch of the kind of cast-operator override in question is below.)
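
    For reference, a user-defined conversion operator in C# looks roughly like this. This is an invented Money example of the general technique, not the actual code from that codebase:

        // Hypothetical illustration: a user-defined implicit conversion in C#.
        public class Money
        {
            public decimal Amount { get; }
            public Money(decimal amount) => Amount = amount;

            // After this, `decimal d = someMoney;` compiles silently.
            // Convenient, but the conversion is invisible at the call site,
            // which is exactly the maintenance complaint above.
            public static implicit operator decimal(Money m) => m.Amount;
        }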


  • If my coworkers do, they’re very quiet about it.

    Gee, guess why. Given the current culture of hate and ostracism, I would never outright say IRL that I like it or use it a lot. I would say something like, “Yeah, I think it can sometimes be useful when used carefully, and I sometimes use it too,” while in reality it writes 95% of my code under my micromanagement.


  • deciding what to do, and maybe 50% of the time how to do it, you’re just not executing the lowest level anymore

    And that’s exactly what I want. I don’t get why people want more. Having more means you have less and less control or influence over the result. What I want is for other fields to become like programming is now, so that you can micromanage every step and have great control over the result.






  • I beat Diablo 1 recently, which I already wrote about in another thread. After that I decided to try Atlyss. Too early to give proper feedback, but I must say I’m enjoying it so far, even though I find it a bit strange to have an MMORPG gameplay loop in a singleplayer game. It does have lobby-based multiplayer, though I haven’t tried that yet. The art direction in general is great, but the models/textures are extremely lowpoly/lofi (with texture filtering on top of that; going for a faithful PSX look, I guess). The characters in particular look great, and there is a full-featured furry character creator.


  • Okay, I see. So the killswitch implementation might be imperfect, depending on the VPN, and there are reports of Surfshark leaking the IP when torrenting under the killswitch. I guess they might not have a “permanent killswitch” option like ProtonVPN, and that’s why it happened: if the torrent app launches before the VPN, or the VPN closes before the torrent app, the killswitch gets turned off together with the VPN app and some traffic can leak. This is impossible under a “permanent killswitch.” So to rely on a killswitch, the first thing to check is whether the internet is still accessible after closing the VPN client app; if not, then it’s a good killswitch. But with qBittorrent it’s always a good idea to use that setting for extra safety. It might not be present in most other torrent apps, though, and doing the same manually using iptables or whatever might be tricky and error-prone. (A tiny sketch of the launch-order check is below.)
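
    To make the launch-order concern concrete, here’s a minimal sketch in C# (assuming .NET 6+, and that the VPN creates an adapter whose name contains “tun”; both are assumptions, so adjust to whatever your client actually creates) that refuses to proceed unless the VPN interface is up:

        using System;
        using System.Linq;
        using System.Net.NetworkInformation;

        class VpnUpCheck
        {
            static void Main()
            {
                // "tun" is an assumption; print all adapter names to find yours.
                bool vpnUp = NetworkInterface.GetAllNetworkInterfaces()
                    .Any(nic => nic.Name.Contains("tun", StringComparison.OrdinalIgnoreCase)
                                && nic.OperationalStatus == OperationalStatus.Up);

                Console.WriteLine(vpnUp
                    ? "VPN interface is up."
                    : "VPN interface is down -- do not start the torrent client.");
                Environment.Exit(vpnUp ? 0 : 1);
            }
        }

    This only covers the “torrent app launches before the VPN” case; it does nothing about the VPN dropping mid-session, which is what the killswitch itself is for.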