• 1 Post
  • 27 Comments
Joined 1 year ago
Cake day: March 19th, 2024

  • AmbiguousProps@lemmy.today to Lemmy Shitpost@lemmy.world · *Permanently Deleted* · 14 points · edited · 7 days ago

    It won’t be an improvement, just another way for people to fall in line and not think for themselves. LLMs don’t know anything - they can (and do) confidently tell users something very incorrect, and only correct themselves after a user points it out. What about the users who don’t push back and just trust what the slop machine is telling them?

  • His mood shifted the next day when he found Replit “was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test.”

    LLMs cannot intentionally “lie” or be “deceptive” — they aren’t alive. They can, however, be confidently wrong (and very often, they are).

    “I know vibe coding is fluid and new, and yes, despite Replit itself telling me rolling back wouldn’t work here – it did. But you can’t overwrite a production database. And you can’t not separate preview and staging and production cleanly. You just can’t.”

    Maybe on this particular service you can’t, but if you used this to vibe code something hosted anywhere else? You absolutely fucking can.

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    lol. lmao, even.