

Firewall redirect and masquerade.
Bitch you thought
“You mean if I delete data, then it’s gone? No matter what platform?”
This is the fundamental problem with LLMs and all the hype.
People with technology experience can understand the limitations of the tech, and will be more skeptical of the output from them.
But your average person?
If they go to Google and ask if vaccines cause autism, and Google’s AI search slop trough contains an answer they like, accurate or not, there will be exactly zero second-guessing. I mean, this is supposed to be a PhD-level person, and it was right about the other softball questions they asked, like what color the sky is. Surely it’s right about this too, right?
It’s easy to post on a forum and say so.
Maybe you really are asking AI questions and then researching whether or not the answers are accurate.
Perhaps you really are the world’s most perfect person.
But even if that’s true, which I very seriously doubt, then you’re going to be the extreme minority. People will ask AI a question, and if they like the answers given, they’ll look no further. If they don’t like the answers given, they’ll ask the AI with different wording until they get the answer they want.
It’s a single data point, nothing more, nothing less. But that single data point is evidence that they’re using LLMs for code generation.
Time will tell if this is a molehill or a mountain. When it comes to data privacy, given that it just takes one mistake and my data can be compromised, I’m going to be picky about who I park my data with.
I’m not necessarily immediately looking to jump ship, but I consider it a red flag that they’re using developer tools centered around using AI to generate code.
There it is. The bald-faced lie.
“I don’t blindly trust AI, I just ask it to summarize something, read the output, then read the source article too. Just to be sure the AI summarized it properly.”
Nobody is doing double the work. If you ask AI a question, it only gets a vibe check at best.
If you want to trade accuracy for speed, that’s your prerogative.
AI has its uses. Transcribing subtitles, searching images by description, things like that. But too many times, I’ve seen AI summaries that, if you actually read the article the AI cited, turn out to be flatly wrong.
What’s the point of a summary that doesn’t actually summarize the facts accurately?
Sure, but with all the mistakes I see LLMs making in places where professionals should be quality-checking their work (lawyers, judges, internal company email summaries, etc.), it gives me pause, considering this is a privacy- and security-focused company.
It’s one thing for AI to hallucinate cases, and another entirely to forget there’s a difference between = and == when the AI bulk-generates code. One slip-up and my security and privacy could be compromised.
You’re welcome to buy into the AI hype. I remember the dot-com bubble.
There’s evidence in their GitHub repo that they’re now using LLMs to write their tools.
It’s making me reconsider using them.
I feel like this is missing a big point of the article.
The vulnerability that the xz backdoor attempt revealed was the developers. The elephant in the room is that for someone capable of writing and maintaining a program so important to modern technical infrastructure, we’re making sure to hang them out to dry. When they burn out because their “hobby” becomes too emotionally draining (whether from a deliberate campaign to wear them down or entirely naturally), someone will be waiting to take control. Who can you trust? Here, we see someone who attempted (and nearly succeeded in) a multi-year effort to establish themselves as a trusted member of the development community while faking it all along. With the advent of LLMs, it’s going to be even harder to tell whether someone is trustworthy or just a long-running LLM deception campaign.
Maybe we should treat the people we rely on for these tools a little better, given how much they contribute to modern tech infrastructure?
And I’ll point out that’s less aimed at the individuals who use tech, and more at the multi-billion-dollar multi-national tech companies that make money hand over fist using the work others donate.