When I tried it in the past, I kinda didn’t take it seriously because everything was confined to its instance, but now there’s full-featured global search and proper federation everywhere? Wow, I thought I heard there were some technical obstacles making it very unlikely, but now it’s just there and works great! I asked ChatGPT and it says this feature was added 5 years ago! Really? I’m not sure how I didn’t notice this sooner. Was it really there for so long? With flairs showing the original instance a video comes from and everything?
Why do people bring this up every fucking time?
I asked Gemini, and my browser crashed, so, idk, man I guess it’s knowledge too powerful for human minds to contain.
“I used chatgpt”
Because they know it’s not accurate and explicitly mention it so you know where this information comes from.
Then why post it at all?
It makes idiots whine
Because they’d still like to know? It’s generally expected to do some research on your own before asking other people, and to inform them of what you’ve already tried.
AI seems to think it’s always right but in reality it is seldom correct.
Sounds like every human it’s been trained on
No, it sounds like a mindless statistics machine because that’s what it is. Even stupid people have reasons for saying and doing things.
Asking ChatGPT isn’t research.
ChatGPT is a moderately useful tertiary source. Quoting Wikipedia isn’t research, but using Wikipedia to find primary sources and reading those is a good faith effort. Likewise, asking ChatGPT in and of itself isn’t research, but it can be a valid research aid if you use it to find relevant primary sources.
At least some editor will usually make sure Wikipedia is correct. There’s nobody ensuring chatGPT is correct.
Just using the “information” it regurgitates isn’t very useful, which is why I didn’t recommend doing that. Whether the information summarized by Wikipedia and ChatGPT is accurate really isn’t important, you use those tools to find primary sources.
I’d argue that it’s very important, especially since more and more people are using it. Wikipedia is generally correct, and people, myself included, edit incorrect things. ChatGPT is a black box and there’s no user feedback. It’s also stupid to waste resources running an inefficient LLM for something a regular search and a few minutes of time, along with like a bite of an apple’s worth of energy, could easily handle. After all that, you’re going to need to check all those sources ChatGPT used anyway, so how much time is it really saving you? At least with Wikipedia I know other people have looked at the same things I’m looking at, and a small percentage of those people will actually correct errors.
Many people aren’t using it as a valid research aid like you point out; they’re just pasting directly out of it onto the internet. This is the use case I dislike the most.
I think it’s because it causes all of Lemmy to have a collective ragegasm. It’s kind of funny in a trollish way. I support OP in this endeavour.
Even the small local AI niche hates ChatGPT, heh.
How would you phrase this differently?
“It looks like this feature was added 5 years ago.”
If asking for confirmation, just ask for confirmation.
Apparently the feature was added 5 years ago.
So your solution is for the user to provide less information, and then only tell people they used ChatGPT if asked?
It just seems like far fewer reps are needed if they say they used ChatGPT.
Additionally, if they don’t say it and no one asks, people might later go looking for a source; at least this way there’s a warning that there might be misinformation.
I know what you’re going to say next: they should research the thing themselves independently of ChatGPT. But honestly, they probably don’t care/have the time to dig through release notes from the past few years.
My partner describes her bowel movements to me when she returns from her daily ablutions.
People also say they googled, unfortunately
Not the same thing.
Google allows for the possibility that the user was able to think critically about the sources a search returned.
ChatGPT is your drunk uncle confidently stating a thing he heard third-hand from Janet in accounting, and you taking him at his word.
Unfortunately now Google is ChatGPT. It provides its own shitty AI answers, and its search results have been corrupted by an ocean of slop.
but at least your drunk uncle won’t boil the oceans in the process too
How dare you, my drunk uncle is completely capable of boiling the oceans! He was even boasting about it at our last family dinner!
AIs provide you with links so you can use your critical thinking.
Google results are like:
Is peertube compatible with the fediverse?
ADVERT
Introduction: A lot of people wonder if peertube works with other peertube instances…
ADVERT
What is peertube? Peertube was set up in 1989 by John Peer…
Pop-up: do you like our publication? Give us your email address.
ADVERT
Why you might want to set up peertube: peertube is a decentralised way…
ADVERT
Please support us! From £30 a month you can help us to write more.
What is the fediverse? The fediverse is a technology…
ADVERT
Articles you may also like:
ADVERT
So can peertube instances talk to each other?
ADVERT
the answer is yes.
ADVERT
In conclusion, peertube is very…
Comments (169)
John Smith wrote at 12:28 on Friday
At this point, an ad blocker is pretty much mandatory for me, just like antivirus software used to be a decade ago (probably more)
PLEASE DISABLE YOUR AD BLOCKER! We use the revenue from annoying you to feed our starving CEO!
What do you mean? It’s like being angry that people bring up that they googled something.
Google: I checked a listing of news sites to find information about a world event directly from professionals who double-check their sources.
ChatGPT: I asked my hairstylist their uninformed opinion on a world event based on overheard conversations.
I mean, a moron could find the wrong information on Google and your hairstylist could get lucky and be right, but odds are one source gives you a shot at reliable results while the other is random and has a massive shit ton of downsides.
Lots of legitimate concerns and issues with AI, but if you’re going to criticize someone for saying they used it, you should at least understand how it works so your criticism is applicable.
It is useful. ChatGPT performs web searches, then summarizes the results in a way customized to what you asked it. It skips the step where you have to sift through a bunch of results and determine “is this what I was looking for?” and “how does this apply to my specific context?”
Of course it can and does still get things wrong. It’s crazy to market it as a new electronic god. But it’s not random, and it’s right the majority of the time.
It might be wrong more often than you think
https://futurism.com/study-ai-search-wrong
Besides the other commenter highlighting the specific nature of the linked study, I will say I’m generally doing technical queries where if the answer is wrong, it’s apparent because the AI suggestion doesn’t work. Think “how do I change this setting” or “what’s wrong with the syntax in this line of code”. If I try the AI’s advice and it doesn’t work, then I ask again or try something else.
I would be more concerned about subjects where I don’t have any domain knowledge whatsoever and I’m not working on a specific application of that knowledge, because then it could be a long while before I realize the response was wrong.
Googling, at least until fairly recently, meant “I consulted an index of the Internet.” It’s a means of getting to the actual bit of information.
Asking ChatGPT is like asking a well-behaved parrot in the library and believing every word it says instead of reading the actual book the librarian would point you towards.
I use it instead of search most of the time nowadays. Why? Because it does proceed to google it for me, parse the search results, read the pages behind those links, summarize everything from there, present it to me in a short, condensed form, and also provide the links where it got the info from. This feature has been here for a while.
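Roughly, that loop looks something like the sketch below. To be clear, this is just a minimal illustration, not ChatGPT’s actual internals: the `web_search` helper, the model name, and the prompt wording are all placeholders I made up, with the OpenAI Python client standing in for whatever model you point it at.

```python
# Minimal sketch of a search-then-summarize loop (not ChatGPT's real pipeline).
# `web_search` is a hypothetical stand-in for whatever search backend you plug in.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def web_search(query: str) -> list[dict]:
    """Hypothetical helper: return [{'url': ..., 'text': ...}, ...] from a search API."""
    raise NotImplementedError("plug in a real search backend here")


def answer_with_sources(question: str) -> str:
    # Fetch a handful of results, trim the page text, and keep the URLs
    # so the model can cite where the info came from.
    results = web_search(question)[:5]
    context = "\n\n".join(f"[{r['url']}]\n{r['text'][:2000]}" for r in results)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the sources below and cite the URLs you used."},
            {"role": "user", "content": f"Question: {question}\n\nSources:\n{context}"},
        ],
    )
    return response.choices[0].message.content
```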
And it still gets shit wrong.
It’s all good, Lemmy users are strongly anti-AI and are genuinely learning right now that ChatGPT, Mistral, Perplexity, etc. can search the web.
Let’s just keep adding more and more layers like a game of telephone!
What do you mean?
Go ask chatGPT
I don’t use ChatGPT; I use LM Studio, which runs local LLMs (AI you can run locally on your PC; I have solar and a solar battery, so there are no CO2 emissions from my queries; I primarily use it for coding questions and practice, translations from Russian/Ukrainian/French, practising French, etc.), then Mistral AI second (French-based), then Perplexity third (American).
I also use Ecosia.org for searches.
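If anyone’s curious, talking to a local model through LM Studio is only a few lines, since it exposes an OpenAI-compatible local server. This is just a sketch of my setup, assuming the server is running on its default port (1234); the model identifier is the one I mention below and will differ depending on what you’ve loaded.

```python
# Minimal sketch: querying a locally hosted model via LM Studio's
# OpenAI-compatible server. The port and model identifier are assumptions;
# check what your own local server actually exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="mistralai/mistral-small-3.2",  # whatever model you have loaded locally
    messages=[{"role": "user", "content": "Translate 'bonne journée' into English."}],
)
print(response.choices[0].message.content)
```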
I asked mistralai/mistral-small-3.2 to elaborate on what you said. Is this what you meant?
The phrase “Let’s just keep adding more and more layers like a game of telephone!” is a metaphorical way of expressing skepticism or concern about the accuracy and reliability of information as it gets passed through multiple layers of interpretation, especially when involving AI systems.
Here’s what it likely means in this context:
Game of Telephone Analogy: In the classic “game of telephone” (or “Chinese whispers”), a message is whispered from one person to another in a line, and by the time it reaches the end, the original message is often distorted or completely changed due to mishearing, misinterpretation, or intentional alteration. The user is suggesting that relying on AI systems to search, summarize, or interpret web content might introduce similar layers of potential inaccuracies or biases.
Layers of Interpretation: The “layers” could refer to the steps involved in using an AI system to access and summarize information:
Concerns About Accuracy: The user might be implying that each additional “layer” (especially when involving AI) could introduce errors, biases, or misinterpretations, much like how a message gets distorted in the game of telephone.
Hostility Toward AI: Given the context you provided (Lemmy users being “strongly anti-AI”), this comment likely reflects a broader distrust of AI’s ability to accurately and reliably convey information without introducing new problems.
In essence, the user is cautioning against blindly trusting AI systems to handle information retrieval and summarization, suggesting that doing so could lead to a breakdown in accuracy or meaning, similar to how a message degrades in a game of telephone.
We aren’t anti-AI. We just ain’t lemmings.
I use AI as inspiration. That’s all it is. A fancy fucking writing prompt.
You use AI for writing prompts? That’s pretty cool. A lot of people use AI for writing prompts, and a lot of writers say it’s great for getting rid of writer’s block.