

Last time I was looking for a job, I just looked up companies in my field and emailed them. I sent two emails and got one interview. Didn’t get the position, though, so I just employed myself instead.
A contrarian isn’t someone who always objects - that’s just a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.
Why do you need to be such a mean jerk about it? I’m familiar with the saying - I just misunderstood you at first, and I already acknowledged my mistake. What more do you want?
I believe that, in reality, wolves domesticated themselves. They started hanging around humans because it was a mutually beneficial arrangement.
Dogs and wolves are the same species - just different subspecies. A Chihuahua could breed with a wolf.
Fair enough. “This is gonna twist so many incel knives” just made it sound like that’s what you were referring to.
Incel violence isn’t really the epidemic you’re making it out to be. There have even been papers written about the lack of it.
I’m not 100% sure, but I don’t see why not if that’s the name you gave them when registering as a customer. They’re all listed on my ID as well.
You’re not hoping anything, you’re just trying to look clever by pretending to be worried about phrasing no one actually misunderstood.
Concern trolling / weaponized empathy - Pretending to care as a disguise for judgment or hostility.
I have 3 first names and I’m legally allowed to use any of them.
Ironically, I had to use AI to figure out what this is supposed to mean.
Here’s the intended meaning:
The author is critiquing the misapplication of AI—specifically, the way people adopt a flashy new tool (AI, in this case) and start using it for everything, even when it’s not the right tool for the job.
Hammers vs. screwdrivers: A hammer is great for nails, but terrible for screws. If people start hammering screws just because hammers are faster and cheaper, they’re clearly missing the point of why screws exist and what screwdrivers are for.
Applied to AI: People are now using large language models (like ChatGPT) or generative AI for tasks they were never meant to do—data analysis, logical reasoning, legal interpretation, even mission-critical decision-making—just because it’s easy, fast, and feels impressive.
So the post is a cautionary parable: just because a tool is powerful or trendy (like generative AI), doesn’t mean it’s suited to every task. And blindly replacing well-understood, purpose-built tools (like rule-based systems, structured code, or human experts) with something flashy but poorly matched is a mistake.
It’s not anti-AI—it’s anti-overuse or misuse of AI. And the tone suggests the writer thinks that’s already happening.
I don’t feel like their wealth changes the equation that much. I don’t expect them to hand me money just because I’m their biological child - and since I’m doing fine on my own anyway, I wouldn’t really need them to.
A self-aware or conscious AI system is most likely also generally intelligent - but general intelligence itself doesn’t imply consciousness. It’s likely that consciousness would come along with it, but it doesn’t have to. An unconscious AGI is a perfectly coherent concept.
No, it generates natural sounding language. That’s all it does.
LLM “hallucinations” are only errors from a user expectations perspective. The actual purpose of these models is to generate natural-sounding language, not to provide factual answers. We often forget that - they were never designed as knowledge engines or reasoning tools.
The fact that they often get things right isn’t because they “know” anything - it’s a side effect of being trained on data that contains a lot of correct information. So when they get things wrong, it’s not a bug in the traditional sense - it’s just the model doing what it was designed to do: predict likely word sequences, not truth. Calling that a “hallucination” isn’t marketing spin - it’s a useful way to describe confident output that isn’t grounded in reality.
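To make “predict likely word sequences, not truth” concrete, here’s a toy sketch - not how real LLMs work internally (they use neural networks trained on huge corpora), just an illustration of the objective. A bigram model counts which word tends to follow which in its training text, then emits the most likely continuation, regardless of whether that continuation is factually true:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text,
# then always emit the most common continuation. The model has no notion of
# truth - only of what word is statistically likely to come next.
training_text = "the cat sat on the mat and the cat slept near the cat"

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Most frequent continuation seen in training; None if the word is unknown.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - the most common word after "the" above
```

If the training text had contained mostly false sentences, the model would reproduce those just as confidently - which is the point of the comment above.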
LLMs have more in common with humans than we tend to admit. In split-brain studies, humans have been shown to invent plausible-sounding explanations for their behavior - even when scientists know those explanations aren’t the real reason they acted a certain way. It’s not that these people are lying per se - they genuinely believe the explanations they’re coming up with. Lying implies they know what they’re saying is false.
LLMs are similar in that way. They generate natural-sounding language, but not everything they say is true - just like not everything humans say is true either.
It means Artificial General Intelligence, and the term has been around for almost three decades.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled “Nanotechnology and International Security”:
“By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.”
If you have a better term, what is it?
Large Language Model.
AI is the parent category, and AGI and LLMs are subcategories of it. AGI and LLMs couldn’t be more different, but that doesn’t mean they’re not AI.
Why? We already have a specific subcategory for it: Large Language Model. Artificial Intelligence and Artificial General Intelligence aren’t synonymous. Just because LLMs aren’t generally intelligent doesn’t mean they’re not AI. That’s like saying we should stop calling strawberries “plants” and start calling them “fake candy” instead. Call them whatever you want, they’re still plants.
Plumber by training, but these days I work as a self-employed general contractor / handyman.
My thinking is that companies looking for employees get flooded with nearly identical applications, so it’s hard to stand out. I’d rather just email, call, or even show up in person and ask for work - whether they’re actively hiring or not. It shows initiative.
Honestly, I didn’t even want the position - I only applied to keep my unemployment payments going. I spent maybe five minutes writing the application and still got the interview.