

Yeah but that’s not limited to physical DVD size constraints.
transmitting over 125,000 gigabytes of data per second over 1,120 miles (1,802 kilometers).
Please include usable metrics in the title
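For reference, a rough conversion (assuming the headline figure was on the order of 1 petabit per second, which is what 125,000 GB/s works out to):

```python
# Rough unit conversion; the ~1 Pb/s interpretation is an assumption based on the GB/s figure above.
gigabytes_per_second = 125_000
gigabits_per_second = gigabytes_per_second * 8          # 1 byte = 8 bits
petabits_per_second = gigabits_per_second / 1_000_000   # 1 Pb = 1,000,000 Gb
print(petabits_per_second)  # 1.0 -> roughly 1 petabit per second
```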
XMPP is significantly less decentralized, which lets it """cut corners""" compared to the Matrix protocol and scale significantly better. (In heavy quotes, as XMPP isn't really cutting corners; true decentralization just requires more work to achieve seemingly "the same result".)
An XMPP or IRC channel with a few thousand users is no problem, whereas Matrix can have problems at that scale. On the other hand, any one Matrix homeserver going down does not impact users who aren't on that homeserver, whereas XMPP is centralized enough that a single server outage can take down a whole channel.
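A toy sketch of the availability difference (not the real protocols; server names are placeholders): a Matrix-style room is replicated on every participating homeserver, while an XMPP-style channel lives on the single server hosting it.

```python
# Toy model only - not actual Matrix or XMPP code; server names are made up.
matrix_room_replicas = {"hs1.example", "hs2.example", "hs3.example"}  # each homeserver keeps a copy of the room
xmpp_channel_host = "muc.hs1.example"                                 # the one server hosting the channel

def matrix_room_reachable(down_servers: set[str]) -> bool:
    # Users on any surviving homeserver can keep using the room.
    return bool(matrix_room_replicas - down_servers)

def xmpp_channel_reachable(down_servers: set[str]) -> bool:
    # If the hosting server is down, the channel is down for everyone.
    return xmpp_channel_host not in down_servers

outage = {"hs1.example", "muc.hs1.example"}
print(matrix_room_reachable(outage))   # True
print(xmpp_channel_reachable(outage))  # False
```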
Meanwhile IRC is a 90s protocol that doesn’t make any sense in the modern world of mainly mobile devices.
XMPP also doesn't change much; the last proper addition to the protocol (from what I can tell, going by the website) was on 2024-08-30: https://xmpp.org/extensions/xep-0004.html
Why the downvotes? Apple silicon ARM is not the same ISA as any existing ARM. There are extra undocumented instructions and features. Unless you want to reverse engineer all of that and make your own ARM CPU, you cannot run (all of) macOS on an off-the-shelf ARM chip, making it effectively "impossible".
No, it'd still be a problem; every diff between commits is expensive to render for the web, even if "only one company" is scraping it, "only one time". Many of these applications are designed for humans, not scrapers.
The main issue is UX imo. On Windows 11, it’s “5 clicks”, but you have to open the settings app and find the setting two submenus deep. On KDE, it’s right click > configure application launcher > toggle setting > apply.
I was very annoyed when I got this, but remembered that it's KDE, and turning it off is 4 clicks. Proprietary software often doesn't let you turn this off (easily). Windows has this "feature" too; where is the setting for that?
I don’t think it’s a productive “feature”, but considering it can be turned off so easily I don’t consider it a complete showstopper.
Scrapers can send these challenges off to dedicated GPU farms or even FPGAs, which are an order of magnitude faster and more efficient.
Let's assume, for the sake of argument, that an AI scraper company actually attempted this. They don't, but let's assume it anyway.
The next Anubis release could include, for example, SHA256 instead of SHA1. This would be a simple and basically transparent update for admins and end users. The AI company that invested in offloading the PoW to somewhere more efficient now has to spend significantly more resources changing their implementation than it took the devs and users of Anubis.
Yes, it technically remains a game of "cat and mouse", but one heavily stacked against the cat. One step for Anubis is 2000 steps for a company reimplementing its client in more efficient hardware. Most of the Anubis changes can even be done without impacting the end users at all. That's a game AI companies aren't willing to play, because they've basically already lost. It doesn't really matter how "efficient" the implementation is if it can be rendered unusable by a small Anubis update.
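To make that asymmetry concrete, here's a minimal sketch of a hash-based proof-of-work check (not Anubis's actual implementation; the difficulty scheme and parameters are made up). The hash algorithm is a one-line, user-transparent choice on the server side, while a GPU or FPGA pipeline built around the old algorithm has to be redesigned:

```python
import hashlib
import os

def make_challenge() -> str:
    # Random challenge handed to the client together with the difficulty.
    return os.urandom(16).hex()

def verify(challenge: str, nonce: int, difficulty: int = 4, algo: str = "sha256") -> bool:
    # Accept if hash(challenge + nonce) starts with `difficulty` hex zeros.
    # Swapping `algo` (e.g. "sha1" -> "sha256") is trivial here, but invalidates
    # fixed-function hardware built for the old algorithm.
    digest = hashlib.new(algo, f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

def solve(challenge: str, difficulty: int = 4, algo: str = "sha256") -> int:
    # What the browser-side worker effectively does: brute-force a nonce.
    nonce = 0
    while not verify(challenge, nonce, difficulty, algo):
        nonce += 1
    return nonce
```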
Someone making an argument like that clearly does not understand the situation. Just 4 years ago, a robots.txt was enough to keep most bots away, and hosting personal git on the web required very few resources. With AI companies actively profiting off stealing everything, a robots.txt doesn't mean anything anymore. Now even a relatively small git web host takes an insane amount of resources. I'd know - I host a Forgejo instance. Caching doesn't matter, because diffs between two random commits are likely unique. Ratelimiting doesn't matter, because they will use different IP (ranges) and user agents; it would also heavily impact actual users "because the site is busy".
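For context, this is roughly all it used to take - a robots.txt like the one below, which is purely advisory: well-behaved crawlers honor it, and the current crop of AI scrapers largely doesn't (the user agent shown is just one example):

```
# robots.txt - advisory only; nothing enforces it
User-agent: GPTBot
Disallow: /
```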
A proof-of-work solution like Anubis is the best we have currently: the least possible impact on end users, while keeping most (if not all) AI scrapers off the site.
"Yes", for any bits the user sees. The frontend UI can be behind Anubis without issues. The API, including both user and federation endpoints, cannot. We expect "bots" to use an API, so you can't put human verification in front of it. These "bots" also include applications that aren't aware of Anubis or are unable to pass it, like all third-party Lemmy apps.
That does stop almost all generic AI scraping, though it does not prevent targeted abuse.
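A rough sketch of the routing split being described (the path prefixes are illustrative assumptions, not Lemmy's or Anubis's actual route lists):

```python
# Illustrative only: decide which requests should get the human-verification challenge.
CHALLENGE_EXEMPT_PREFIXES = (
    "/api/",         # user-facing API consumed by third-party apps
    "/inbox",        # ActivityPub federation traffic
    "/.well-known/", # discovery endpoints used by other servers
)

def needs_challenge(path: str) -> bool:
    # Browser-rendered UI goes through the proof-of-work check;
    # API and federation traffic is expected to be "bots" and must bypass it.
    return not path.startswith(CHALLENGE_EXEMPT_PREFIXES)

print(needs_challenge("/post/12345"))        # True  - human-facing page
print(needs_challenge("/api/v3/post/list"))  # False - apps can't solve the challenge
```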
Surely there wasn't an exploit for the half-a-year-out-of-date kernel (article screenshots are from April 2025; the uname kernel release is from a CBL-Mariner version released September 3rd, 2024).