⭒˚。⋆ 𓆑 ⋆。𖦹

  • 1 Post
  • 17 Comments
Joined 2 years ago
Cake day: June 21st, 2023


  • Last book: The West Passage by Jared Pechaček. Delightfully surreal fantasy; highest recommendation. Almost purposefully confusing at times, it wants you to infer the bizarre structure of its world through the mysteries it presents rather than ever trying to over-explain itself.

    Current book: Everything Must Go: The Stories We Tell About the End of the World by Dorian Lynskey. Also a strong recommend. I’ve been feeling rather apocalyptic lately due to the everything and some dramatic life changes I’m going through, and this is having the intended effect. Its unflinching, academic (yet sometimes humorous) look at various eschatological stories demystifies them and helps reduce the anxiety. Do we really believe we’ll be the lucky generation to witness the closure of all things? Probably not. But also … maybe?


  • This is the biggest factor for me now, too. Not to go all old man Millennial, but humor me for a second:

    I’ve been playing games since the NES era. The scene used to be a lot slower, and while I never played every single game that came out or even owned every console, I was enough of a hobbyist that I could still follow all the major developments. These days, there’s simply TOO MUCH. And I don’t mean to imply that an abundance of choices is bad, just that it’s an absolute firehose that no one person can follow. You have to dedicate yourself to your specific interests, your specific niches. These can be well served by indies and the whole back library of games.

    Because that’s the other thing, we’re starting to more thoroughly recognize games as art, as a library rather than as pure content. Unless you are absolutely committed to sucking on the end of that firehose to catch all the new content at its zenith, what’s really the point?

    Fuck man, it’s time to go back to the NES for me, pick up all those games I never beat as a kid and sink 10,000 hours into learning how to speedrun some of my favorites. There’s simply no need to spend $70-80 fucking dollars on subpar, rushed, exploitative content. Fuck 'em.



  • This got me through so many shifts working in a call center. Could download PuTTY and run it from my user folder without admin permissions and then connect to one of the servers.

    Been a while since I played, but I remember my first ascension was Draconian Skald. I think the rules have changed quite a bit, but I used to love Troll Monk of Cheibriados, too. Stoneskin + Stoneform and a shield of reflection absolutely WRECKED the Elven Halls. For every step I’d take, the elves would get like 4-5 turns and fire off a volley of arrows. I’d take practically no damage and a large portion of the arrows would get reflected back and kill the elves themselves. Literally just waltzing through the place. Slow is life.

    Transmuter used to be a lot of fun, too, but they changed it significantly over the years. I remember playing as a Felid one time and I died while in spider form. Because Felids get several lives, I reincarnated on the same level, ran back to my corpse and condensed it into a poison potion to chuck back at enemies.

    I find it to be one of the simpler roguelikes to learn, but it takes a while to master and there are some very cool interactions once you get the vibe.



  • Oh yes, I think Peter Watts is a great author. He’s very good at tackling high concept ideas while also keeping it fun and interesting. Blindsight has a vampire in it in case there wasn’t already enough going on for you 😁

    Unrelated to the topic at hand, I also highly recommend Starfish by him. It was the first novel of his I read. A dark, psychological thriller about a bunch of misfits working at a deep-sea geothermal power plant and how they cope (or don’t) with the situation at hand.


  • Blindsight mentioned!

    The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

    This has been my biggest problem with it. It places a cognitive load on me that wasn’t there before, having to cut through the noise.






  • I can’t stop thinking about this piece from Gary Marcus I read a few days ago, How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI. It’s a fascinating read on the differences between connectionist and symbolic AI, and the merging of the two into neurosymbolic AI, from someone who understands the topic.

    I recommend giving the whole thing a read, but this little nugget at the end is what caught my attention:

    Why was the industry so quick to rally around a connectionist-only approach and shut out naysayers? Why were the top companies in the space seemingly shy about their recent neurosymbolic successes?

    Nobody knows for sure. But it may well be as simple as money. The message that we can simply scale our way to AGI is incredibly attractive to investors because it puts money as the central (and sufficient) force needed to advance.


    AGI is still rather poorly defined, and taking cues from Ed Zitron (another favorite of mine), there will be a moving of goalposts. Scaling fast and hard to several gigglefucks of power and claiming you’ve achieved AGI is the next big maneuver. All of this is largely just to treat AI as a black hole for accountability: the super smart computer said we had to take your healthcare.




  • I don’t really have a concise answer, but allow me to ramble from personal experience for a bit:

    I’m a sysadmin that was VERY heavily invested in the Microsoft ecosystem. It was all I worked with professionally and really all I had ever used personally as well. I grew up with Windows 3.1 and just kept on from there, although I did mess with Linux from time to time.

    Microsoft continues to enshittify Windows in many well-documented ways. From small things like not letting you customize the Start menu and taskbar, to things like microstuttering from all the data it’s trying to load over the web, to the ads it keeps trying to shove into various corners. A million little splinters that add up over time. Still, I considered myself a power user, someone able to make registry tweaks and write PowerShell scripts to suit my needs.

    Arch isn’t particularly difficult for anyone who is comfortable with OSes, and it has excellent documentation. After installation it is extremely minimal, coming with a relatively bare set of applications to keep it functioning. Using the documentation to make small decisions for yourself, like which photo viewer or paint app to install, feels empowering. Having all those splinters from Windows disappear at once and be replaced with a system that feels both personal and trustworthy does, in a weird way, kind of border on an almost religious experience. You can laugh, but these are the tools that a lot of us live our daily lives on, for both work and play. Removing a bloated corporation from that chain of trust does feel liberating.


    As for why Arch in particular? I think it’s just that level of control. I admit it’s not for everyone, but again, if you’re at least somewhat technically inclined, I absolutely believe it can be a great first distro, especially for learning. Ubuntu has made some bad decisions recently, but even before that, I always found myself tinkering with every install until it became some sort of Franken-Debian monster. And I like pacman way better than apt, fight me, nerds.


  • The latest We’re In Hell revealed a new piece of the puzzle to me, Symbolic vs Connectionist AI.

    As a layman I want to be careful about overstepping the bounds of my own understanding, but as someone who has followed this closely for decades, read a lot of sci-fi, and dabbled in computer science, it’s always been kind of clear to me that AI would be more symbolic than connectionist. Of course it’s going to be a bit of both, but there really are a lot of people out there who believe in AI from the movies; that one day it will just “awaken” once a certain number of connections are made.
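    To make that distinction concrete, here’s a deliberately crude toy sketch of my own (not from the video, and a caricature of both camps): the symbolic version is rules a human wrote and can audit, while the connectionist version is just a pile of learned numbers, which is exactly where the “black box” complaint quoted below comes from.

    ```python
    import random

    # Symbolic: the knowledge is explicit rules a human wrote and can point to.
    def symbolic_is_spam(subject):
        banned_phrases = ["free money", "act now", "you have won"]
        return any(phrase in subject.lower() for phrase in banned_phrases)

    # Connectionist (caricature): the knowledge is a pile of learned weights.
    # Even if this worked well, nothing in the numbers tells you *why* it decided.
    weights = [random.uniform(-1, 1) for _ in range(256)]

    def connectionist_is_spam(subject):
        score = sum(weights[ord(ch) % 256] for ch in subject.lower())
        return score > 0

    print(symbolic_is_spam("You have WON free money!"))       # True, and you can explain why
    print(connectionist_is_spam("You have WON free money!"))  # True or False, good luck explaining why
    ```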

    Cons of Connectionist AI: Interpretability: Connectionist AI systems are often seen as “black boxes” due to their lack of transparency and interpretability.

    For a large number of the applications AI is currently being pushed for, transparency and accountability are negatives. This is just THE PURPOSE.

    Even taking a step back from the apocalyptic killer AI mentioned in the video, we see the same in healthcare. The system is beyond us, smarter than us, processing larger quantities of data and making connections our feeble human minds can’t comprehend. We don’t have to understand it, we just have to accept its results as infallible and we are being trained to do so. The system has marked you as extraneous and removed your support. This is the purpose.


    EDIT: In further response to the article itself, I’d like to point out that misalignment is a very real problem, but it is anthropomorphized in ways it absolutely should not be. I want to reference a positive AI video, AI learns to exploit a glitch in Trackmania. To be clear, I have nothing but immense respect for Yosh and his work writing his homegrown Trackmania AI. Even he anthropomorphizes the car and the carrot, but understand that the rewards are just a fairly simple system for maximizing a numerical score.

    This is what LLMs are doing: they are maximizing a score by trying to serve an answer to your prompt that you find satisfactory. I’m not gonna source it, but we all know that a lot of people don’t want to hear the truth, they want to hear what they want to hear. Tech CEOs have been mercilessly beating the algorithm to do just that.
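    To make the “maximizing a numerical score” point concrete, here’s a deliberately dumb sketch of my own (purely illustrative, nothing to do with Yosh’s actual Trackmania code or with how any particular LLM is trained): the “agent” never understands the task, it just keeps whatever parameters happen to make the number go up.

    ```python
    import random

    # Toy score-maximizer: hill climbing on a made-up reward function.
    def reward(params):
        # Stand-in for "distance along the track": peaks when every param is near 0.7.
        return -sum((p - 0.7) ** 2 for p in params)

    best = [random.random() for _ in range(3)]
    for _ in range(10_000):
        candidate = [p + random.gauss(0, 0.05) for p in best]
        if reward(candidate) > reward(best):
            best = candidate  # kept purely because the score went up; no "understanding" involved

    print([round(p, 3) for p in best], round(reward(best), 6))
    ```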

    Even stripped of all reason, language can convey meaning and emotion. It’s why sad songs make you cry, it’s why propaganda and advertising work, and it’s why that abusive ex got the better of you even though you KNEW you were smarter than that. None of us are as complex as we think. It’s not hard to see how an LLM will not only provide a sensible response to a sad prompt, but may make efforts to infuse it with appropriate emotion. It’s hard-coded into the language; they can’t be separated, and the fact that the LLM wields emotion without understanding, like a monkey with a gun, is terrifying.

    Turning this stuff loose on the populace like this is so unethical there should be trials, but I doubt there ever will be.