  • …but, just because we’ve gotten ahead of trouble and found solutions thus far, doesn’t mean that an unintended bit of code, or hardware fault, or lack of imagination can’t cause consequences further down the road.

    Absolutely true.

    I guess my thought is that the benefits of our rapid growth outweigh the consequences of forgotten technology. I’ll admit, though, that I’m not unbiased; I have a vested interest. I do very well professionally being the bridge between some older technologies and modern ones myself.



  • There are a lot of ifs in my examples. It may never happen, and we’ll get the advantage of all the ideas that are able to be made reality through accessibility. However, it’s better to think about it now rather than contend with the eventuality all at once when a catastrophe occurs. You’re right that doom and gloom isn’t helpful, but I don’t think the broader idea is without merit.

    There are some actual real-life examples that match your theoreticals, but the piece missing is the scale of the consequences. What has generally occurred is that the fallout from the old thing failing wasn’t that big of a deal, or a modern solution could be designed and built to completely replace the legacy one, even without a full understanding of it.

    A really, really small example of this is from my old 1980s Commodore 64 computer. At the time it used a very revolutionary sound chip to make music and sound effects, called the SID chip. Here’s one of them, constructed in 1987.

    It combined digital technologies (which are still used today) with analog technologies (which nobody makes anymore in the same way). Sadly, these chips also have a habit of dying over time because of how they were originally manufactured. With the supply of them continuously shrinking, there were efforts to come up with a modern replacement. Keep in mind, these efforts came from hobbyists. What they came up with was this:

    This is essentially a whole Raspberry Pi computer that fits in the same socket in the 1980s Commodore 64, accepts the incoming music instructions from the computer, and runs custom-written software to produce the same desired output as the legacy digital/analog SID chip designed in 1982. The computing power in this modern SID replacement is more than 30x that of the entire Commodore 64 from the 80s! It could be considered overkill to use so much computing power where the original didn’t, but again, compute is dirt cheap today. This new part isn’t expensive either: it’s about $35 to buy.
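
    To make the approach concrete, here’s a minimal sketch of the idea, not the actual replacement’s firmware: the software watches the register writes the C64 makes and synthesizes the audio itself. The class and register layout below are simplified assumptions purely for illustration (the real SID has three voices, four waveforms, ADSR envelopes, and an analog filter).

    ```python
    # Toy sketch of software SID emulation (voice 1 frequency only).
    SID_CLOCK = 985_248        # PAL C64 master clock, in Hz
    SAMPLE_RATE = 44_100       # output sample rate

    class TinySid:
        def __init__(self):
            self.regs = [0] * 29    # the SID exposes 29 registers
            self.phase = 0.0

        def write(self, reg, value):
            """The C64 'plays music' by poking registers; store them."""
            self.regs[reg] = value & 0xFF

        def freq_hz(self):
            # Voice 1 pitch: 16-bit register value * clock / 2^24
            f16 = self.regs[0] | (self.regs[1] << 8)
            return f16 * SID_CLOCK / (1 << 24)

        def render(self, n_samples):
            """Synthesize a plain square wave at the programmed pitch."""
            out = []
            step = self.freq_hz() / SAMPLE_RATE
            for _ in range(n_samples):
                out.append(1.0 if self.phase % 1.0 < 0.5 else -1.0)
                self.phase += step
            return out

    sid = TinySid()
    sid.write(0, 0x67)   # frequency, low byte
    sid.write(1, 0x11)   # frequency, high byte (~262 Hz, middle C)
    samples = sid.render(256)
    ```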

    This is what I think will happen when our legacy systems finally die without the knowledge to service or maintain them: modern engineers using modern technologies will replace them, providing the same function.




  • Gotcha, thank you for the extra context so I understand your point. I’ll respond to your original statement now that I understand it better:

    I ALSO think the author would prefer more broad technical literacy, but his core argument seemed to be that those making things don’t understand the tech they’re built upon and that unintended consequences can occur when that happens.

    I think the author’s argument on that is also not a great one.

    Let’s take your web app example. As you said, you can make the app, but you don’t understand the memory allocation. Why? Because the high-level language or framework you wrote it in does the memory management and garbage collection. However, there are many, many, MANY more layers of abstraction besides just your code and the interpreter. Do you know the web server front to back? Do you know which ring your app or the web server is operating in inside the OS (ring 3, BTW)? Do you know how the IP stack works in the server? Do you know how the networking works that resolves names to IP addresses or routes the traffic appropriately? Do you know how the firewalls work that the traffic crosses when it leaves the server? Back on the server, do you know how the operating system makes calls to the hardware via device drivers (ring 1), or how those calls are handled by the OS kernel (ring 0)? Do you know how the system bus works on the motherboard, or how the L1, L2, and L3 caches affect the operation and performance of the server overall? How about the fact that assembly language isn’t even the bottom of the abstraction stack? Below that, all of this data is merely an abstraction of binary, which is really just the presence or absence of voltage on a pin or in a bit register in ICs scattered across the system.
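
    To make just that first layer concrete: in a high-level language, the memory-management layer is invisible. A tiny Python illustration (the Buffer class is an invented stand-in), using weakref only so we can watch the runtime reclaim memory we never explicitly freed:

    ```python
    import weakref

    class Buffer:
        """Stand-in for some allocation your web app makes."""
        def __init__(self, size):
            self.data = bytearray(size)

    buf = Buffer(1_000_000)       # ~1 MB, allocated for you
    probe = weakref.ref(buf)      # lets us watch the object's lifetime

    print(probe() is buf)         # True: the object is alive
    del buf                       # drop the last reference...
    print(probe())                # None: the runtime reclaimed it. No
                                  # free() call anywhere; that entire
                                  # layer is abstracted away from you.
    ```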

    I’ll say probably not. And that’s just fine! Why? Because unless your web app is going to be loaded onto a spacecraft with a 20-to-40-year lifespan that you’ll never be able to touch again, all of that extra knowledge and understanding will have only slight impacts on the web app over its entire life. Once you get one or maybe two levels of abstraction down, the knowledge is a novelty, not a requirement. There are exceptions to this if you’re writing software for embedded systems where you have limited system resources, but again, that is an edge case that very, very few people will ever need to worry about. The people in those professions generally do have a deep understanding of the platforms they’re responsible for.

    Focus on your web app. Make sure it’s solving the problem it was written to solve. Yes, you might need to dive a bit deeper to eke out some performance, but that comes with time and experience anyway. The author talks like even the most novice people need the deepest understanding through all layers of abstraction. I think that is too much of a burden, especially when it acts as a barrier to people being able to jump in and use the technology to solve problems.

    Perhaps the best example of the world I think the author wants would be the 1960s Apollo program. This was a time when the pinnacle of technology was being deployed in real time to solve world-moving problems. Humankind was trying to land on the moon! The most heroic optimization of machines and procedures had to be accomplished for this to have even a chance of going right. The best of the best had to know every. little. thing. about. everything. People’s lives were at stake! National pride was at stake! Failure was NOT an option! All of that speaks to what the author seems to want for everyone today.

    However, that’s trying to solve a problem that doesn’t exist today. Compute power today is CHEAP!!! High-level programming languages and frameworks are so easy to understand that programming is accessible to everyone with a device and a desire to use it. We’re not going to the moon with this. It’s the kid down the block who figured out how to use If This Then That to make a light bulb turn on when he farts into a microphone. The beauty is the accessibility. The democratization of compute. We don’t need gatekeepers demanding the deepest commitment to understanding before the primitive humans are allowed to use fire.
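
    And that kid’s entire “program” really is about this simple. A sketch of the trigger-action idea; both functions here are hypothetical stand-ins, not any real microphone or smart-bulb API:

    ```python
    import random
    import time

    THRESHOLD = 0.6  # arbitrary loudness cutoff, on a 0.0-1.0 scale

    def read_mic_level():
        # Hypothetical stand-in for a real microphone API; returns
        # random noise here just so the example runs on its own.
        return random.random()

    def set_bulb(on):
        # Hypothetical stand-in for a real smart-bulb API.
        print("bulb on" if on else "bulb off")

    # The whole program: IF this (a loud noise) THEN that (light on).
    for _ in range(20):
        set_bulb(read_mic_level() > THRESHOLD)
        time.sleep(0.1)
    ```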

    Are there going to be problems or things that don’t work? Yes. Will the net benefit of cheap and readily available compute in the hands of everyone be greater than the detriments? I believe yes. It appears the author disagrees with me.

    /sorry for the wall of text









  • since every cell in a person’s body dies in less than 7 years, by the time of the next term, no cell will have been alive having served the first term and therefore, it’s allowed.

    There are a couple tablespoons’ worth of cells in our brains that live our entire lives, so that argument should be rejected too. I would expect the GOP rebuttal to be that GOP candidates have no brains, and therefore their original argument should be valid, which I admit would be tough to refute on its surface given the large body of past behavior of GOP Presidents.

    I would then have to argue that the Qualifications Clause set forth in Article II, Section 1, Clause 5 of the US Constitution requires Presidential candidates to be at least 35 years old, and they’ve just admitted their brainless candidates are 7 years old or less, so they would not be eligible to run for President of the USA.


  • Now, with the advanced automation in building these, combined with the increased difficulty of repair (fine-work soldering, firmware debugging, and the like), it makes way more sense to just replace the whole thing.

    The other valid component to your argument is the cost of labor now. It is more expensive to maintain a staff of people to perform repairs and manage the logistics of transporting units for service than it is to simply lose 100% of the wholesale value of the handful of items that fail within the warranty period. Labor, especially skilled labor, is really, really expensive in the Western world.
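
    A back-of-the-envelope comparison makes the point; every number below is invented for illustration, and real figures vary wildly by product and region:

    ```python
    # Invented example numbers; real figures vary by product and region.
    units_sold     = 100_000
    failure_rate   = 0.02      # 2% fail inside the warranty period
    wholesale_cost = 40.00     # cost to just ship a replacement unit

    labor_rate        = 60.00  # fully loaded skilled-labor rate, per hour
    hours_per_repair  = 1.5
    shipping_per_unit = 25.00  # two-way logistics to a repair depot

    failures = units_sold * failure_rate   # 2,000 units

    replace_total = failures * wholesale_cost
    repair_total  = failures * (labor_rate * hours_per_repair
                                + shipping_per_unit)

    print(f"write off and replace: ${replace_total:,.0f}")  # $80,000
    print(f"repair depot:          ${repair_total:,.0f}")   # $230,000
    ```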


  • I think the author was referring to the makers of the device not understanding what they’re making, not so much the end user.

    Just to make sure I’m following your thread of thought, are you referring to this part of the author’s opinion piece or something else in his text?

    “This wouldn’t matter if it were just marketing hyperbole, but the misunderstanding has real consequences. Companies are making billion-dollar bets on technologies they don’t understand, while actual researchers struggle to separate legitimate progress from venture capital fever dreams. We’re drowning in noise generated by people who mistake familiarity with terminology for comprehension of the underlying principles.”


  • The author’s take is detached from reality, filled with hypocrisy and gatekeeping.

    This isn’t nostalgia talking — it’s a recognition that we’ve traded reliability and understanding for the illusion of progress.

    It absolutely is nostalgia talking. Yes, your TI-99 fires up immediately when plugged in, and it’s old. However, my Commodore 64 of the same era risks being fried, because the 5V regulator doesn’t age well and, when it fails, dumps higher voltage right into the RAM and CPU. Oh, and C64 machines were never built with overvoltage protection, as a cost-saving measure. So don’t confuse age with some idea of golden-era reliability. RAM ICs also regularly failed in computers of that age. This is why you had RAM testing programs and socketed ICs. When was the last time, Mr. Author, that you had to replace a failed DIMM in your modern computer?
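
    Those RAM testing programs boiled down to a simple pattern test: write known values to every cell, read them back, and flag mismatches. A minimal sketch of the idea, with a bytearray standing in for the raw DRAM a real tester would address:

    ```python
    # Toy version of a 1980s-style RAM pattern test.
    def ram_test(mem):
        bad = set()
        for pattern in (0x00, 0xFF, 0x55, 0xAA):  # all-0s, all-1s, alternating
            for addr in range(len(mem)):
                mem[addr] = pattern
            for addr in range(len(mem)):
                if mem[addr] != pattern:
                    bad.add(addr)   # a stuck or shorted bit lives here
        return sorted(bad)

    memory = bytearray(64 * 1024)   # the C64's full 64 KB address space
    print("failed cells:", ram_test(memory) or "none")
    ```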

    Today’s innovation cycle has become a kind of collective amnesia, where every few years we rediscover fundamental concepts, slap a new acronym on them, and pretend we’ve revolutionized computing. Edge computing? That’s just distributed processing with better marketing. Microservices? Welcome to the return of modular programming, now with 300% more YAML configuration files. Serverless? Congratulations, you’ve rediscovered time-sharing, except now you pay by the millisecond.

    By that logic, even the TI-99 he’s loving on is just a fancier ENIAC or UNIVAC. All technology is built upon the era before it. If there were no technological or production-cost improvements, we’d just use the old version. Yes, there is a regular shift in computing philosophy, but it is driven by new technologies, and usually by computing performance descending to accessibility at commodity pricing. The Raspberry Pi wasn’t a revolutionarily fast computer, but it changed the world because it was enough computing power and it was dirt cheap.

    There’s something deeply humbling about opening a 40-year-old piece of electronics and finding components you can actually identify. Resistors, capacitors, integrated circuits with part numbers you can look up. Compare that to today’s black-box system-on-chip designs, where a single failure means the entire device becomes e-waste.

    I agree, there is something appealing about that to you and me, but most people don’t care…and that’s okay! To them it’s a tool to get something done. They are not in love with the tool, nor do they need to be. There were absolutely users of TI-99 and C64 computers in the 80s who didn’t give two shits about the shift-register ICs or the UART that made the modem work, but who loved that they could get invoices from their loading dock sent electronically instead of on a piece of paper carried (and lost!) through multiple hands.

    Mr. Author, no one is stopping you from using your TI-99 today, but notably you didn’t use it to write your article either. Why is that? Because the TI-99 has a tiny fraction of the function and complexity of a modern computer. Creating something close to a modern computer from discrete components with “part numbers you can look up” would be massively expensive, incredibly slow, and would consume massive amounts of electricity compared to today’s modern computers.

    This isn’t their fault — it’s a systemic problem. Our education and industry reward breadth over depth, familiarity over fluency. We’ve optimized for shipping features quickly rather than understanding systems thoroughly. The result is a kind of technical learned helplessness, where practitioners become dependent on abstractions they can’t peer beneath.

    Ugh, this is frustrating. Do you think a surgeon understands how the CCD electronic camera attached to their laparoscope works? Is the surgeon uneducated because they aren’t fluent in the circuit theory that allows the camera to display the guts of the patient they’re operating on? No, of course not. We want that surgeon to keep studying new surgical techniques, not trying to use Ohm’s Law to calculate the current draw of the device they’re using. Mr. Author, you and I hobby at electronics (and vintage computing), but just because it’s an interest of ours doesn’t mean it has to be everyone’s.

    What We Need Now: We need editors who know what a Bode plot is. We need technical writing that assumes intelligence rather than ignorance. We need educational systems that teach principles alongside tools, theory alongside practice.

    Such gatekeeping! So unless you know the actual engineering principles behind a device you’re using, you shouldn’t be allowed to use it?

    Most importantly, we need to stop mistaking novelty for innovation and complexity for progress.

    Innovation isn’t just creating new features or functionality. In fact, I’d argue most innovation is taking existing features or functions and delivering them at substantially less cost/effort.

    As I’m reading this article, I am thinking about a farmer watching Mr. Author eat a sandwich made with bread. Does Mr. Author know when to till soil or plant seed? How about the amount of irrigation durum wheat needs during the hot season? How about when to harvest? What moisture level should the resulting harvest have before being taken to market or put in long-term storage? Yet there he sits, eating the sandwich, blissfully unaware of all the steps and effort needed just to make the wheat that goes into the bread. The farmer sits and wonders if Mr. Author’s next article will deride the public for just eating bread and how we’ve forgotten how to grow wheat. Will Mr. Author say we need fewer people ordering sandwiches and more people consulting US GIS maps for rainfall statistics and studying nitrogen-fixing techniques for soil health? No, probably not.

    The best engineering solutions are often elegantly simple. They work reliably, fail predictably, and can be understood by the people who use them.

    Perhaps, but these simple solutions also frequently offer only simple functionality. Additionally, “the best engineering solutions” are often some of the most expensive. You don’t always need the best, and if the best is the only option, that may mean going without, which is worse than a mediocre solution, and is what we frequently had in the past.

    They don’t require constant updates or cloud connectivity or subscription services. They just work, year after year, doing exactly what they were designed to do.

    The reason your TI-99 and my C64 don’t require constant updates is that they were born before the concept of cybersecurity existed. If you’re going to have internet-connected devices, then it’s nearly a requirement that they receive security updates.

    If you don’t want internet-connected devices, you can get those too, but they may be extremely expensive, so pony up the cash and put your money where your mouth is.

    That TI-99/4A still boots because it was designed by people who understood every component, every circuit, every line of code.

    It is a machine of extremely limited functionality with a comparatively simple design and construction. I don’t think even a DEC PDP-11 minicomputer sold in the same era was entirely understood by a handful of people, and even that machine had a tiny fraction of the functionality of today’s cheap commodity PCs.

    It works because it was built to work, not to generate quarterly revenue or collect user data or enable some elaborate software-as-a-service business model.

    Take off the rose-colored glasses. It was made as a consumer electronics product at the lowest cost they thought they could get away with and still have it sell. Sales of it absolutely served quarterly revenue numbers, even back in the 1980s.

    We used to build things that lasted.

    We don’t need most of these consumer electronics to last. Proof positive: the computer Mr. Author is writing his article on is unlikely to be an Intel-based 486 running at 33 MHz from the mid-90s (or a 68030 Mac). If it still works, why isn’t he using one? Could it be that he wants the new features and functionality like the rest of us? Over-engineering is a thing, and it sounds like that’s what the author is preaching.

    Apologies if my post turned into a rant.





  • From your helpful link:

    “If users don’t want to verify their age, or if they’re under 18, they will still be able to have an account with certain features limited. Bluesky will block “adult-appropriate content” and turn off certain features, such as direct messaging.”

    So you can still use Bluesky without age/identity verification, including being able to post.