⭒˚。⋆ 𓆑 ⋆。𖦹

  • 1 Post
  • 61 Comments
Joined 2 years ago
Cake day: June 21st, 2023


  • not only is Windows not very profitable anymore, the real money is at businesses.

    Hear me out, this is exactly why they care. Windows as a product isn’t profitable anymore, but its market share is. Apple has always enjoyed its locked-down ecosystem, and Google is trying to completely block sideloading on devices whose bootloaders we already largely don’t control. It’s no secret Microsoft has been seething with jealousy for years.

    https://gs.statcounter.com/os-market-share/desktop/worldwide

    Put yourself in the shoes of a soulless corporate ghoul: how do you make those numbers work for you? Why do you think they have the absolute gall to tell you to throw your computer out and get one that supports TPM 2.0? Why do you think there are still so many people, willing or not, who will swallow that bitter pill that is Windows 11?

    I’m not trying to call you out in particular here or anything, but I think it’s foolish to assume they don’t care.


  • Some others have already said “embrace, extend, extinguish,” but here’s my take on it, paired with Secure Boot and TPM 2.0:

    • Embrace: Secure Boot can already work with Linux, how lucky! This gives them not exactly control, but authoritative denial over your boot process and hardware.
    • Extend: This is the part that remains to be seen. If they feel threatened enough by the shift in the gaming landscape (mind you, not over losing sales or the hearts of gamers, but again over control), they may begin to make Linux offerings. A concession to allow an honest-to-god, thick Office client on Linux would certainly appeal to some. Adobe gets in on that action to back them up with Photoshop, Activision with Call of Duty, etc.
    • Extinguish: TPM 2.0. One of its less talked-about features is remote attestation (“Remote attestation allows changes to the user’s computer to be detected by authorized parties. For example, software companies can identify unauthorized changes to software, including users modifying their software to circumvent commercial digital rights restrictions.” - DRM). We’re already seeing this with CoD on Windows. They’ll allow you to run much-requested Windows software on Linux, possibly even provide direct support, but at the cost of, again, not precisely control but authoritative denial. That still works out to be control in most ways: if you want to use the software and they get to remotely attest, they can insist that part of that attestation is you running some sort of telemetry, or not running software they disagree with.

    The reason I think this route is highly likely is because it plays well with uninformed consumers. To the untrained eye it looks like they’re giving ground and actually allowing for broader support of their software while effectively gaining control over the environment once again and removing the biggest benefits of running FOSS on your system.


  • I don’t know why I expected a Zitron-esque lambasting from fortune.com, but reading the article is disappointing:

    But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.

    Sure. Let’s blame anything but the AI 🙄


  • YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated, and complex. Magic and mystic Unknowables do not exist.

    I’ll grant you that the possibility exists. But like the idea that all your atoms could perfectly align such that you could run through a solid brick wall, the improbability makes it a moot point.

    Therefore, at some point in time, it is a physical possibility for a person (or team of people) to replicate these exact mechanisms.

    This is the part I take umbrage at. I agree, LLMs take up too much oxygen in the room, so let’s set them aside and talk about neural networks. They are a connectionist approach, one which holds that adding enough connections will eventually form a proper model, waking sentience and AI from the machine.

    Hinton and Sutskever continued [after their seminal 2012 article on deep learning] to staunchly champion deep learning. Its flaws, they argued, are not inherent to the approach itself. Rather they are the artifacts of imperfect neural-network design as well as limited training data and compute. Some day with enough of both, fed into even better neural networks, deep learning models should be able to completely shed the aforementioned problems. “The human brain has about 100 trillion parameters, or synapses,” Hinton told me in 2020.

    "What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain.

    “Deep learning is going to be able to do everything,” he said.

    (Quoting Karen Hao’s Empire of AI from the Gary Marcus article)

    I keep citing Gary Marcus because he is “an American psychologist, cognitive scientist, and author, known for his research on the intersection of cognitive psychology, neuroscience, and artificial intelligence (AI)” [wiki].

    The reason all this is so important is because it refutes the idea that you can simply scale, or brute-force power, your way to a robust, generalized model.

    If we could have only three knowledge frameworks, we would lean heavily on topics that were central to Kant’s Critique of Pure Reason, which argued, on philosophical grounds, that time, space, and causality are fundamental.

    Putting these on computationally solid ground is vital for moving forward.


    So ultimately, talking about any of this is putting the cart before the horse. Before we even discuss the idea that any possible approach could achieve sentience, I think we first need to actually understand what sentience is in ourselves and how it was formed. There currently are just too many variables to solve the equation. I am outright refuting the idea that an imperfect understanding, using imperfect tools, with imperfect methods, with any amount of computing power, no matter how massive, could chance upon sentience. Unless you’re ready to go the infinite monkeys route.

    We may get things that look like it, or emulate it to some degree, but even then we are incapable of judging sentience.

    (From “Computer Power and Human Reason: From Judgment to Calculation” (1976))

    This phenomenon is comparable to the conviction many people have that fortune-tellers really do have some deep insight, that they do “know things,” and so on. This belief is not a conclusion reached after a careful weighing of evidence. It is, rather, a hypothesis which, in the minds of those who hold it, is confirmed by the fortune-teller’s pronouncements. As such, it serves the function of the drunkard’s lamppost we discussed earlier: no light is permitted to be shed on any evidence that might disconfirm it and, indeed, anything that might be seen as such evidence by a disinterested observer is interpreted in a way that elaborates and fortifies the hypothesis.

    It is then easy to understand why people conversing with ELIZA believe, and cling to the belief, that they are being understood. The “sense” and the continuity the person conversing with ELIZA perceives is supplied largely by the person himself. He assigns meanings and interpretations to what ELIZA “says” that confirm his initial hypothesis that the system does understand, just as he might do with what a fortune teller says to him.

    We’ve been doing this since ELIZA, the first chatbot, in 1966. EDIT: we are also still trying to determine sentience in other animals. Like, we have a very tough time with this.

    It’s modern-day alchemy. It’s such an easy thing to imagine, why couldn’t it be done? Surely there’s some scientific formula or breakthrough just out of reach that could eventually crack the code. I dunno, I find myself thinking about Fermi’s paradox and the Great Filter more …


  • There’s no getting through to you people. I cite sources, structure arguments, make analogies, and rely on solid observations of what we see today and how it works, and you call MY argument hand-wavy when you go on to say things like:

    LLMs today are like DaVinci’s corkscrew flight machine. They’re clunky, they technically perform something resembling the end goal but ultimately in the end fail the task they were built for in part or in whole.

    But then the Wright brothers happened.

    Do you hear yourself?

    I admit that the Chinese Room thought experiment is just that, a thought experiment. It does not cover the totality of what’s actually going on, but it remains an apt analogy, and if it seems limiting, that’s because the current implementations of neural nets are limiting. You can talk about mashing them together, modifying them in different ways to skew their behavior, but the core logic behind how they operate is indeed a limiting factor.

    AI derangement or psychosis is a term meant to refer to people forming incredibly unhealthy relationships with AI to the point where they stop seeing its shortcomings, but I am noticing more and more that people are starting to throw it around like the “Trump Derangement Syndrome” term, and that’s not okay.

    Has it struck a nerve?


    It’s like asserting you’re going to walk to India by picking a random direction and just going. It could theoretically work, but:

    1. You are going to encounter a multitude of issues with this approach, some surmountable, some less so
    2. The lack of knowledge and foresight makes this a dangerous approach; despite India being a large country, not all trajectories will bring you there
    3. There is immense risk of bad actors pulling a Columbus and just saying, “We’ve arrived!” while relying on the ‘unknowable’ nature of these things to obfuscate and reduce argument

    I fully admit to being no expert on the topic, but as someone who has done the reading, watched the advancements, and experimented with the tech, I remain more skeptical than ever. I will believe it when I see it and not one second before.


  • *deep breath* OK, here we go: Hard NOOOOOOOOOO.

    First, let’s start with the two different schools of AI: symbolic and connectionist.

    When talking about modern implementations of AI, mostly generative models and LLMs, we’re talking about connectionist, or neural network, approaches. A good way to think about these is the Chinese Room argument, which I first read about in Peter Watts’ Blindsight (just a fun sci-fi, first-contact book, check it out sometime).

    “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”

    It’s worth reading the Stanford Encyclopedia article for some of the replies, but we’ll say that the room operator or the LLM does not have a direct understanding, even if some representation of understanding is produced.

    On the other hand, symbolic AI has been in use for decades for extremely narrow approaches. Take a look at any game-playing AI, for example, something like StackRabbit for Tetris or Yosh’s delightful Trackmania-playing AI. Or for something more scientific, animal pose tracking like SLEAP.

    Gary Marcus makes an argument for a merging of the two into something called neurosymbolic AI. This certainly shows promise, but in my mind there are two big problems with this:

    1. The necessary symbolic algorithms that the connectionist models invoke are still narrow and would likely need time and focused development to plug into the models and,
    2. The chain-of-thought reasoning of LLMs has been shown to be fragile and exceptionally poor at generalization (see “Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens”). This is what would be required to properly parse data and hand it off to a more symbolic approach.

    (I feel like I had more articles I wanted to link here, as if anyone was already going to read all that. Possible edits with more later …)


    So why are there so many arguments for sentience and super-intelligence? Well, first and most cynically: manipulation. Returning to that first article, one of the big cons of connectionist AI is that it’s not very interpretable; it’s a black box. Look at Elon Musk’s Grok and the recent MechaHitler episode. How convenient that you can convince people your AI is “super smart” and can digest all this data to arrive at the one truth, while putting your thumb on the scale to make it say what you want. Consider this in terms of the Chinese Room thought experiment. If the rulebook says to reply to the question, “Do you like dogs?” with the answer, “No, I hate them,” this does not reflect an opinion of the room operator nor any real analysis of data. It’s an obfuscated opinion someone wrote directly into the rulebook.
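
    To make that concrete, here’s a minimal sketch (my own toy, not anything from the linked articles) of a rulebook as a script. The function executing the rules holds no opinion and does no analysis of its own:

    ```sh
    #!/bin/sh
    # Toy "rulebook": every reply is a rule hard-coded by the rulebook's author.
    reply() {
      case "$1" in
        "Do you like dogs?") echo "No, I hate them" ;;  # the author's opinion, baked in
        *) echo "No rule matches that question" ;;      # no understanding, just lookup
      esac
    }

    reply "Do you like dogs?"  # prints the author's answer, not the operator's
    ```

    Swap the hand-written rules for billions of learned weights and the same point holds, except the thumb on the scale becomes much harder to spot.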

    Secondly, and perhaps a bit more charitably, they’re being duped. AI psychosis is the new hot phrase, but I wouldn’t go that far. “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con”, from July 4th, 2023 (!!!), does a good job of explaining the self-fulfilling nature of it. The belief isn’t reached after a careful weighing of evidence; it’s reached when a pre-formed hypothesis (the machine is smart) is validated by interpreting the output as true understanding. Or something.

    So again, WHY? Back to Gary Marcus and the conclusion of the previously linked article:

    “Why was the industry so quick to rally around a connectionist-only approach and shut out naysayers? Why were the top companies in the space seemingly shy about their recent neurosymbolic successes? Nobody knows for sure. But it may well be as simple as money. The message that we can simply scale our way to AGI is incredibly attractive to investors because it puts money as the central (and sufficient) force needed to advance.”

    Would this surprise you?

    People want you to believe that amazing things are happening fast, in the realm of the truly high-minded and even beyond that which is known! But remember, the burden of proof lies with them to demonstrate that the thing has happened, not that it could’ve happened just outside your understanding. Remain skeptical; I’ll believe it when I see it. Until then, it remains stupider than a parrot, because a parrot actually understands desire and intent when it asks for a cracker. EDIT: https://www.youtube.com/watch?v=zzeskMI8-L8


    Gary Marcus nails it again, massive respect for the dude: LLMs are not like you and me—and never will be.




  • The thing I remember most about the early internet was staking out your own weird little corners. There wasn’t much of any “everything” site yet, so you’d find the things that appealed to you and settle there.

    A lot of my early tastes in indie and experimental music were formed by the Music message board on GameFAQs. I was already going there for the walkthroughs and found my way to some of the under-populated, miscellaneous boards.

    You experienced meeting people with names (even if just pseudonyms) and ideas that weren’t just blended into an algorithmic slurry.

    It’s why I like Lemmy, I can feel a bit of that here. Still, I have a hard time surrendering things like Twitter, so I moved instantly to Bluesky, where I continue the trend …



  • LLMs are a tool, and all tools can be repurposed or repossessed.

    That’s just simply not true. Tools are usually quite specific in purpose, and oftentimes the tasks they accomplish cannot be undone by the same tool. A drill cannot undrill a hole. I’m familiar with ML (machine learning) and the many, many legitimate uses it has across a wide range of fields.

    What you’re thinking of, I suspect, is a weapon: a resource that can be wielded equally by and against each side. The pains inflicted on the common person by the devaluation of our art and labor can’t be inflicted back on the corpofascists; for them, that’s the point. They are the ones selling these tools to you, and you cannot defeat them by buying in. And I do very much mean the open source models as well. Waging war on their terms, with their tools and methods (repossessed as they may be), is still a losing proposition.

    By ignoring this technology and sticking our fingers in our ears, we are allowing them to reshape how the technology works, instead of molding it for our own purposes. It’s not going to go away, and thinking that is just as foolish as believing the Internet is a fad.

    Time will tell. How are your NFTs doing? (sorry, that was mean)

    The negative preconceived notion bias is really not helping matters.

    Guilty as charged, I’m pretty strongly anti-AI. But seriously, watch that ad and tell me the disorienting cadence of speech and the uncanny, overly detailed generated images look good. Most of us have seen what’s on offer, and we’re telling you: we’re tired.


    Look, I do apologize, I’m very much trying not to be overly aggro here or attack you in any way. But I think discussions about the religious overtones and belief systems of the Butlerian Jihad are exactly where we’re at.

    How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI

    This is a really interesting article. Gary Marcus is a lot more positive on AI than I am, I think, but that’s understandable given his background. If I do concede that some form of AGI is inevitable, I think we are within our rights to demand that it is indeed the tool we deserve, and not just snake oil.

    AI art still ugly, sorry not sorry.


  • Kind of really disagree with this video 😕

    I’ve only read the first two Dune novels, and that a while ago, so I’m poorly equipped to have this conversation, but the video focuses on the idea that fascists perpetuate the Butlerian Jihad to keep powerful tools of liberation out of the hands of the proletariat. You wouldn’t agree with a fascist, would you? While there may be some truth to this, it completely ignores the cause of the Jihad to begin with: it was in fact a rebellion by the people against those tools.

    Even taken at face value, the video seems to posit that because the fascists can’t be trusted, AI is indeed a powerful tool for liberation. I don’t see that as the case. It hardly needs to be said, but Dune is a sci-fi novel, the context of which does not currently apply to our real-world circumstances. AI is the tool of the fascists, used for oppression. I don’t think it can simply be repurposed for liberation; that’s a naive interpretation that ignores all of the actual ways in which the current implementations of AI work.

    Disgusting AI-generated ad for merch halfway through.

    EDIT: the point is further confounded by the fact that the BJ eliminated “computers, thinking machines, and conscious robots”, not simply AI. Many of those are tools that could empower people but that doesn’t mean you can just lump them together.


  • I don’t really have a concise answer, but allow me to ramble from personal experience for a bit:

    I’m a sysadmin that was VERY heavily invested in the Microsoft ecosystem. It was all I worked with professionally and really all I had ever used personally as well. I grew up with Windows 3.1 and just kept on from there, although I did mess with Linux from time to time.

    Microsoft continues to enshittify Windows in many well-documented ways. From small things like not letting you customize the Start menu and taskbar, to things like microstuttering from all the data it’s trying to load over the web, to the ads it keeps trying to shove into various corners. A million little splinters that add up over time. Still, I considered myself a power user, someone able to make registry tweaks and PowerShell scripts to suit my needs.

    Arch isn’t particularly difficult for anyone who is comfortable with OSes and has excellent documentation. After installation it is extremely minimal, coming with a relatively bare set of applications to keep it functioning. Using the documentation to make small decisions for yourself like which photo viewer or paint app to install feels empowering. Having all those splinters from Windows disappear at once and be replaced with a system that feels both personal and trustworthy does, in a weird way, kind of border on an almost religious experience. You can laugh, but these are the tools that a lot of us live our daily lives on, for both work and play. Removing a bloated corporation from that chain of trust does feel liberating.


    As to why Arch in particular? I think it’s just that level of control. I admit it’s not for everyone, but again, if you’re at least somewhat technically inclined, I absolutely believe it can be a great first distro, especially for learning. Ubuntu has made some bad decisions recently, but even before that, I always found myself tinkering with every install until it became some sort of Franken-Debian monster. And I like pacman way better than apt, fight me, nerds.
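
    For anyone curious what those small decisions look like in practice, a few everyday pacman commands (the package name is just an example, pick your own from the repos):

    ```sh
    sudo pacman -Syu            # sync the repos and update the whole system
    pacman -Ss "image viewer"   # search the repos while you decide
    sudo pacman -S loupe        # install the photo viewer you settled on
    pacman -Qi loupe            # inspect what's installed, and why
    ```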





  • Protontricks can help for some games. Personally, I used it to install Openplanet for Trackmania, which doesn’t have any sort of explicit Linux support specified.

    What Protontricks does is allow you to run installation files within the context of a Steam game, as you mentioned. Simply launch Protontricks, select the game you’re trying to modify, and it will mount it properly for you. Then choose “Run an arbitrary executable (.exe/.msi/.msu)” and proceed to run the installer as you would normally.

    Sometimes the path can still be a bit janky. For example, when Openplanet wanted to install to the Trackmania directory as mounted through Protontricks, I had to specify: Z:\home\<USERNAME>\.steam\steam\steamapps\common\Trackmania.
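
    If you prefer the terminal, Protontricks also ships a CLI that does the same thing (the app ID below is a placeholder, look yours up with the search command first):

    ```sh
    protontricks -s Trackmania                         # find the game's Steam app ID
    protontricks-launch --appid <appid> Installer.exe  # run an installer inside that game's prefix
    ```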


  • The Safeways here in WA (at least in parts) have shifted from the old weight-based system(?) to some new AI/camera system. It gets upset if you move incorrectly in front of it because it thinks you may have bagged something you hadn’t scanned yet.

    Last time I went shopping I got stuck waiting for 5+ minutes when the machine flagged me and there wasn’t any available staff to review it with me. When the manager finally came over, we had to watch the video capture of me scanning (love the privacy invasion) and then she counted the items in my bag “just to make sure”. Afterwards she stood behind me and watched me finish scanning “in case it happens again”. Whatever. This feels neither efficient nor convenient. It feels like something else.