Lvxferre [he/him]

I have two chimps within, called Laziness and Hyperactivity. They smoke cigs, drink yerba, fling shit at each other, and devour the faces of anyone who comes close to them.

They also devour my dreams.

  • 0 Posts
  • 357 Comments
Joined 2 years ago
Cake day: January 12th, 2024




  • By “moderation” I don’t mean the unpaid volunteers taking care of individual subreddits. I mean the Reddit employees and/or tools enforcing site-wide rules, answering directly to the administrators and/or Reddit Inc.

    In other words, I think Reddit is implementing some automated enforcement of global rules, through LLM bots or similar, and since the bots don’t really understand what users say, they’re fucking it up all the time.







  • I’ve interacted with k0e3 in the past; they’re no LLM. Even then, a quick profile check shows it. But you didn’t check it, right? Of course you didn’t; it’s easier to vomit assumptions and re-eat your own vomit, right?

    And the comment’s “tone” isn’t even remotely close to typical LLM output, dammit. LLMs avoid words like “bullshit”, avoid contracting “it is not” into “it’s not” (instead of “it isn’t”), and avoid writing in first person. The only thing resembling LLM output is the em dash usage—but there are a thousand potential reasons for that.

    (inb4 assumer claims I’m also an LLM because I just used an em dash and listed three items.)





  • “You don’t get it.”

    I do get it. And that’s why I’m disdainful towards all this “simulated reasoning” babble.

    “In the past, the brick throwing machine was always failing its target and nowadays it is almost always hitting near its target.”

    Emphasis mine: that “near” is a sleight of hand.

    It doesn’t really matter if it’s hitting “near” or “far”; in both cases someone will need to stop the brick-throwing machine, get into the construction site (as if building a house manually), place the brick in the correct location (as if building a house manually), and then resume operations as usual.

    In other words, “hitting near the target” = “failure to hit the target”.

    And it’s obvious why the approach is wrong: the very idea that an auto-builder should throw bricks is silly. It should detect where each brick should be placed, and lay it down gently.

    The same thing applies to those large token* models; they won’t get anywhere close to reasoning, just like a brick-throwing machine won’t get anywhere close to being an automatic house builder.

    *I’m calling it “large token model” instead of “large language model” to highlight another thing: those models don’t even model language fully, except in the brains of functionally illiterate tech bros who think language is just a bunch of words. Semantics and pragmatics are core parts of a language; you don’t have language if utterances don’t have meaning or purpose. The nearest LLMs get to that is plopping in some mislabelled “semantic supplement” - because it’s a great red herring (if you mislabel something, you’re bound to get suckers confusing it with the real thing, and saying “I dun unrurrstand, they have semantics! Y u say they don’t? I is so confusion… lol lmao”).

    “It depends on how good you are asking the machine to throw bricks (you need to assume some will miss and correct accordingly).”

    If the machine relies on you to be an assumer (i.e. to make shit up, like a muppet), there’s already something wrong with it.

    “Eventually, brick throwing machines will get so good that they will rely on gravitational forces to place the bricks perfectly and auto-build houses.”

    To be blunt, that stinks of “wishful thinking” from a distance.

    As I implied in the other comment (“Can house construction be partially automated? Certainly. Perhaps even fully. But not through a brick-throwing machine.”), I don’t think reasoning algorithms are impossible; but it’s clear LLMs are not the way to go.


  • You don’t say.

    Imagine for a moment you had a machine that allows you to throw bricks a certain distance. This shit is useful, especially if you’re a griefer; but even if you aren’t, there are some corner cases for it, like transporting construction material over a distance.

    And yet whoever sold you the machine calls it a “house auto-builder”. He tells you that it can help you to build your house. Mmmh.

    Can house construction be partially automated? Certainly. Perhaps even fully. But not through a brick-throwing machine.

    Of course trying to use the machine for its advertised purpose will go poorly, even if you only delegate brick placement to it (and still build the foundation, add cement, etc. manually). You might economise a bit of time when the machine happens to throw a brick in the right place, but you’ll waste a lot of time cleaning up broken bricks, or replacing them. But it’s still being sold as a house auto-builder.

    But the seller is really, really, really invested in this auto-construction babble. Because his investors gave him money to create auto-construction tools. And he keeps babbling about how “soon” we’re going to get fully automatic house building, how it’s an existential threat to builders, and all that babble. So he tweaks the machines to include “simulated building”. All it does is tweak the force and aim of the machine, so it’s slightly less bad at throwing bricks.

    It still does not solve the main problem: you don’t build a house by throwing bricks. You need to place them. But you still have some suckers saying “haha, but it’s a building machine lmao, can you prove it doesn’t build? lol”.

    That’s all that “reasoning” LLMs are about.



  • It’s completely off-topic, but:

    We used to have a rather large sisal fibre mat/rug at home that Siegfrieda (my cat) used to scratch. However, my mum got some hate boner against that mat and replaced it with an actual rug. That’s when Frieda decided she’d hop onto the sofa and chairs and scratch them.

    We bought her a scratching post - and she simply ignored it. I solved the issue by buying two smaller sisal mats and placing them strategically in places where Frieda hangs around. And then slapping her butt every time she used them, for positive behaviour reinforcement (“I’m pet when I scratch it! I should scratch it more!”).

    I’m sharing this to highlight that it’s also important to recognise each individual cat has preferences that might not apply to other cats. She wanted a horizontal surface to scratch, so no amount of scratching posts would have solved it.