• 1 Post
  • 406 Comments
Joined 2 years ago
Cake day: August 4th, 2023

  • For anyone who mentions, acknowledges, or even doesn’t sufficiently attempt to ignore/deny my existence or influence in the world, anything they eat tastes and feels like raw, unseasoned egg whites for a month. And I’ll put knowledge of that fact in everyone’s minds.

    Then I can do whatever I want to annoy the fuck out of people and they have to ignore me. Wet Willie Elon Musk in public while looking him right in the eye. Replace the audio at a Kid Rock concert with Baby Shark for the whole show and everyone has to pretend it was a typical Kid Rock concert. Draw dicks all over Trump’s face with a sharpie during a presidential address on live, national TV. Find every HOA president and kill the grass in their front yard in the shape of Beavis and Butt-Head.

    I wouldn’t be unreasonable. A wry, approving smile here. Stopping and reading an obscene message I planted before realizing it was me. Stuff like that gets a pass. I might even turn a blind eye to an involuntary case of the giggles brought on by my hijinks, particularly if it helps the vibe. Also, anyone under 15 is exempt from the whole egg whites thing and can laugh their asses off and point with impunity.











  • People are spending all this time trying to get good at prompting and feeling bad because they’re failing.

    This whole thing is bullshit.

    So if you’re a developer feeling pressured to adopt these tools — by your manager, your peers, or the general industry hysteria — trust your gut. If these tools feel clunky, if they’re slowing you down, if you’re confused how other people can be so productive, you’re not broken. The data backs up what you’re experiencing. You’re not falling behind by sticking with what you know works.

    AI is not the first technology to do this to people. I’ve been a software engineer for nearly 20 years now, and I’ve seen this happen with other technologies: people convinced a tool is making them super productive, while others don’t get the same gains and internalize it, thinking they’re to blame rather than the software. The Java ecosystem has been full of shitty technologies like that for most of the time Java has existed. Spring is probably one of the most harmful examples.


  • Just my guess here, but…

    The desktop/laptop sort of form factor is associated in people’s minds with unlocked bootloaders. People expect to be able to install Linux on them if they want to. Tablets, game systems, and other sorts of consumer electronics, not so much. I’m thinking Microsoft will do what it can to push hardware manufacturers and the software industry as a whole toward the kinds of devices consumers already expect to be locked down, like tablets or “streaming”-only game systems. That way, the locked bootloader will prevent folks from switching to Linux.





  • TootSweet@lemmy.world to Asklemmy@lemmy.ml: What is Lemmy’s problem with AI?

    So many places I could start when answering this question. I guess I’ll just pick one.

    It’s a bubble. The hype is ridiculous. There’s plenty of that hype in your post. The claims are that it’ll revolutionize… well basically everything, really. Obsolete human coders. Be your personal secretary. Do your job for you.

    Make no mistake. These narratives are being pushed for the personal benefit of a very few people at the expense of you and virtually everyone else. Nvidia and OpenAI and Google and IBM and so on are using this to make a quick buck. Just like TY capitalized on (and encouraged) a bubble back around the turn of the millennium that we now look back on with embarrassment.

    In reality, about the only thing AI is really effective as is a gimmicky “toy” that entertains until the novelty wears thin. There’s very little real-world application. LLMs are too unreliable at getting facts straight, and too prone to making up BS, to be trusted for any real-world use case. Image-generating “AI”s like Stable Diffusion produce output (and by “produce output” I mean rip off artists) that all has a similar, fakey appearance, with major, obvious errors that generally identify it instantly as low-effort “slop”. Any big company that claims to be using AI in any serious capacity is lying either to you or to themselves. (Possibly both.)

    And there’s no reason to think it’s going to get better at anything, “AI industry” hype notwithstanding. ChatGPT is not a step in the direction of general AI. It’s a distraction from any real progress in that direction.

    There’s a word for selling something based on false promises. “Scam.” It’s all to hoodwink people into giving them money.

    And it’s convincing dumbass bosses who don’t know any better. Our jobs are at risk. Not because AI can do your job just as well or better. But because your company’s CEO is too stupid not to fall for the scam. By the time the CEO gets removed by the board for gross incompetence, it’ll be too late for you. You will have already lost your job by then.

    Or maybe your CEO knows full well AI can’t replace people and is using “AI” as a pretense to lay you off and replace you with someone they don’t have to pay as much.

    Now before you come back with all kinds of claims about all the really real real-world applications of AI, understand that that’s probably self-deception and/or hype you’ve gotten from AI grifters.

    Finally, let me back up a bit. I took a course in college, probably back in 2006 or so, called “Introduction to Artificial Intelligence”. In that course, I learned about, among other things, the “A* algorithm”. If you’ve ever played a video game where an NPC or enemy followed your character, the A* algorithm or some slight variation on it was probably at play. The A* algorithm is completely unlike LLMs, “generative AI”, and whatever other buzzwords the AI grifting industry has come up with lately. It doesn’t involve training anything on large data sets. It doesn’t require a powerful GPU. When it gives you a particular output, you can examine the algorithm to understand exactly why it did what it did, unlike LLMs, whose answers can’t be traced back to the specific training data that produced them. The A* algorithm has been known and well understood since 1968.

    That kind of “AI” is fine. It’s provably correct and has utility. Basically, it’s not a scam. It’s the shit that people pretend is the next step on the path to making a Commander Data – or the shit that people trust blindly when its output shows up at the top of their Google search results – that needs to die in a fire. And the sooner the better.

    But then again, blockchain is still plaguing us after like 16 years. So I don’t really have a lot of hope that enough average people are going to wise up and see the AI scam for what it really is any time soon.

    The future is bleak.


  • I’m a big fan of jq. It’s a domain-specific language for manipulating JSON data.
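
    For example, a one-liner like this pulls the “name” field out of every object in a JSON array (the filename and field are made-up placeholders):

        jq -r '.[] | .name' people.json   # people.json and .name are hypothetical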

    ImageMagick is like ffmpeg but for images.
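
    For instance, something like this resizes a JPEG to fit within 800x600 and recompresses it (the filenames are invented; on ImageMagick 7 the command is “magick” rather than “convert”):

        convert photo.jpg -resize 800x600 -quality 85 photo_small.jpg   # filenames are hypothetical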

    inotify-tools has command-line utilities that can be used in a Bash script or a Bash one-liner to make arbitrary things “happen” when something “happens” to a file or directory. (When the file is opened or written to or renamed or whatever.)
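
    A minimal sketch of that idea, with a made-up directory, that prints a line every time a file in it finishes being written:

        # watch /some/dir (hypothetical path) and react whenever a file is closed after writing
        inotifywait -m -e close_write /some/dir | while read -r dir event file; do
            echo "$file was updated"
        done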

    I probably should mention rsync. It’s like a Swiss Army knife for copying files from one place to another. And it supports “keeping files synchronized” between two locations.
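
    For example, this mirrors a directory to another machine and deletes anything on the far side that no longer exists locally (the host and paths are invented):

        rsync -av --delete ~/documents/ user@backuphost:/backups/documents/   # host and paths are hypothetical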

    Of course, there’s tons of stuff you pretty much can’t talk about Bash scripting without mentioning: sed, awk, grep, find, etc.
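
    E.g., a typical little pipeline gluing a couple of those together (the paths and pattern are made up): list config files changed in the last day that mention “timeout”:

        find /etc -name '*.conf' -mtime -1 -print0 | xargs -0 grep -l 'timeout'   # paths and pattern are hypothetical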

    Also, I totally relate about the terminal giving more dopamine. I kinda just hate going on a point-and-click adventure to do things like image editing or whatever. To the point that I’ve written a whole-ass domain-specific language to do what I want rather than use Gimp. (And I’m working on another whole-ass domain-specific language to do a traditionally-GUI-app sort of task.)