• 1 Post
  • 405 Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • The real issue is devs not wanting to pay for hosting server-side anticheat.

    Or allowing self-hosted servers, with actual mods who just ban people who are being jerks, and basic anticheat tools shipped to them.


    Whatever the issue and solution, the current state of the gaming market still makes mass Linux gaming kind of impossible. Not because of anticheat games specifically so much as the OEM problem.




  • brucethemoose@lemmy.world to Programming@programming.dev · LLMs Are Not Fun
    7 days ago

    Mmmmm. Pure “prompt engineering” feels soulless to me. And you have zero control over the endpoint, so changes on their end can break your prompt at any time.

    Messing with logprobs and raw completion syntax was fun, but the US proprietary models took that away. Even sampling is kind of restricted now, and primitive compared to what’s been developed in open source.
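For flavor, the kind of control raw completion endpoints used to give you — the model's token logprobs, with your own sampling logic on top — can be sketched in a few lines. This is a toy illustration, not any particular API: the function name and logprob values are made up.

```python
import math
import random

def sample_from_logprobs(logprobs, temperature=1.0, top_k=0):
    """Sample a token index from raw logprobs with temperature and
    top-k — the sort of knob raw completion APIs used to hand you."""
    # Rank candidate tokens by logprob, optionally keeping only the top k.
    items = sorted(enumerate(logprobs), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        items = items[:top_k]
    # Softmax with temperature (max-subtracted for numerical stability).
    scaled = [lp / temperature for _, lp in items]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw from the resulting distribution.
    r = random.random()
    acc = 0.0
    for (idx, _), p in zip(items, probs):
        acc += p
        if r < acc:
            return idx
    return items[-1][0]
```

With `top_k=1` this degenerates to greedy decoding; open-source samplers build far more elaborate schemes (min-p, typical sampling, etc.) on exactly this kind of access.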


  • brucethemoose@lemmy.world to Programming@programming.dev · LLMs Are Not Fun
    7 days ago

    If you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time. Micromanaging them, watching to preempt slop and derailment, is frustrating and rage-inducing.

    Finetuning LLMs for niche tasks is fun. It’s explorative, creative, cumulative, and scratches a ‘must optimize’ part of my brain. It feels like you’re actually building and personalizing something, and teaches you how they work and where they fail, like making any good program or tool. It feels like you’re part of a niche ‘old internet’ hacking community, not in the maw of Big Tech.

    Using proprietary LLMs over APIs is indeed soul crushing. IMO this is why devs who have to use LLMs should strive to run finetunable, open-weight models where they work, even if they aren’t as good as Claude Code.

    But I think most don’t know they exist, or had a terrible experience with ollama’s terrible defaults and so assume that must be what the open-model ecosystem is like.
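To be concrete, local servers like llama.cpp's llama-server and vLLM speak the same OpenAI-compatible chat protocol, so pointing code at your own finetune is mostly a URL change. A minimal sketch — the model name, port, and `ask` helper are placeholders, and `min_p` is an example of a sampler parameter from the open-source ecosystem that most proprietary APIs don't expose:

```python
import json
import urllib.request

def chat_payload(model, prompt, temperature=0.7, min_p=0.05):
    """Build a request body for a local OpenAI-compatible endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "min_p": min_p,  # open-source sampler knob; ignored by most hosted APIs
    }

def ask(prompt, url="http://localhost:8080/v1/chat/completions"):
    # POST to a local llama.cpp / vLLM server (URL and model are placeholders).
    req = urllib.request.Request(
        url,
        data=json.dumps(chat_payload("my-finetune", prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because it's the same wire format, most existing OpenAI-client code works against it unchanged.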






  • CUDA is actually pretty cool, especially in the early days when there was nothing like it. And Intel/AMD attempts at alternatives have been as mixed as their corporate dysfunction.

    And Nvidia has long had a focus on other spaces, like VR, AR, dataset generation, robotics, “virtual worlds” and such. If every single LLM thing disappeared overnight in a puff of smoke, they’d be fine; a lot of their efforts would transition to other spaces.

    Not that I’m an apologist for them being total jerks, but I don’t want to act like CUDA isn’t useful, either.


  • Yeah I mean you are preaching to the choir there. I picked up a used 3090 because ROCm on the 7900 was in such a poor state.


    That being said, much of what you describe is just software obstinacy. AMD, for example, has had hardware encoding since early 2012, with the 7970. Intel Quick Sync has long been standard on laptops. It’s just that a few stupid proprietary bits never bothered to support it.

    CUDA is indeed extremely entrenched in some areas, like anything involving PyTorch or Blender’s engines. But there’s no reason (say) Plex shouldn’t support AMD, or older editing programs that use OpenGL anyway.
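For what it's worth, tapping AMD/Intel hardware encoding from software is often just an ffmpeg VAAPI invocation — no CUDA anywhere. A sketch of such a command line, built as an argument list; the function name, device path, and filenames are placeholders:

```python
def vaapi_transcode_cmd(src, dst, device="/dev/dri/renderD128"):
    """ffmpeg arguments for a VAAPI hardware transcode on AMD/Intel GPUs."""
    return [
        "ffmpeg",
        "-hwaccel", "vaapi",               # decode on the GPU
        "-hwaccel_device", device,         # DRM render node (placeholder path)
        "-hwaccel_output_format", "vaapi", # keep frames in GPU memory
        "-i", src,
        "-c:v", "h264_vaapi",              # encode on the GPU too
        "-c:a", "copy",                    # pass audio through untouched
        dst,
    ]
```

Something like `subprocess.run(vaapi_transcode_cmd("in.mkv", "out.mp4"), check=True)` would run it, assuming a VAAPI-capable driver is installed.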