• 1 Post
  • 200 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • This is why work/life balance is so important. I wouldn’t ever call myself “well-off” but I don’t have kids and my job allows me ample time off to play games and watch movies and shit.

    Neither do they! They aren’t workaholics, they’re homebodies who work the least they can!

    It’s just that the workplaces are shit. One went back to mandated RTO for no reason, even though much of the work is overseas at odd hours; the company’s literally trying to make employees miserable so they quit without severance. The other is work-from-home, but with enough pointless meetings and complete workplace dysfunction to eat up all their energy.

    And these seem like well above average jobs.



  • +1 to literally everything.

    Fuck brand recognition or loyalty, fuck development talent, fuck community building, fuck long-term strategy, we can realize a gain right now by sowing half the planet with salt, so that’s what we’re going to do. So what is there for people to buy?

    I wish this would fit on a bumper sticker.

    That noise you heard last week was Xbox’s death rattle. One out of the three mainstream home console platforms is an outright stupid idea to buy now.

    And wasn’t Sony the big risk of bowing out before? And then we got the Switch 2… It’s remarkable that Microsoft somehow made Xbox the least likely to survive.


  • Single data point: the young, working, well-off, gaming part of my family is just out of energy. It’s easier to watch a YouTube video than to game or watch TV before falling asleep and waking up for work. Seems like much of their circle is similar.

    As for myself, I’m going through a, uh, icky phase of life and am not really motivated to play unless it’s coop.

    …Maybe others are struggling similarly?


    Also, the games we do look at tend to be from indie to mid-size studios, with BG3 and KCD2 being the only recent exceptions.






  • Yeah, just paying for LLM APIs is dirt cheap, and they (supposedly) don’t scrape data. Again I’d recommend OpenRouter and Cerebras (quick sketch below)! And you get your pick of models to try from them.

    Even a Framework 16 is not good for LLMs TBH. The Framework desktop is (as it uses a special AMD chip), but it’s very expensive. Honestly the whole hardware market is so screwed up, hence most ‘local LLM enthusiasts’ buy a used RTX 3090 and stick it in a desktop or server, as no one wants to produce something affordable apparently :/
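    For what it’s worth, here’s roughly what the API route looks like, as a minimal sketch using the OpenAI Python client pointed at OpenRouter’s OpenAI-compatible endpoint (the model slug and key are placeholders, swap in whatever you pick from their catalog):

    ```python
    # Minimal sketch: OpenRouter exposes an OpenAI-compatible endpoint,
    # so the stock OpenAI client works. Model slug and key below are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible API
        api_key="YOUR_OPENROUTER_KEY",            # your OpenRouter key
    )

    response = client.chat.completions.create(
        model="qwen/qwen3-30b-a3b",  # placeholder: pick any model from openrouter.ai/models
        messages=[{"role": "user", "content": "Give me three coop-friendly indie game picks."}],
    )
    print(response.choices[0].message.content)
    ```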






  • I don’t understand.

    Ollama is not actually Docker, right? It runs the same llama.cpp engine; it’s just embedded inside the wrapper app, not containerized. It has a Docker preset you can use, yeah.

    And basically every LLM project ships a Docker container. I know for a fact llama.cpp, TabbyAPI, Aphrodite, Lemonade, vLLM and SGLang do. It’s basically standard. There are all sorts of wrappers around them too.

    You are 100% right about security though, in fact there’s a huge concern with compromised Python packages. This one almost got me: https://pytorch.org/blog/compromised-nightly-dependency/

    This is actually a huge advantage for llama.cpp, as it’s free of Python and external dependencies by design. This is very unlike ComfyUI, which pulls in a gazillion external repos. Theoretically the main llama.cpp git could be compromised, but it’s a single, very well-monitored point of failure, and literally every “outside” architecture and feature is implemented from scratch, making it harder to sneak stuff in.


  • OK.

    Then LM Studio, with Qwen3 30B at IQ4_XS and low-temperature MinP sampling (quick sketch at the end of this comment).

    That’s what I’m trying to say though: there is no one-click solution, that’s kind of a lie. LLMs work a bajillion times better with just a little personal configuration. They are not magic boxes, they are specialized tools.

    Random example: on a Mac? Grab an MLX distillation, it’ll be way faster and better.

    Nvidia gaming PC? TabbyAPI with an exl3. Small GPU laptop? ik_llama.cpp. APU? Lemonade. Raspberry Pi? That’s important to know!

    What do you ask it to do? Set timers? Look at pictures? Cooking recipes? Search the web? Look at documents? Do you need it fast, or accurate?

    This is one reason why ollama is so suboptimal; the other is just bad defaults (Q4_0 quants, 2048 context, no imatrix or anything outside GGUF, bad sampling last I checked, chat template errors, bugs with certain models, I could go on). A lot of people just try “ollama run,” I guess, then assume local LLMs are bad when it doesn’t work right.
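    For reference, “a little personal configuration” can be as small as this sketch against LM Studio’s local OpenAI-compatible server (it defaults to port 1234; the model identifier is whatever LM Studio lists for your quant, and the min_p pass-through is an assumption, so set MinP in the UI if the API ignores it):

    ```python
    # Sketch only: LM Studio serves an OpenAI-compatible API on localhost:1234 by default.
    # The model name is a placeholder; min_p pass-through via extra_body is an assumption.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is unused locally

    response = client.chat.completions.create(
        model="qwen3-30b-a3b-iq4_xs",  # placeholder: use the identifier LM Studio shows for your quant
        messages=[{"role": "user", "content": "Summarize this week's meal plan."}],
        temperature=0.2,               # low temperature, per the suggestion above
        extra_body={"min_p": 0.05},    # MinP sampling, if the backend accepts it over the API
    )
    print(response.choices[0].message.content)
    ```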



  • brucethemoose@lemmy.world to Selfhosted@lemmy.world: I've just created c/Ollama!

    TBH you should fold this into localllama? Or open source AI?

    I have very mixed (mostly bad) feelings on ollama. In a nutshell, they’re kinda Twitter attention grabbers that give zero credit/contribution to the underlying framework (llama.cpp). And that’s just the tip of the iceberg, they’ve made lots of controversial moves, and it seems like they’re headed for commercial enshittification.

    They’re… slimy.

    They like to pretend they’re the only way to run local LLMs and blot out any other discussion, which is why I feel kinda bad about a dedicated ollama community.

    It’s also a highly suboptimal way for most people to run LLMs, especially if you’re willing to tweak.

    I would always recommend Kobold.cpp, TabbyAPI, ik_llama.cpp, Aphrodite, LM Studio, the llama.cpp server, SGLang, the AMD Lemonade server, or any number of other backends over it. Literally anything but ollama.


    …TL;DR I don’t like the idea of focusing on ollama at the expense of other backends. Running LLMs locally should be the focus of the community, not ollama specifically.