I was indeed setting up Nvidia and CUDA for ML around 2018 and it was not as straightforward or easy as it is today. It was quite annoying and error-prone, at least for me, setting it up on my own for the first time.
The best insight I remember reading about security questions as MFA is to treat the answer as just another password. If you use a password manager, don’t feel forced to use actually true answers. The answer doesn’t have to be true, you just need to know it. Use a password manager and invent answers which you store there (example below). This is so much more secure than relying on the truth.
Edit: others mention the same thing.
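For example, you could generate a random “answer” and save it in your password manager next to the account (openssl is just one convenient way to do it, any generator works):
# 24 random bytes, base64-encoded, used as the fake answer to a security question
openssl rand -base64 24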
@demigodrick@lemmy.zip
Perhaps of interest? I don’t know how many bots you’re facing.
I feel you are a bit out of touch when the topic is specifically enshittification, which is grounded in the history of companies turning against their users and showing little good faith. It is also not something that spares open source projects (remember Bitwarden’s attempt?). So sure, I’m not going to deny that I’m making assumptions and that I am concerned it may one day happen. But it is grounded in reality, not some tinfoil hat stuff.
Edit: and the fact that Bitwarden did not eventually go through with it does not counter the fact that they intended to and tried. Sometimes companies back off, play the long game, and try to be more subtle about it.
There is no guarantee headscale can keep working the way it does or that it is allowed to keep existing.
Edit: FYI, headscale is not at all at feature parity with what Tailscale offers.
Congrats! Amazing project, exciting interface and you went the extra mile on the integration side with third parties. Kudos!
Edit: I’ll definitely have to try it out!
Perhaps give Ramalama a try?
Indeed, Ollama is going a shady route. https://github.com/ggml-org/llama.cpp/pull/11016#issuecomment-2599740463
I started playing with Ramalama (the name is a mouthful) and it works great. There are one or two more steps in the setup, but I’ve achieved great performance and the project makes good use of standards (OCI, Jinja, unmodified llama.cpp, from what I understand).
Go and check it out, they are compatible with models from HF and Ollama too.
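To give an idea of the workflow, it looks roughly like this (the model name and the ollama:// prefix are just examples from memory, check their docs for the exact syntax):
# pull a model from the Ollama registry and chat with it locally
ramalama pull ollama://tinyllama
ramalama run ollama://tinyllama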
Oh yes the 2.0 was also great! They added the rewind feature which is pretty cool.
Both of them can work on the Steam Deck, though not out of the box.
For Dirt you have to add a launch option, WINE_CPU_TOPOLOGY=4 (see the example after the link).
https://github.com/Open-Wine-Components/umu-protonfixes/issues/328
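In Steam’s launch options that would look something like this (the core list after the colon is my guess at the usual format, the linked issue has the exact value; %command% is needed so Steam still launches the game when you set an environment variable):
WINE_CPU_TOPOLOGY=4:0,1,2,3 %command%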
And for Dirt 2 you need to add a fix for Games for Windows Live (xlive), see the links and the rough steps below.
https://www.reddit.com/r/SteamDeck/comments/10ehkot/has_any_one_gotten_colin_mcrae_dirt_2_working/
https://github.com/ThirteenAG/Ultimate-ASI-Loader/releases/download/Win32-latest/xlive-Win32.zip
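If I remember right, the fix boils down to extracting the xlive.dll from that zip next to the game’s executable (the path below is just an example, adjust it to wherever the game is installed):
# drop the wrapper DLL into the game's install folder
unzip xlive-Win32.zip -d "/path/to/DiRT 2/"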
You’re a gem of a contributor, love the style and content and thank you for making this specifically for Lemmy, much appreciated! :)
Edit: and regarding what I’m playing lately, I’m playing Colin McRae Dirt from 2007, which I recently managed to get working on the Steam Deck. A fun racing game!
I find this to be great and am particularly comforted by the following:
Can my SteamOS Compatibility test results be worse than Deck Verified?
No. SteamOS Compatibility results will all be the same or higher than Steam Deck Verified results.
Funnily, my first thought was pressure from Microsoft to raise the price of the Steam version to make it a less attractive proposition compared to the Windows version, before I realized this was a tariff reaction. Goes to show how I imagine Microsoft’s powers and intentions in my mind :)
Sorry, I didn’t mean to sound condescending, but capacitors can indeed output their charge at extremely high rates while having terrible energy storage capacity. You would need an unreasonably large capacitor bank, but it is technically feasible, as that’s what CERN has. In this case, though, batteries are the more suitable option: they can be tuned between energy and power to fit the exact use case more appropriately.
Capacitors, lol
Isn’t that something you solve with snooze? Like set the alarm for the earlier time, set the snooze interval to 15 min and hit snooze until you want to wake up?
Remove unused conda packages and caches:
conda clean --all
If you are a Python developer, this can easily free several GB, or even tens of GB.
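If you want to check first what would be removed, I believe conda clean supports a dry run:
# preview only, nothing is deleted
conda clean --all --dry-run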
Quick note, in some countries you can get refurbished Steam Decks:
Would you be able to share more info? I remember reading about their issues with Docker, but I don’t recall whether they switched or what they switched to. What is it now?
I think the requested salary plays a big role. If someone asking 60k got rejected for salary misalignment on a role that typically pays 100k a year, I would be much more critical of the company.
It’s on the very first page, opposite the office server page, and they acknowledge the author does not exist and that it’s basically an ad for Windows Server.