

Ironic that, by upvoting this comment in agreement, I’m doing the opposite of what you advocate for…
Intentionally and knowingly calling an MTF trans person a man is transphobia. Dunno about jail, but I’d be down to have legally enforced punishment for that. To be fair, that should probably cover all cases of (intentionally and knowingly) misgendering people, in a similar fashion to defamation.
If they’re ignored files, setting them up locally won’t put them in the repo. If you put a symlink into the repo, fixing it for your setup will register as a change within git, which can cause annoyance and even problems down the line.
Literally the last two RSS items right now are about how splitting packages will require intervention for some users (Plasma and linux-firmware).
Maybe a nitpick, but the linux-firmware situation is different: it’s not about needing to install extra packages (they turned the existing package into a metapackage or whatever it’s called), but about that coinciding with some changes that can break the upgrade process and require you to force-uninstall a package before proceeding.
But yeah, good point about Plasma; the only differences I can even think of are that Plasma is probably more popular, and definitely more important to have working.
While I agree with the rest, does Lutris have backup options? I never actually checked, but I don’t remember seeing anything like that.
This feels like surreal memes before they turned into almost entirely misspellings and other repeat jokes.
Overproduce to cover everybody’s needs, and if you want to use that overproduction to cover somebody else’s problems, make that the new target and produce over it to keep a safety margin. Otherwise you’re just going to hide the problem and run into trouble when production dips.
Not saying this is the right approach, but this is the idea I’m getting from the thread. I feel like it might not work with the economics of supply and demand combined with capitalistic greed, but if a margin exists as safety, allocating it removes that safety.
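A tiny arithmetic sketch of that idea, with made-up numbers (the demand figures and the 10% margin are purely illustrative):

```python
demand = 100                          # units everybody currently needs
margin = 0.10                         # safety margin against production dips

production = demand * (1 + margin)    # 110 units: 10 spare

# If the spare units are permanently allocated to new needs without
# raising the target, the margin is gone - any dip means a shortage:
new_demand = 110                      # the old surplus is now part of the need

# The proposal above: treat the new total as the target and
# overproduce relative to it, restoring the safety margin.
production = new_demand * (1 + margin)   # 121 units: 11 spare again
```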
I think the point is that if you do that, then you’re just increasing the amount of people in the equation, and if they become dependent on you and the production drops, somebody will be lacking food again.
I had the impression cloud was about the opposite: detaching your server software from physical machines you manage, and instead paying a company to provide more abstracted services. The ideal is high scalability, with images that can be deployed en masse independently of the specifics of where they’re hosted and on what hardware. Pay for “storage”, for example, instead of renting a machine with specific hardware and software.
I think the trick might be that nothing stops the hardware from using more than 32 bits to represent physical addresses, and the kernel maps memory for processes in the first place. So as long as each process individually can work within the 32-bit address space, the kernel can still allocate that extra memory across processes.
I do suppose that on some level the architecture, as in the CPU and/or motherboard, needs to support addressing memory with more than 32 bits, which is also what somebody else replied; that seems to have been available since 1999 on both AMD and Intel.
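For what it’s worth, the mechanism being described sounds like PAE (Physical Address Extension). A quick back-of-the-envelope comparison, assuming the classic 36-bit PAE physical address width:

```python
# Per-process limit: a 32-bit pointer addresses at most 2**32 bytes.
per_process = 2**32                   # 4 GiB of virtual address space

# PAE widens *physical* addresses to 36 bits, so the kernel can manage
# more total RAM than any single process can map at once.
total_physical = 2**36                # 64 GiB with classic PAE

print(per_process // 2**30, "GiB per process")   # 4 GiB per process
print(total_physical // 2**30, "GiB total")      # 64 GiB total
```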
I don’t know enough to say how accurate the numbers are, but the sentiment stands - if it’s a password you’re memorizing, a longer password will probably be better.
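A minimal sketch of the usual estimate, length × log2(alphabet size), assuming every symbol is chosen uniformly at random (memorized, human-chosen passwords typically have less entropy than this model suggests):

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy for `length` symbols chosen uniformly and
    independently from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# 8 characters drawn from all 94 printable ASCII symbols:
short_complex = entropy_bits(8, 94)    # about 52 bits
# 20 characters drawn from just the 26 lowercase letters:
long_simple = entropy_bits(20, 26)     # about 94 bits
```

Under this model, the long all-lowercase password comes out well ahead of the short "complex" one.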
Dual booting can be problematic: as mentioned, you’re messing with your partitions and could mess up your Windows partition, but Windows can also, unprompted, mess up your Linux bootloader. As long as you’re careful with partitions and know how to fix your bootloader from a live image, there’s no real issue, but it’s worth keeping in mind.
By the way, I recommend rEFInd for the bootloader when dual booting, it doesn’t require configuration and will detect bootable systems automatically.
A VM sounds like a good idea to try a few things out, but do keep in mind that performance can suffer, and you might especially run into issues with things like GPU virtualization. If you want to properly verify whether things work, and work well enough, you’ll want to test them from a live system.
As a final note, you can give your VM access to your SSD/HDD - if you set that up properly, you can install and boot Linux inside a VM, and later switch to booting it natively. You still risk messing up your partitions in that case, but it can be nice to be able to look things up on your host system while setting up Linux in the VM.
The Steam version of Trackmania is quite weird - I looked for a way to pay for it through Steam for a while before resignedly going into the Ubisoft payment in the overlay… only to be directed to Steam for payment. I’m not sure it’s even possible to pay through Ubisoft when launching it from Steam.
I got the impression that the PolyMC situation was quite different, with that developer masking their views and doing a minority of the work; then, after one change made by the rest of the developers, they snapped and used their control over the repository to remove the other maintainers and take sole control of it.
I was aware of some shenanigans and hostility from PolyMC and never used it, but I got the impression there were no major outward signs before that happened?
Yes, but the issue is that the lemmy.ml maintainers are the Lemmy developers. The people who created the software, and continue maintaining it, are the same people who created the instance.
It’s the well-known question of whether you can, or should, separate the art from the artist. By funding the development of Lemmy in its current state, you’re supporting the people running an instance you disagree with. Unless you’re willing to take up the work to fork and continue development yourself, you can’t detach them; you either support both or neither.
Outer Wilds. The game isn’t very text-heavy, but what there is feels important and personal. With the way the story is told, it is quite possibly my favorite story overall. I don’t want to say too much, since knowledge is key in that game, but I would highly recommend it.
I’m not sure if this is what you mean, but I do want to clarify - the drivers in the repository are still proprietary drivers from Nvidia, just tested and packaged by the distribution maintainers; dkms is just some magic that lets them work with arbitrary kernels with minimal compilation. Unless you’re using nouveau, which I don’t think is ready for most uses.
I’d definitely recommend against using drivers downloaded from a website, on general principles.
custom kernels don’t work with the drivers from apt
Check if there’s a dkms version - I know that’s how it’s set up on Arch: if you’re using a non-standard kernel, you install that kernel’s headers, and dkms builds just the module for your kernel.
…Brave is just chromium by techbros, right?
If it’s already distorted, switching to a different distortion that’s area-preserving can still be an improvement.