

I do it with a gluetun container (more versatile), zero issues, but you can just use mainline WireGuard as an interface if you prefer; that also works fine on Bazzite.
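For reference, a minimal gluetun compose sketch — the provider, key, and country values here are placeholders, and the qbittorrent service is just an example of something you'd route through it (check gluetun's wiki for your provider's exact variables):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN                      # gluetun needs to manage the tunnel interface
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # placeholder, set to your provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=xxxx     # from your provider's WireGuard config
      - SERVER_COUNTRIES=Netherlands   # placeholder

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"    # all traffic rides the VPN; if gluetun dies, so does connectivity
```

The `network_mode: "service:gluetun"` line is what gives you the kill switch for free: the app container has no network of its own.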
Yarr! My experience has been stellar ;) (Gaben: it’s a service problem…)
ThinkPads have long had first-tier Linux support; in fact, many models have shipped with Linux for at least a decade (?), and checking for that is a really good way to be sure. But you’re going to be fine with the W, P, T, and X lines, where many enthusiast hands make light work. They were deployed (and might still be) to Red Hat kernel devs for a long time, which helps things along. Fingerprint drivers tend to be proprietary and hit-or-miss, but passwords work.
Honestly, learning to install Linux yourself and configure it to your liking is, imo, a really important path to learning, and you’re likely doing yourself a disservice avoiding it. It’s part of the avoidance of vendor lock-in you want. Installation is surprisingly easy now. Start with something simple (Mint is often recommended these days), find a decent, recent YouTube walkthrough, and you’ll probably be up and running in an hour. Find the apps you need for your workflow (which will take considerably longer). Get familiar with the terminal. The best thing you can do after that is burn it down and install a new distro, leaving any mistakes behind and keeping your list of apps. Arch if you want to get really deep into it, or Fedora / Bazzite are good choices and very stable. Best of luck.
Perhaps not saved, but I’d venture it’s the most significant nail in the coffin of the scientific publishing mafia so far, pursued with integrity and honor. The rise of open publishing that followed is very telling, and in my mind directly attributable to Alexandra’s work and its popularity; they know they need to adapt or (probably and) die.
Still need to work on the publish or perish mentality, getting negative results published, and getting corporate propaganda out of the mix, to name a few.
You can cycle the smaller drives to cold backup, that’s not a waste. You do have backups, which RAID is not, right?
Sure, it works fine for inference with tensor parallelism; USB4 / Thunderbolt 4/5 is a better bet (40 Gbit+ and already there) than Ethernet (see distributed-llama). It’s trash for training / fine-tuning, though, which needs higher inter-GPU bandwidth, or better yet a single GPU with more VRAM.
Seems like data integrity is your highest priority, and you’re doing pretty well; the next step is keeping a copy offsite. It’s the 3-2-1 backup strategy: 3 copies, 2 media (used to mean CDs etc., but now think offline drives), 1 offsite (in case of fire, meteor strike, etc.), so look to that and stash a copy at a friend’s or something.
In your case I’d look at getting some online storage to fill the offsite role while you’re overseas (paid, probably, but a year of 1 or 2 TB is quite reasonable). That leaves you with no pressure on the self-hosting side: just Tailscale in, muck around and have fun, and if something breaks, no harm done, data safe.
I’ve done it for what seems like forever and I’d still be worried about leaving a system out of physical control for any extended period of time. At the very least, having someone to reboot it if connectivity or power fails will be invaluable, but talking them through a broken update is another thing entirely, and you shouldn’t make that a critical necessity; too much stress.
I say go for the desktop for the grunty work and pick up an older ThinkPad for the mobile use case, or just remote in with your MacBook. I have a T580 (last of the dual batteries; infinite battery life, baby). It works an absolute treat on Linux, with the next-best build quality to a MacBook but with a repair manual and massive upgradeability.
Would’ve sworn Potatohead/Voldemort was a caretaker and they’d switch for the election. Instead the shit stain is neck and neck (according to Murdoch polls). Wtf is wrong with this country, we see Trump and go ‘hold my beer’. May not be the worst timeline, but it ain’t good.
I run a gluetun docker client-side (actually two, one local and one through Singapore), which is generally regarded as pretty damn bulletproof kill-switch-wise. The arr stack etc. uses this network exclusively. This means I can use FoxyProxy to switch my browser up on the fly, bind things to tun0/tun1, etc., and still have direct connections as needed; it’s pretty slick.
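A sketch of that two-tunnel setup: gluetun has a built-in HTTP proxy (`HTTPPROXY=on`, listening on 8888 inside the container), so exposing each instance on a different host port gives FoxyProxy two targets to flip between. Provider and country values are placeholders, and the WireGuard credentials are omitted for brevity:

```yaml
services:
  vpn-local:
    image: qmcgaw/gluetun
    cap_add: [NET_ADMIN]
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # placeholder
      - HTTPPROXY=on                   # gluetun's built-in HTTP proxy
    ports:
      - "8888:8888"                    # browser proxy #1 (local exit)

  vpn-sg:
    image: qmcgaw/gluetun
    cap_add: [NET_ADMIN]
    environment:
      - VPN_SERVICE_PROVIDER=mullvad   # placeholder
      - SERVER_COUNTRIES=Singapore
      - HTTPPROXY=on
    ports:
      - "8889:8888"                    # browser proxy #2 (Singapore exit)
```

Point FoxyProxy at `localhost:8888` and `localhost:8889` and switch per-site; anything not proxied still goes out directly.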
Don’t sleep on switching to nvme.
The old adage is never use v x.0 of anything, which I’d expect to go double for data integrity. Is there any particular reason ZFS gets a pass here (speaking as someone who really wants this feature)? TrueNAS isn’t merging it for a couple of months yet, I believe.
Yup (although minutes seems long, and depending on usage, weekly might be fine). You can also combine it with updates, which require going down anyway.
You’ll be wanting `sudo ostree admin pin 1`, seeing as 0 was broken. Double-check with `rpm-ostree status`.
Then proceed to `rpm-ostree update`; if that does nothing, it means 0 is already up to date. Personally I’d just wait for a new update using the working deployment, but you can blow away 0 and get it again if you’re keen.
Basically, you want to shut down the database before backing up. Otherwise, your backup might be mid-transaction, i.e. broken. If it’s Docker you can just `docker-compose down` it, back up, and then `docker-compose up`, or equivalent.
I get it, but I contend my suggestion would allow exactly that, without relying on the opinion of some internet rando. YMMV.
Find a list of books you like, find an entry that interests you, go to anna’s archive. Why overcomplicate it?
Sure is; use a VPN, obviously.
I just use the Save to Zotero extension in Firefox and back up the directory; works fine. Maybe you’re overthinking?
Big if true!