

Friends don’t let friends run erasure coding on BTRFS.
Personally, I don’t run anything on BTRFS. I like having my data intact and I also want two parity drives in my pools.


For anyone working on or around stages:
Most sane production companies standardise on over-under. Even if you find some other method superior (nothing is), you’ll get thrown out headfirst if you don’t follow the standard. Having a tech fuck around with a non-compliant cable during a changeover is far too risky.
Should be noted that there are special cases. For example, thicccc cables (e.g. a 24-channel analog multicore) that have their own dedicated cases often get coiled in a figure-eight instead - easier to pull out, and you can use a smaller case. Thank god for digital audio.
(Also, when over-under is done correctly, you can throw the cable and it will pay out straight, without any internal stresses winding it up like a spring.)
Here I am, running separate Tailscale instances and a separate reverse proxy for like 15 different services, and that’s just one VM… All in all, probably 20-25 Tailscale instances on a single physical machine.
Don’t think about Tailscale like a normal VPN. Just put it everywhere. Put it directly on your endpoints, don’t route. Then lock down all your services to the tailnet and close any open ports to the internet.
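To make the ”lock down to the tailnet” part concrete, here’s a minimal sketch, assuming a hypothetical tailnet address of 100.64.0.12 (the same idea applies to whatever bind/listen setting your actual service has):

```python
# Minimal sketch (not a full setup): serve something only on the tailnet
# by binding to the machine's Tailscale address instead of 0.0.0.0.
# The address below is a placeholder -- get yours with `tailscale ip`.
from http.server import HTTPServer, SimpleHTTPRequestHandler

TAILNET_IP = "100.64.0.12"  # hypothetical 100.x.y.z address of this host

# Only peers on your tailnet can reach this; nothing listens on the
# public interface, so there is no port to expose to the internet.
server = HTTPServer((TAILNET_IP, 8080), SimpleHTTPRequestHandler)
print(f"Serving on http://{TAILNET_IP}:8080 (tailnet only)")
server.serve_forever()
```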


My NAS will stay on bare metal forever. That is one place where I really don’t want any added complications. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.
As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I’ll probably migrate to small per-service VMs once I get new hardware up and running.
Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)
So, what keeps me on bare metal? Keeping my ZFS pools safe. And beyond that, just staying away from the OCI ecosystem in general - the grass is far greener inside the normal package repositories.


Oh, have they started working on aviation grade test harnesses?
SQLite will rule our world for a long time, long after we are gone.


SQLite is one of the very few open source projects with a reasonable plan for monetisation.
Basically, squeeze regulated industries, hard.
I’m all for open source, but at some point developers should stop acting surprised when people use their work at the edges of the licence terms (looking at you, Mongo, Redis and Hashicorp). As for developers running projects in their free time, maybe start setting boundaries? The only reason companies aren’t paying is that they know they can get away with withholding money, because some sucker will step up and do it for free, ”for the greater good”. Stop letting them have it for free.
Looks like RedHat is kinda going in this direction (pay to get a paper trail saying a CVE number is patched), and they have basically always been squeezing regulated industries. Say what you want about that strategy, it’s at least financially viable long term. (Again, looking at you, Hashicorp, Redis, Mongo, Minio and friends)


Depending on what plugins and software OP runs, that might not be possible, or at least kinda annoying. The music production software industry loves to require phoning home at regular intervals for licensing.


It’s 2025. Any internet connected machine on any EOL OS or without updates applied in a timely manner should get nuked from orbit.
And that goes for all Linux and Android users out there too. Update your bloody phones.
I have a Windows 10 machine with firewalls, updates and antivirus all turned off, for a single specific software. Works fine, and will keep working fine for a long time, but that installation will never again see a route to the internet.


I know it’s possible to run music production on Linux, in fact it’s better than ever.
But:
Please stop recommending Linux to people who aren’t ready for it yet. Find the people who are, get them over. The rest will follow.


You can put it in the dishwasher to clean it. Just make sure to dry it and oil it a bit afterwards, otherwise it will rust. In most countries, this is covered by structured teaching in chemistry, contained within the concept of ”school”.


You can probably pay for a dishwasher.


For music production on a hobby level? Linux is not what you want.
The VST availability is abysmal. For a DAW, you can choose between Reaper and Ardour. Both are reasonably good, but without decent third party VSTs you’ll suffer. You won’t get iLok working, you won’t get any commercial plugins working. Your old project files won’t open.
Now, if you are exclusively working with Airwindows plugins (look it up!) in Reaper, you could get away with a Linux migration. Cakewalk and Ableton? Not a chance in hell.
Go buy a cheap used 16GB M1 Mac Mini. Music production stuff ”just works”. Given your config, looks like that could be within budget. Or upgrade your old machine to Windows 11, pick your poison.


Fine, take the structured approach to ”Linux”:
If that investment seems a bit steep, take only the last step: build a homelab and take a structured approach to any interesting subjects you encounter while doing that.


Structured approach to what? You don’t take a structured approach to a hammer, you use it as a tool to accomplish something.
”The Linux Programming Interface” is an excellent book, if you are interested in interacting with the Linux kernel directly, but somehow I doubt that’s what OP wants to do. I doubt OP knows what he wants to do.
Besides, please note that I did encourage taking a structured approach to stuff discovered along the way. But taking a structured approach to ”Linux” is just a bad idea, it’s far too broad a topic.
Edit: RedHat has their certification programs. These are certainly structured. You’ll get to know RedHat and the RedHat™ certified way of doing things. That’s probably the closest thing to what OP wants. You even get a paper at the end if you pay up. It is not the most efficient way to get proficient, though.


You are probably approaching this from the wrong angle. Linux, and computers in general, are tools. Figure out what you want to use them for, and then do it. One example would be to build a homelab with Jellyfin and Nextcloud.
On the path to that goal, you’ll find problems and tasks for which very nice structured resources exist. For example, you might want some security - a perfect opportunity to read a book on networking and firewalls.


Every time someone says something positive about BTRFS I’m compelled to verify whether RAID6 is usable.
The RAID 5 and RAID 6 modes of Btrfs are fatally flawed, and should not be used for “anything but testing with throw-away data.”
Alas, no. The Arch wiki still contains the same quote, and friends don’t let friends store data without parity.
So in the end, the best BTRFS can do right now is running RAID10 for a storage efficiency of 50%. Running dedup on that feels a bit wasteful…
(Sidenote: actually, ZFS runs dedup after per-block compression, so it can only dedup blocks that are identical. Still works, though, unlike when people do user-level .tar.gz-style compression. Then it’s game over.)
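For reference, the back-of-the-envelope numbers behind that 50% complaint, with an 8-drive pool as an example (the drive count is just for illustration):

```python
# Usable fraction of raw capacity for an example 8-drive pool.

def mirror_efficiency(copies: int = 2) -> float:
    # RAID10 / mirrored vdevs: every block is stored `copies` times.
    return 1 / copies

def parity_efficiency(drives: int, parity: int) -> float:
    # RAIDZ1/2/3 (or classic RAID5/6): `parity` drives' worth of space goes to parity.
    return (drives - parity) / drives

print(f"RAID10, 8 drives:         {mirror_efficiency():.0%}")       # 50%
print(f"RAIDZ2 / RAID6, 8 drives: {parity_efficiency(8, 2):.0%}")   # 75%
```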


Yup. Apparently it got much better last year, but don’t turn it on unless you know what you are doing.


No idea how Flatpak or Snap work here (I want my RPMs, dammit), but I bet someone started adding compression to something at some point.
You can’t deduplicate already compressed data, except in theory. If you want deduplication, do that first, then compress the data. (i.e. use ZFS. Friends don’t let friends use subpar filesystems.)
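A toy illustration of why the order matters, using zlib and 4 KiB blocks (the block counts are made up for the demo):

```python
import os
import zlib

BLOCK = 4096
# A "file" with lots of duplicate blocks: 16 distinct random blocks,
# repeated in sequence for 1024 blocks total.
unique_blocks = [os.urandom(BLOCK) for _ in range(16)]
data = b"".join(unique_blocks[i % 16] for i in range(1024))

def unique_fraction(buf: bytes) -> float:
    chunks = [buf[i:i + BLOCK] for i in range(0, len(buf), BLOCK)]
    return len(set(chunks)) / len(chunks)

# Dedup on the raw blocks: only 16 of 1024 are unique.
print(f"raw data, unique blocks:         {unique_fraction(data):.1%}")

# Compress the whole stream first (.tar.gz style), then look for duplicate
# blocks in the compressed output: essentially nothing dedups any more.
print(f"whole-stream compressed, unique: {unique_fraction(zlib.compress(data)):.1%}")

# ZFS-style: compress each block on its own, then dedup. Identical raw
# blocks compress to identical output, so dedup still works.
per_block = {zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)}
print(f"per-block compressed, unique blocks: {len(per_block)} of 1024")
```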


”The H200 has a very impressive bandwidth of 4.89 TB/s, but for the same price you can get 37 TB/s spread across 58 RX 9070s. Whether this actually works in practice, I don’t know.”
Your math checks out, but only for some workloads. Other workloads scale out like shit, and then you want all your bandwidth concentrated. At some point you’ll also want to consider power draw:
Now include power and cooling over a few years and do the same calculations.
As for apples and oranges: this is why you can’t just compare marketing numbers, you need to benchmark your workload yourself.
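To put the power part in numbers, a rough sketch with placeholder figures (the wattages and electricity price are assumptions, and the per-card bandwidth is just 37/58 from the quote above - plug in your own numbers, and cooling only widens the gap):

```python
# Aggregate bandwidth vs. electricity cost. Wattages and the electricity
# price are assumptions for illustration, not datasheet facts.
SETUPS = {
    "1x H200":     {"bw_tb_s": 4.89,      "watts": 700},       # assumed ~700 W
    "58x RX 9070": {"bw_tb_s": 58 * 0.64, "watts": 58 * 220},  # assumed ~220 W each
}

KWH_PRICE = 0.30  # $/kWh, assumption
YEARS = 3
HOURS = YEARS * 365 * 24

for name, s in SETUPS.items():
    electricity = s["watts"] / 1000 * HOURS * KWH_PRICE
    print(f"{name}: {s['bw_tb_s']:.1f} TB/s aggregate, "
          f"{s['watts'] / 1000:.1f} kW draw, ~${electricity:,.0f} electricity over {YEARS} years")
```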


For the record: analog multis can burn in hell. Nowadays, not running the whole show over Cat6 should be criminal.