

My biggest shortcoming at the moment is that my NAS is also my gaming PC. It’s pretty inefficient to keep that machine on all the time, but I haven’t had the time to build a dedicated NAS.
ntopng has all of that. I’m currently hosting it on my home router.
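For reference, running it by hand is basically a one-liner (`br-lan` is just a guess at a typical router LAN bridge; point `-i` at whatever interface you want to monitor):

```sh
# -i: interface to monitor, -w: port for the web UI
ntopng -i br-lan -w 3000
```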
I definitely cannot get behind the “no recursion” rule. There are plenty of algorithms where the iterative equivalent is significantly harder and less natural. For example, post-order DFS.
I guess maybe when lives depend on it. But they should be testing and fuzzing their code anyway, right?
EDIT: I can’t even find where the NASA PDF mentions recursion.
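To make the post-order DFS example concrete, here’s a quick sketch of both versions (the `Node` class is just scaffolding for illustration):

```python
class Node:
    """Minimal binary tree node for the example."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def postorder_recursive(node, visit):
    """The natural version: three lines of logic."""
    if node is None:
        return
    postorder_recursive(node.left, visit)
    postorder_recursive(node.right, visit)
    visit(node.value)

def postorder_iterative(node, visit):
    """The iterative equivalent needs an explicit stack plus a
    'last visited' marker to know when both subtrees are done."""
    stack, last = [], None
    while stack or node is not None:
        if node is not None:
            stack.append(node)
            node = node.left
        else:
            peek = stack[-1]
            if peek.right is not None and last is not peek.right:
                node = peek.right
            else:
                visit(peek.value)
                last = stack.pop()

root = Node(1, Node(2), Node(3))
postorder_recursive(root, print)  # 2 3 1
postorder_iterative(root, print)  # 2 3 1
```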
If you go this route, I recommend installing Kodi + the Jellyfin plugin + the Kore Android app. You can control everything from your phone or laptop.
Might as well shorten the terms so we can get more gold coins out of it.
It’s almost 30 years old. Not to knock cURL, it’s a staple for sure.
HTTPie and xh claim to have a more intuitive UX. If the functionality is comparable, I choose tools written in memory-safe languages by default.
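For comparison, here’s the same JSON POST in all three (the host and token are placeholders):

```sh
# curl: method, header, and body all spelled out explicitly
curl -X POST https://api.example.com/users \
  -H 'Content-Type: application/json' \
  -d '{"name": "alice"}'

# HTTPie: key=value pairs become a JSON body, Header:value sets headers
http POST api.example.com/users name=alice 'Authorization:Bearer t0ken'

# xh deliberately mirrors HTTPie's syntax
xh POST api.example.com/users name=alice 'Authorization:Bearer t0ken'
```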
xh is a nice modern alternative.
There is actually a JS library called Planktos that can serve static websites over BitTorrent. I don’t know how good it is, but it sounds like a starting point.
Organizationally, you don’t want your API handler to care about implementation details like database queries. All DB interaction should be abstracted into a separate layer.
Generally API handlers only care about injecting any “global” dependencies (like a database object), extracting the request payload, and dispatching into some lower-level method.
None of this requires generic code. It’s just about having a clear separation of concerns, and this can lead to more reusable and testable code.
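A rough sketch of what I mean (sqlite3 just to keep it self-contained; all the names here are made up for illustration, not from any particular framework):

```python
import sqlite3

class UserRepo:
    """Data layer: all SQL lives here; nothing above it sees a query."""
    def __init__(self, db: sqlite3.Connection):
        self.db = db

    def add_user(self, name: str) -> int:
        cur = self.db.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.db.commit()
        return cur.lastrowid

def create_user(repo: UserRepo, name: str) -> dict:
    """Business logic: trivially unit-testable with a fake repo."""
    if not name:
        raise ValueError("name is required")
    return {"id": repo.add_user(name), "name": name}

def create_user_handler(payload: dict, repo: UserRepo) -> dict:
    """API layer: take the injected dependency, extract the payload,
    dispatch down, and shape the response."""
    return {"status": 201, "body": create_user(repo, payload["name"])}

# Wiring it together with an in-memory DB:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
print(create_user_handler({"name": "alice"}, UserRepo(db)))
```

The payoff is that you can swap the real repo for a stub in tests: the handler and business logic never touch SQL.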
Because they often won’t let you.
- Has a simple backup and migration workflow. I recently had to back up and migrate a MediaWiki database. It was pretty smooth, but not as simple as it could be. If your data model is spread across an RDBMS and files, you need to provide a CLI tool that does the export/import.
- Easy to run as a systemd service. This is the main criterion for whether it will be easy to create a NixOS module (see the sketch after this list).
- Has health endpoints for monitoring.
- Has an admin web UI that surfaces important configuration info.
- If there are external service dependencies like Postgres or Redis, then there needs to be a wealth of documentation on how those integrations work. Provide infrastructure-as-code examples! IME systemd and NixOS modules are very capable of deploying these kinds of distributed systems.
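For the systemd point above, here’s roughly what a minimal NixOS module boils down to ("myapp", its flags, and the postgres dependency are all stand-ins, not a real package):

```nix
{ config, pkgs, ... }:
{
  systemd.services.myapp = {
    description = "Hypothetical self-hosted service";
    after = [ "network-online.target" "postgresql.service" ];
    wants = [ "network-online.target" ];
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      # pkgs.myapp is a placeholder for the packaged application
      ExecStart = "${pkgs.myapp}/bin/myapp --config /etc/myapp/config.toml";
      DynamicUser = true;        # no dedicated user account to manage
      StateDirectory = "myapp";  # systemd creates /var/lib/myapp for you
      Restart = "on-failure";
    };
  };
}
```

If upstream ships a clean CLI and a single config file, this is about all a module needs.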
SilverBullet is nice.
Good point. There are some where it’s just a few miscellaneous files missing.
I have so many torrents that have been stalled at >95% for months.
Cool, so this article calls out various types of coupling and offers no strategies for managing it.
Waste of time.
I like AdGuard Home myself.
WireGuard is p2p.
EDIT: I guess the point is it’s doing peer discovery without static public IPs or DNS. Pretty cool!
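For context, a hand-written wg-quick config looks something like this (every key, address, and hostname is a placeholder), and the Endpoint line is exactly the part that normally requires a static IP or DNS name:

```ini
[Interface]
PrivateKey = <this peer's private key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <other peer's public key>
AllowedIPs = 10.0.0.2/32
# The hard part without static addressing:
Endpoint = peer.example.com:51820
```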
I’m perfectly happy to build my own NAS with NixOS and ZFS on it. I think it’s mostly a matter of getting the right hardware.
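FWIW the NixOS side of ZFS is only a few lines (the hostId value here is a placeholder; ZFS just requires any unique 8 hex digits):

```nix
{
  boot.supportedFilesystems = [ "zfs" ];
  networking.hostId = "8425e349";        # required by ZFS on NixOS
  services.zfs.autoScrub.enable = true;  # periodic scrub of all pools
  services.zfs.trim.enable = true;       # periodic TRIM for SSDs
}
```

Everything past that (pools, datasets) is standard ZFS administration.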