

Yeah, it’s not a great idea to leave it out for hours and hours; I usually portion and freeze a half hour or so after cooking – it’s usually cooled off enough that I can handle it by then.
Interests: programming, video games, anime, music composition
I used to be on kbin as e0qdk@kbin.social before it broke down.




Cooked plain rice freezes well too. I cook a big batch and use a small bowl to split it into individual portions. I wrap those in a little plastic wrap, and freeze it. ~2 mins in the microwave (reusing the wrap as a cover for the bowl) and I’ve got almost-as-good-as-fresh rice.
Here’s one of mine. I got annoyed at the complexity of other command line spellcheckers I tried and replaced them with this simple python script for when I just want to check if a single word is correct:
#!/usr/bin/env python3
import sys

try:
    query = sys.argv[1].lower()
except Exception:
    print("Usage: spellcheck <word>")
    exit(1)

with open("/usr/share/dict/words") as f:
    words = f.readlines()
words = [x.strip().lower() for x in words if len(x.strip()) > 0]

if query not in words:
    print("Not in dictionary -- probably a typo")
    exit(1)
else:
    print("OK")
    exit(0)


No; I don’t use AI at all for programming currently.


This takes a snapshot of the HTML elements from when they were loaded in your browser. If the page loads content dynamically, HTTrack won’t save it but this can. (i.e. this works better on crappy modern sites that need JS to even just load the article text…)


It stores the actual HTML structure and assets, so you can still view the page as it was more-or-less intended instead of it getting split up across print pages.


I’m not sure, but this is what my map looks like currently.


Huh. Maybe I’m just too early in the game still (despite having put 10+ hours into it) but crafting materials are like, the one thing I’m not hurting for. It’s got the Zelda rupee problem for me, at least at this point in the game – I’m constantly pegged at the max capacity, and it feels like I have basically nothing to use it on. (I mean, I do use the tools I have found so far, situationally, but I don’t think I’ve been down by more than ~100 or so from max other than for that one wish in the starting area.)
Edit: Ok, I’m a bit further in now, and I see what people mean… -.-


There’s something else going on there besides base64 encoding of the URL – possibly they have some binary tracking data or other crap that only makes sense to the creator of the link.
It’s not hard to write a small Python script that gets what you want out of a URL like that though. Here’s one that works with your sample link:
#!/usr/bin/env python3
import base64
import binascii
import itertools
import string
import sys

input_url = sys.argv[1]
parts = input_url.split("/")
for chunk in itertools.accumulate(reversed(parts), lambda b, a: "/".join([a, b])):
    try:
        text = base64.b64decode(chunk).decode("ascii", errors="ignore")
        clean = "".join(itertools.takewhile(lambda x: x in string.printable, text))
        print(clean)
    except binascii.Error:
        continue
Save that to a file like decode.py and then you can run it on the command line like python3 ./decode.py 'YOUR-LINK-HERE'
e.g.
$ python3 ./decode.py 'https://link.sfchronicle.com/external/41488169.38548/aHR0cHM6Ly93d3cuaG90ZG9nYmlsbHMuY29tL2hhbWJ1cmdlci1tb2xkcy9idXJnZXItZG9nLW1vbGQ_c2lkPTY4MTNkMTljYzM0ZWJjZTE4NDA1ZGVjYSZzcz1QJnN0X3JpZD1udWxsJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV90ZXJtPWJyaWVmaW5nJnV0bV9jYW1wYWlnbj1zZmNfYml0ZWN1cmlvdXM/6813d19cc34ebce18405decaB7ef84e41'
https://www.hotdogbills.com/hamburger-molds/burger-dog-mold
This script works by splitting the URL at ‘/’ characters, recombining the parts (right-to-left), and checking whether each chunk of text can be base64 decoded successfully. If it can, it takes the printable ASCII characters at the start of the decoded string and outputs them (to clean up the garbage characters at the end). If there’s more than one possible valid interpretation as base64, it will print them all as it finds them.


It doesn’t actually include all the media, and – I think – edit history. It does give you a decent offline copy of the articles with at least the thumbnails of images though.
Edit: If you want all the media from Wikimedia Commons (which may also include files that are not in Wikipedia articles directly) the stats for that are:
Total file size for all 126,598,734 files: 745,450,666,761,889 bytes (677.98 TB).
according to their media statistics page.
Nginx is running in Docker
Are you launching the container with the correct ports exposed? You generally cannot make connections into a container from the outside unless you explicitly tell Docker that you want it to allow that to happen… i.e. assuming you want a simple one-to-one mapping for the standard HTTP and HTTPS ports, are you passing something like -p 80:80 -p 443:443 to docker run on the command line, adding the appropriate ports in your compose file, or doing something similar with another tool for bringing the container up?
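For the compose-file case, the port mapping might look something like this (the service and image names here are just placeholders for whatever you're actually running):

```yaml
# Hypothetical compose file -- service/image names are examples only
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"    # host:container, HTTP
      - "443:443"  # host:container, HTTPS
```

The left side of each mapping is the host port and the right side is the container port, same as with -p on the command line.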
I’ve put drives into standby mode with the gnome disks GUI tool on my regular desktop when they were being noisy and I wanted some peace for a while. If the drive was mounted before I put it to sleep, trying to access something on the disk will cause it to spin back up.


I got the KVM used in good condition. It’s an older model – but I went with it anyway since it was a drop-in replacement for my 2-port setup and getting it used was much cheaper than their newer models. The newer ones support higher resolution/frame rates though, I think; I know this one won’t do frame rates above 60FPS properly even though my monitor is capable of it when plugged in directly.
There’s a few quirks with this setup. Probably most annoying is that moving the mouse typically causes computers to wake from sleep (like pressing a key on a keyboard normally does…); I think there’s a way to mask that event off with udev rules but, eh, even a decade or so after getting the original 2-port KVM I haven’t cared enough to actually bother working it out, so I guess it’s not that big of an issue to me… :p
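If anyone does want to chase that down: the general idea is a udev rule that clears the wakeup flag on the mouse's USB device. Something like the following is roughly the shape it would take -- the vendor/product IDs are placeholders for your actual device, and I haven't verified this on my own setup:

```
# Hypothetical rule, e.g. /etc/udev/rules.d/90-no-mouse-wakeup.rules
# Replace xxxx/yyyy with the IDs from lsusb for your mouse.
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", ATTR{power/wakeup}="disabled"
```

In udev rules, == is a match and = is an assignment; writing "disabled" to the device's power/wakeup attribute is what tells the kernel not to treat it as a wake source.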


I upgraded my KVM switch recently to one with more ports (from 2 to 4). I used to have to get up and physically rewire the audio and switch the video input selection on my monitor when I wanted to use it instead of my main Linux desktop or ancient Win7 PC before. Now that I can just type a couple keystrokes to switch back and forth I’m actually using my Deck way more often…


Are you running different versions of the software? (e.g. different versions of ffmpeg, maybe?)


I don’t like Anubis because it requires me to enable JS – making me less secure. reddthat started using go-away recently as an alternative that doesn’t require JS when we were getting hammered by scrapers.


netstat -tp – that’ll show you TCP connections and the associated program, doing a DNS lookup for the IPs they’re connected to. You may need elevated permissions to see what some processes are.
There are a bunch of other options (e.g. -n to get numeric output instead of looking up names, -l to get programs listening for incoming connections, etc); check the man pages for more details.


I just right click on the terminal to change the profile to whatever I feel like it should be in the moment (usually red). I do it by reflex, basically. I never felt the need to try to set up automation for different servers, but I expect there’s probably a way to do that if you really wanted to.


“You love the robot more than me!” 💔️
Just run a web server and expose the specific files you want to share through that?
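As a sketch of what I mean, the standard library alone can do it -- the "shared" directory, the test file, and the port choice below are all made-up example names, not anything specific:

```python
# Minimal stdlib file server demo: only files placed under "shared"
# are exposed; everything outside that directory stays private.
import os
import threading
import urllib.request
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

os.makedirs("shared", exist_ok=True)
with open("shared/hello.txt", "w") as f:
    f.write("hello from the file server\n")

# Restrict the handler to the "shared" directory.
handler = partial(SimpleHTTPRequestHandler, directory="shared")
server = HTTPServer(("127.0.0.1", 0), handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the file back over HTTP to show it works, then shut down.
port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/hello.txt").read()
print(body.decode())
server.shutdown()
```

For real use you'd just drop (or symlink) the files you want to share into the directory and run something like python3 -m http.server 8000 --directory shared from a shell instead.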