

I don’t have a phone that can scan QR codes.
QR codes are a plain text encoding scheme. If you can screenshot it, you have access to FOSS software that can decode it, and you can paste that URL into your browser.
From a business perspective it makes sense to throw all the rendering to the devices to save cost.
Not just to save cost. It’s basically OS-agnostic from the user’s point of view. The web app works fine in desktop Linux, MacOS, or Windows. In other words, when I’m on Linux I can have a solid user experience on apps that were designed by people who have never thought about Linux in their life.
Meanwhile, porting native programs between OSes often means someone’s gotta maintain the libraries that call the right desktop/windowing APIs and behavior between each version of Windows, MacOS, and the windowing systems of Linux, not all of which always work in expected or consistent ways.
Honestly, this is an easy way to share files with non-technical people in the outside world, too. Just open up a port for that very specific purpose, send the link to your friend, watch the one file get downloaded, and then close the port and turn off the http server.
It’s technically not very secure, so it’s a bad idea to leave that unattended, but you can always encrypt a zip file before sending it and let that file-level encryption kinda make up for the lack of network-level encryption. And since it’s a one-off thing, you should close up your firewall/port forwarding when you’re done.
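As a rough sketch of that workflow (assuming Python 3.7+; the port and directory here are made up), the whole one-download-then-stop server fits in the standard library:

```python
import functools
import http.server
import threading

class OneShotHandler(http.server.SimpleHTTPRequestHandler):
    """Serve files normally, but stop the server after the first GET."""
    def do_GET(self):
        super().do_GET()
        # shutdown() blocks until the serve loop exits, so call it from
        # a separate thread rather than from this request handler.
        threading.Thread(target=self.server.shutdown, daemon=True).start()

def serve_once(directory, port=8000):
    """Serve `directory` on `port` until one request completes, then exit."""
    handler = functools.partial(OneShotHandler, directory=directory)
    with http.server.ThreadingHTTPServer(("", port), handler) as httpd:
        httpd.serve_forever()  # returns once the handler triggers shutdown()
```

Your friend hits `http://your-ip:8000/file.zip`, the transfer finishes, and the server exits on its own; you still want to close the port forward afterward.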
Yeah, if OP has command line access through rsync then the server is already configured to allow remote access over NFS or SMB or SSH or FTP or whatever. Setting up a mounted folder through whatever file browser (including the default Windows Explorer in Windows or Finder in MacOS) over the same protocol should be trivial, and not require any additional server side configuration.
Yeah, I mean I do still use rsync for the stuff that would take a long time, but for one-off file movement I just use a mounted network drive in the normal file browser, including on Windows and MacOS machines.
To be honest, no. I mainly know about JPEG XL because I’m acutely aware of the limitations of standard JPEG for both photography and high-resolution scanned documents, where noise and real-world messiness cause all sorts of problems. Something like QOI seems ideal for synthetic images, but I don’t work with those much, so I don’t know the limitations of PNG as well.
You say that it is sorted in order of most significant, so for a date, is it more significant whether it happened in 1024, 2024, or 9024?
Most significant to least significant digit has a strict mathematical definition, which you don’t seem to be following, and it applies to all numbers, not just numerical representations of dates.
And most importantly, the YYYY-MM-DD format is extensible into hh:mm:ss too, within the same schema, out to the level of precision appropriate for the context. I can identify a specific year when the month doesn’t matter, a specific month when the day doesn’t matter, a specific day when the hour doesn’t matter, and on down to minutes, seconds, and decimal fractions of a second to whatever precision I’d like.
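A quick illustration of both properties (example dates made up), using Python’s datetime:

```python
from datetime import datetime

# Big-endian order means plain string sorting is also chronological sorting.
stamps = ["2024-09-01", "2023-12-31", "2024-08-15"]
assert sorted(stamps) == ["2023-12-31", "2024-08-15", "2024-09-01"]

# The same schema extends rightward to whatever precision the context needs.
t = datetime(2024, 9, 1, 14, 30, 5, 250000)
print(t.isoformat(timespec="hours"))         # 2024-09-01T14
print(t.isoformat(timespec="minutes"))       # 2024-09-01T14:30
print(t.isoformat(timespec="milliseconds"))  # 2024-09-01T14:30:05.250
```

Every truncation point is still a valid, unambiguous timestamp, which is exactly what DD/MM or MM/DD formats can’t give you.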
This isn’t exactly what you asked, but our URI/URL schema is basically a bunch of missed opportunities, and I wish it was better designed.
Ok so it starts off with the scheme name, which makes sense. http: or ftp: or even tel:
But then it goes into the domain name system, which suffers from the problem that the root, then the top-level domain, then the domain, then progressively smaller subdomains go right to left. www.example.com requires the system to look up the root, then see who manages the .com TLD, then who owns example.com, then finally the www subdomain. Then, if a port number needs to be specified, it goes after the domain name, right next to the implied root. The rest of the URL, by default, goes left to right in decreasing order of significance. It’s just a weird mismatch, and it would make a ton more sense if it were all left to right, including the domain name.
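To make the mismatch concrete, here’s how the pieces of a (made-up) URL decompose, using Python’s urllib.parse:

```python
from urllib.parse import urlsplit

u = urlsplit("http://www.example.com:8080/docs/guide/intro.html?lang=en#top")
print(u.scheme)    # http -- sensible prefix
print(u.hostname)  # www.example.com -- resolved right to left: com -> example -> www
print(u.port)      # 8080 -- tacked on after the hostname, next to the implied root
print(u.path)      # /docs/guide/intro.html -- back to left to right, broad to narrow
```

The hostname is the only component you have to read backwards; everything on either side of it narrows left to right.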
Then don’t get me started on how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance it would’ve made sense not to push www in front of all website domains throughout the ’90s and early 2000s.
Your day-to-day use isn’t everyone else’s. We use dates for a lot more than “I wonder what day it is today.” When it comes to recording events, or planning future events, pretty much everyone needs to include the year. And YYYY-MM-DD presents the digits exactly in order of significance, so an error in a single digit matters in proportion to its position.
And no matter what, the first digit of a two-digit day or two-digit month is still more significant in a mathematical sense, even if you think you’re more likely to need the day or the month. The 15th of May is only one digit off from the 5th of May, but that first digit in a DD/MM format is more significant mathematically and less likely to change from day to day.
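One small way to see the difference (dates made up): text written most-significant-digit-first sorts correctly as plain strings, while day-first text does not:

```python
# Three dates in chronological order: 5 May, 15 May, 1 June.
ddmm = ["05/05", "15/05", "01/06"]
iso = ["05-05", "05-15", "06-01"]  # the same dates as MM-DD

# Naive string sorting scrambles the day-first form...
assert sorted(ddmm) == ["01/06", "05/05", "15/05"]
# ...but leaves the most-significant-first form in true chronological order.
assert sorted(iso) == iso
```

That’s the practical payoff of putting the mathematically significant digits first, independent of which field a human checks most often.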
Functionally speaking, I don’t see this as a significant issue.
JPEG quality settings can run a pretty wide gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.
Of course, the metadata might only report the most recent conversion, but that’s a problem with all image formats: conversions between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.
You’re right that we should ensure that the metadata does accurately describe whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans where every pixel matters, and needs to be trusted as coming from the sensor rather than an artifact of the encoding process, to eliminate some types of error. That’s why I’m hopeful that a full JXL based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.
It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.
Adobe is backing the format, Apple support is coming along, and there are rumors that Apple is switching from HEIC to JPEG XL as a capture format as early as the iPhone 16 coming out in a few weeks. As soon as we have a full-blown workflow that can take images from camera to post-processing to publishing in JXL, we might see a pretty strong push for adoption at the user side (browsers, websites, chat programs, social media apps and sites, etc.).
Nah, that’s just anticipating customer rage. When I worked in restaurants I learned very early on that it’s better to put things in a smaller container, and put the overflow into a separate container, rather than try to give them a little extra in the next size container that doesn’t get filled up.
It’s the meme with the kid failing to understand that the amount doesn’t change just because the container changes. Only with angry adults who want their money back.
For my personal devices:
I’ve worked with work systems that used RedHat and Ubuntu back in the late 2000s, plus decades of work computers running Windows. But I’m no longer in a technical career field, so I haven’t kept on top of the latest and greatest.
When they say modules, does that mean mainboards?
They mean each part. Here’s their store for individual parts.
This announcement includes a new display, so anyone with the old display can swap out their old one for the new one. People can swap out batteries. Keyboards. Touchpads. It’s a modular design so that each module can be swapped out if broken, or if there’s been an upgrade the user wants.
Those small USB drives are too slow anyway, often limited to USB 2.0 interfaces or slow flash modules. I’ve switched over to an SSD specifically because of how slow booting and installation are from a standard 10-year-old USB stick.
every distro I’ve tried has a strong sense that if you’re using the GUI you don’t need or deserve admin controls
It’s more that GUI programs can’t be trusted with root privileges. They’re not designed for that, and can break things in unpredictable ways.
Apple, too. The 2012 MacBook Pro had a high DPI display, and everything scales normally even when dragging windows over to non-HiDPI external monitors.
That’s not even getting into the mobile OSes, which have to deal with nonstandard display sizes and resolutions all the time, across multiple settings for accessibility.
By default, Teslas are set in “one pedal driving” mode, which makes it so that the wheels won’t turn without the throttle/accelerator being pressed. That’s a different interface and behavior from the traditional automatic transmission, where simply lifting the foot off the brake pedal allows the vehicle to roll either forward or backward, depending on whether it’s in D or R.
The selection of the “transmission” setting of P R D in a Tesla also doesn’t have tactile feedback that subtly communicates which direction it’s set to.
The combination of the two means that the car is different in these ways and can contribute to mistaken gear selection plus application of the throttle, compared to a typical car.
Who’s in the middle of this Venn Diagram between “uses some kind of custom OS on their phone to where their camera app doesn’t automatically read QR codes” and “doesn’t know how to install or use software that can read QR codes”?