• 1 Post
  • 49 Comments
Joined 2 years ago
Cake day: July 9th, 2023

  • I used Windows growing up, switched to Linux in high school on my personal machines, and was forced to use Mac for nearly 10 years at work. In my experience, they all have problems, and the worst part is always early on. After you’ve used them for a while and have gotten familiar/comfortable, the problems get easier to deal with, and switching back (or on to something new) becomes more daunting/uncomfortable than dealing with what you have. So in that sense, yes, it will get easier.

    Also, as hardware ages, you often see better support (though laptops can be tricky, as they are not standardized).

    Keep in mind, when you use Windows or Mac, you’re using a machine built for that OS and (presumably) supported by the manufacturer for that OS (especially with custom drivers). If you give Linux the same advantage (buy a machine with Linux pre-installed, or with Linux “officially supported”), you’re much more likely to have a similar, stable experience.

    Also, I’ve had better stability with stock Ubuntu than its derivatives (Pop!_OS and Mint). It might be worth trying an upstream distro, to see if you have better stability.


  • Having daily driven Windows (~6 years growing up), MacOS (8+ years for work), Linux (~18 years on personal and (some) work machines), and ChromeOS (~2 years, on a cheap Chromebook used while I was traveling places I didn’t want to take an expensive machine), if my options were Windows, MacOS, or ChromeOS, I would 100% take ChromeOS. Even on cheap hardware, it was a better user experience than the others… Though I will caveat that with: when I had to do work that required heavy lifting, I remoted into my Linux desktop. But that was a hardware limitation, rather than a software limitation.

    For people who know what they’re doing, I recommend traditional Linux. For those who don’t, I recommend ChromeOS. Mac and Windows are also run by mega corps, and they’re all spying on users… at least ChromeOS is performant and stable.


  • Raster images do not need to be rendered - see Rendering:

    Rendering is the process of generating a photorealistic or non-photorealistic image from input data such as 3D models…Today, to “render” commonly means to generate an image or video from a precise description (often created by an artist) using a computer program.

    Note that “render” is a fairly generic term, and it is sometimes used like “render to the screen,” to just mean to display something. Rasterisation may be a better term to use here, since it only applies to vector graphics, and is the part of the process I am referring to.

    In any case, except for possibly reading fewer bytes from disk, the vector case incurs all the same compute and memory costs as the raster image, plus the added overhead of computing the bitmap. On modern hardware, this doesn’t take terribly long, but it does mean we’re using more compute just to launch/load things.
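
    If you want to see that added overhead for yourself, here’s a rough timing sketch. It assumes the Pillow and cairosvg libraries, and icon.png / icon.svg are placeholder files for the same artwork, not anything from a real app:

    ```python
    import time
    from io import BytesIO

    import cairosvg          # rasterises SVG (vector) into PNG bytes
    from PIL import Image    # decodes raster formats

    def time_it(label, fn):
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.4f}s")

    # Raster path: decode the PNG straight into a bitmap in memory.
    time_it("decode png", lambda: Image.open("icon.png").load())

    # Vector path: rasterise the SVG first, then decode the resulting
    # PNG - both costs are paid at runtime.
    time_it("rasterise svg", lambda: Image.open(
        BytesIO(cairosvg.svg2png(url="icon.svg"))).load())
    ```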


  • It’s also worth noting apps have to ship higher resolution assets now, due to higher resolution displays. This can include video, audio, images, etc. Videos and images may be included at multiple resolutions, to account for displays of different sizes.

    For images, many might assume vectors are the answer, but vectors have to be rendered at runtime, which increases startup time even in the best case. They also aren’t supported on all platforms, meaning they have to be shipped alongside raster assets in a few different sizes, further increasing package bloat. And of course the code grows to add the logic to properly handle all the different asset types and sizes (see the sketch below).

    All this (packaging dependencies, plus assets/asset handling) to say it isn’t always malware, ads, electron, etc. Sometimes it’s just trying to make something that looks nice and runs well (enough) on any machine.
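
    To make that asset-handling logic concrete, here’s a small hypothetical sketch - the names, the shipped sizes, and the SVG fallback are illustrative, not any particular toolkit’s API:

    ```python
    # Hypothetical asset picker: choose the best bundled raster variant
    # for the current display, and only use the SVG where supported.
    SHIPPED_SIZES = [64, 128, 256, 512]  # raster variants bundled with the app

    def pick_icon(target_px: int, svg_supported: bool) -> str:
        if svg_supported:
            return "icon.svg"  # rasterised at runtime (costs CPU at launch)
        # Otherwise pick the smallest shipped raster that is still large
        # enough to avoid upscaling blur, falling back to the largest.
        for size in SHIPPED_SIZES:
            if size >= target_px:
                return f"icon_{size}.png"
        return f"icon_{SHIPPED_SIZES[-1]}.png"

    print(pick_icon(96, svg_supported=False))  # icon_128.png
    print(pick_icon(96, svg_supported=True))   # icon.svg
    ```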


  • My first thought was similar - there might be some hardware acceleration happening for the jpgs that isn’t for the other formats, resulting in a CPU bottleneck. A modern hard drive over USB 3.0 should be capable of hundreds of megabits to several gigabits per second. It seems unlikely that’s your bottleneck (though feel free to share stats and correct the assumption if this is incorrect - if your pngs are in the 40 megabyte range, your 3.5 per second works out to roughly 140 MB/s, over a gigabit per second, which would be pretty taxing).

    If you are seeing only 1 CPU core at 100%, perhaps you could split the video clip, and process multiple clips in parallel?
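
    Something along these lines might work, assuming ffmpeg is doing the extraction (the thread doesn’t say which tool) - the chunk length, filenames, and output pattern below are just placeholders:

    ```python
    # Rough sketch: extract frames from fixed-length chunks of the input
    # in parallel, one ffmpeg process per chunk, each writing to its own
    # folder so the frame numbering doesn't collide.
    import os
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    CHUNK_SECONDS = 60
    TOTAL_SECONDS = 600        # placeholder: length of the source clip
    SOURCE = "input.mp4"       # placeholder filename

    def extract(chunk_index: int) -> None:
        out_dir = f"chunk_{chunk_index:03d}"
        os.makedirs(out_dir, exist_ok=True)
        subprocess.run([
            "ffmpeg",
            "-ss", str(chunk_index * CHUNK_SECONDS),  # seek to chunk start
            "-t", str(CHUNK_SECONDS),                 # only this chunk
            "-i", SOURCE,
            os.path.join(out_dir, "frame_%05d.png"),
        ], check=True)

    chunks = range(TOTAL_SECONDS // CHUNK_SECONDS)
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Threads are fine here: the heavy lifting happens inside the
        # ffmpeg child processes, not in Python.
        list(pool.map(extract, chunks))
    ```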


  • it doesn’t unravel the underlying complexity of what it does… these alternative syntaxes tend to make some easy cases easy, but they have no idea what to do with more complicated cases

    This can be said of any higher-level language, or API. There is always a cost to abstraction. Binary -> Assembly -> C -> Python. As you go up that chain, many things get easier, but some things become impossible. You always have the option to drop down, though, and these regex tools are no different. Software development, sysops, devops, etc. are full of compromises like this.
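
    As a small illustration of that “drop down when you need to” escape hatch, here’s Python’s fnmatch (a simpler glob-style pattern syntax built on top of regular expressions), used purely as an example:

    ```python
    import fnmatch
    import re

    # Easy case: the simpler glob syntax is perfectly adequate.
    print(fnmatch.fnmatch("report_2023.csv", "report_*.csv"))  # True

    # It's an abstraction over regex - you can inspect what it compiles to.
    print(fnmatch.translate("report_*.csv"))

    # Harder case (capturing the year): the glob syntax has no answer,
    # so you drop down a level and write the regex yourself.
    match = re.fullmatch(r"report_(\d{4})\.csv", "report_2023.csv")
    print(match.group(1))  # 2023
    ```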


  • You are conflating the concept and the implementation. PFS is often presented as a feature of network protocols, and they are a frequently cited example, but networks are not part of the definition. From your second link, the definition is:

    Perfect forward secrecy (PFS for short) refers to the property of key-exchange protocols (Key Exchange) by which the exposure of long-term keying material, used in the protocol to authenticate and negotiate session keys, does not compromise the secrecy of session keys established before the exposure.

    And your third link:

    Forward secrecy (FS): a key management scheme ensures forward secrecy if an adversary that corrupts (by a node compromise) a set of keys at some generations j and prior to generation i, where 1 ≤ j < i, is not able to use these keys to compute a usable key at a generation k where k ≥ i.

    Neither of these mentions networks, only protocols/schemes, which are concepts. Cryptography exists outside networks, and outside computer science (even if that is where it finds the most use).

    Funnily enough, these two definitions (which I’ll remind you, come from the links you provided) are directly contradictory. The first describes protecting information “before the exposure” (i.e. past messages), while the second says a compromise at j cannot be used to compromise k, where k is strictly greater than j (i.e. a future message). So much for the hard and fast definition from “professional cryptographers.”
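
    To make that directional difference concrete, here’s a toy key chain - just SHA-256, not Signal’s or Matrix’s actual ratchets:

    ```python
    import hashlib

    def next_key(key: bytes) -> bytes:
        # One-way step: easy to go forward, infeasible to go backwards.
        return hashlib.sha256(key).digest()

    keys = [b"generation-0 secret"]
    for _ in range(4):
        keys.append(next_key(keys[-1]))

    # Stealing keys[2] lets an attacker recompute keys[3], keys[4], ...
    # (every later generation), but not keys[0] or keys[1], since the
    # hash can't be inverted. So this chain protects keys established
    # before the exposure (the first definition's direction) but not
    # keys at later generations (the second definition's direction) -
    # exactly the directional mismatch described above.
    assert next_key(keys[2]) == keys[3]
    ```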

    Now, what you’ve described with Matrix sounds like it is having a client send old messages to the server, which are then sent to another client. The fact the content is old is irrelevant - the content is sent in new messages, using new sessions, with new keys. This is different from what I described, about a new client downloading old messages (encrypted with the original key) from the server. In any case, both of these scenarios create an attack vector through which an adversary can get all of your old messages, which, whether you believe it violates PFS by your chosen definition or not, does defeat its purpose (perhaps you prefer this phrasing to “break” or “breach”).

    This seems to align with what you said in your first response, that Signal’s goal is to “limit privacy leaks,” which I agree with. I’m not sure why we’ve gotten so hung up on semantics.

    I wasn’t going to address this, but since you brought it up twice, running a forum is not much of a credential. Anyone can start a forum. There are forums for vaxxers and forums for antivaxxers, forums for atheists and forums for believers, forums for vegans and forums for carnivores. Not everyone running these forums is an expert, and necessarily, not all of them are “right.” This isn’t to say you don’t have any knowledge of the subject matter, only that running a forum isn’t proof you do.

    If you’d like to reply, you may have the last word.


  • This is not entirely correct. Messages are stored on their servers temporarily (last I saw, for up to 30 days), so that even if your device is offline for a while, you still get all your messages.

    In theory, you could have messages waiting in your queue for device A when you add device B, but device B will still not get those messages, even though the encrypted copies are still on their servers.

    This is because messages are encrypted per device, rather than per user. So if you have a friend who uses a phone and computer, and you also use a phone and computer, the client sending the message encrypts it three times, and sends each encrypted copy to the server. Each client then pulls its copy, and decrypts it. If a device does not exist when the message is encrypted and sent, it is never encrypted for that device, so that new device cannot pull the message down and decrypt it.

    For more details: https://signal.org/docs/specifications/sesame/
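
    A toy sketch of that per-device fan-out (Fernet from the cryptography package is just a stand-in for the real per-device sessions, and the device names are made up):

    ```python
    from cryptography.fernet import Fernet

    # Each device that existed at send time has its own session/key.
    devices = {
        "friend-phone": Fernet(Fernet.generate_key()),
        "friend-laptop": Fernet(Fernet.generate_key()),
        "my-laptop": Fernet(Fernet.generate_key()),
    }

    # The sender encrypts the same plaintext once per known device and
    # uploads each copy - this is the queue the server holds temporarily.
    server_queue = {name: f.encrypt(b"hello") for name, f in devices.items()}

    # Each existing device pulls and decrypts its own copy.
    print(devices["friend-phone"].decrypt(server_queue["friend-phone"]))  # b'hello'

    # A device added after the send has no copy it can decrypt, even
    # though ciphertexts are still sitting on the server.
    print("friend-tablet" in server_queue)  # False
    ```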