Keyoxide: aspe:keyoxide.org:KI5WYVI3WGWSIGMOKOOOGF4JAE (think PGP key but modern and easier to use)

  • 0 Posts
  • 69 Comments
Joined 2 years ago
Cake day: June 18th, 2023



  • Can’t you always attempt uploads until they bypass arbitrary filters and then report-snipe on that?
    How would a content-based filter prevent this if the malicious actor simply needs to upload correspondingly more images?

    I think the sad reality is that the only escape here is scale. Once you have been hit by this attack and cleared by the third parties, you’d have precedent for when it happens again, and should hopefully be placed in a special bin for better treatment.
    Scale means you will be fire-tested, and are more likely to receive sane treatment instead of the AI-support special.


  • Was about to say this.

    I saw a small-time project using hashed phone numbers and emails a while ago, where “assume stupidity rather than malice” was a viable explanation.

    In this case however, Plex is large enough, and has to care about security enough, that they either
    did this on purpose to make it sound better, as a marketing move,
    did not show this to their security experts,
    or chose to ignore concerns from those experts and likely others (which basically turns it into the first option).

    There is no option where someone did not either knowingly do or provoke this.
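
    The reason hashing buys nothing here: phone numbers have so little entropy that the whole keyspace can simply be enumerated. A minimal sketch of that attack, assuming unsalted SHA-256 over 10-digit numbers (the “leaked” hash below is made up for illustration):

    ```python
    # Brute-forcing a hashed phone number; the keyspace is only 10^10.
    # Assumes unsalted SHA-256. The leaked hash is a made-up example.
    import hashlib

    leaked = hashlib.sha256(b"5551234567").hexdigest()  # stand-in for a leaked hash

    # Pure Python manages on the order of a million hashes per second,
    # so this loop finishes within a day at worst; a GPU does it in minutes.
    for n in range(10_000_000_000):
        candidate = f"{n:010d}".encode()
        if hashlib.sha256(candidate).hexdigest() == leaked:
            print("recovered:", candidate.decode())
            break
    ```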


  • It isn’t usually. If it were, the server-side function wouldn’t need a constant runtime for different-length inputs, since the inputs would all have the same length.

    The problem with client-side hashing is that it is very slow (client-side code means JavaScript for the foreseeable future, unless compatibility is sacrificed), unpredictable (many different browsers with differing feature sets and bugs), and timing-based attacks could also be performed in the client, by, say, a compromised browser add-on.

    For transit, a lot of the packaging steps round off transfer sizes anyhow; you typically generate constant physical activity for anything up to around 1 kB. The Ethernet MTU sits at ~1500 bytes, for example, so a 200-byte packet with a 64-character password and a 1400-byte packet with a 1024-character password containing some emoji will time exactly identically on your local network.
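
    To see the constant-runtime property on the server side, here is a rough timing sketch using the Python bcrypt package (pip install bcrypt); the numbers vary by machine, and all lengths stay within bcrypt’s 72-byte input limit:

    ```python
    # Rough demonstration that a password hash like bcrypt runs in
    # (near-)constant time regardless of input length: the cost comes
    # from the work factor, not from how long the password is.
    import time
    import bcrypt

    salt = bcrypt.gensalt(rounds=12)
    for length in (8, 24, 72):
        pw = b"a" * length
        start = time.perf_counter()
        bcrypt.hashpw(pw, salt)
        print(f"{length:3d} bytes: {time.perf_counter() - start:.3f}s")
    ```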




  • You can easily get the hash of whole files; most hashing functions have no input size constraint.
    Special password-hashing implementations do have a limit to guarantee constant runtime, since there the algorithm always takes as long as the worst-case (longest) input. The standard modern password-hashing function (bcrypt) only considers the first 72 bytes for that reason, though that cutoff is arbitrary and could easily be increased, and in some implementations is. Passwords that differ only past the 72nd byte receive the same hash, so you could arbitrarily change those trailing characters on every login until the page migrates to a password-hashing function with a longer limit, at which point the password used at the next login after the change is locked in.
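
    A minimal sketch of that truncation, using the common Python bcrypt package (pip install bcrypt) and assuming an implementation that silently truncates, as classic bcrypt does:

    ```python
    # Two passwords that differ only after byte 72 verify against the
    # same bcrypt hash, because everything past byte 72 is ignored.
    import bcrypt

    prefix = b"x" * 72               # exactly 72 bytes
    pw_a = prefix + b"tail-one"      # differs only past the cutoff
    pw_b = prefix + b"tail-two"

    hashed = bcrypt.hashpw(pw_a, bcrypt.gensalt())

    print(bcrypt.checkpw(pw_a, hashed))  # True
    print(bcrypt.checkpw(pw_b, hashed))  # True, same hash matches both
    ```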





  • I have that exact setup working. qbittorrent (and -nox) is a lot more involved to set up with I2P, but there is some material on how to do it, and once you get it running it works quite well at this point.

    I don’t use docker for it, but that should work too. For browsing I use a maintained fork of Proxy SwitchyOmega, which lets you choose a proxy profile based on the URL, making it easy to pipe I2P pages into the i2pd SOCKS port (I use i2pd, not the Java I2P router; I don’t think it matters much). qbittorrent can be configured in the same way to statically use the local SOCKS port (4447 on i2pd) as a proxy, preventing any clearnet communication. In addition it needs the dedicated I2P host 127.0.0.1 and port 7656 (the SAM bridge, which gives deeper access to I2P).

    Don’t expect to do anything on the clearnet over I2P; the exits are not good and it’s not what I2P is meant for. For that reason, don’t set I2P up as something like a system proxy/VPN. Instead, pipe the specific programs you want to use I2P into the proxy ports via their proxy settings, as sketched below.
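
    For scripts, the same per-application piping is just a proxy setting. A minimal sketch, assuming i2pd’s default SOCKS port 4447 and the requests SOCKS extra (pip install requests[socks]); identiguy.i2p is only an example eepsite:

    ```python
    # Pipe one specific program into i2pd's SOCKS proxy instead of
    # proxying the whole system. socks5h makes .i2p names resolve
    # inside the proxy rather than leaking DNS to the clearnet.
    import requests

    proxies = {
        "http": "socks5h://127.0.0.1:4447",
        "https": "socks5h://127.0.0.1:4447",
    }

    r = requests.get("http://identiguy.i2p", proxies=proxies, timeout=120)
    print(r.status_code)
    ```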

    To get rid of the “firewalled” status in the I2P daemon, you will need to forward ports. Maybe you have seen advice for servers that are not behind a firewall and NAT, which effectively have all ports “forwarded” already: the mythical dedicated IPv4 address.
    In your case, you need to pick a random port for your I2P daemon’s host-to-host communication, then forward both TCP and UDP for it on IPv4. Also make sure you can forward ports at all: depending on the region, ISPs no longer hand out a dedicated IPv4 address even per router, so you might have to specifically ask your ISP for one (I had to). But that is all generic hosting; if you can set up a Minecraft server, you can give I2P full connectivity.
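
    To sanity-check the forwarding independently of I2P, a throwaway listener on the chosen port works; probe it from outside with any online port checker. The port number below is a made-up example, and this only tests TCP (UDP needs a separate check):

    ```python
    # Throwaway TCP listener to verify a port forward, independent of I2P.
    # 23456 is a made-up example; use the port your I2P daemon actually picked.
    import socket

    PORT = 23456

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("0.0.0.0", PORT))
        s.listen(1)
        print(f"listening on {PORT}, probe it from outside now...")
        conn, addr = s.accept()
        print("reachable, connection from", addr)
        conn.close()
    ```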






  • Probably only successful ones.
    Google captchas have had multiple rounds (with it faking you out by claiming you failed) for probably a decade. Every round of the game updates some confidence score, and once that score is high enough, you pass.
    Conversely, this means there is no way to fail: you just get stuck in an infinite loop of challenges if your score doesn’t get high enough.

    The only alternative way of pricing it would see even valid users consume way more than one “verification” per actually completed captcha, since so many users have scores low enough to need multiple rounds even when answering with perfect accuracy.
    I doubt they do this, but if they do, it’s a scandal waiting to happen, besides being very weird for any kind of statistic Google certainly offers for their captcha.
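
    As a toy model of that mechanic (the threshold and scoring here are invented; it only illustrates the loop shape described above):

    ```python
    # Toy model of the multi-round confidence loop: there is no "fail"
    # branch, only more rounds until the score clears the threshold.
    import random

    PASS_THRESHOLD = 0.9  # invented value for illustration

    def serve_captcha(score: float = 0.5) -> int:
        rounds = 0
        while score < PASS_THRESHOLD:
            rounds += 1
            # Stand-in for grading one challenge round; a real system
            # would fold in accuracy, timing, and behavioral signals.
            score = min(1.0, score + random.uniform(-0.05, 0.2))
        return rounds

    print("passed after", serve_captcha(), "round(s)")
    ```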