  • If you use GnuPG or one of the GUI implementations it does.

    No, because it’s the server that terminates the TLS connection, not the recipient’s client. TLS is purely a security control to protect the transport between you and the server you are talking to. It doesn’t have anything to do with e2ee. It’s still important, of course, but not for e2ee.

    You do realize e2ee merely means that two users exchange public keys when they communicate, so that each can encrypt messages only the other can decrypt, right?

    And how does TLS between you and your mail server help with this? Does it give you any guarantee that the public key was not tampered with on its way to your server? Or do you instead verify it with the fingerprint, generally transmitted through another medium?
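
    For instance, with GnuPG (the address below is a placeholder), you could compare the key’s fingerprint against one obtained out-of-band:

    # Print the key's fingerprint; verify it against one received
    # through a separate, trusted channel (in person, over the phone, etc.).
    gpg --fingerprint alice@example.com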

    Nothing to stop you from hosting your own on an encrypted drive.

    An encrypted drive protects you against physical attacks only while the server is off. While the server is powered on (which is when it gets breached, physical attacks aside), the data is still in the clear.

    EteSync does E2E already

    And…it requires a specialized client anyway. In fact, they built a DAV bridge (https://github.com/etesync/etesync-dav). Now tell me: if you use this on, say, your phone, can you use other DAV tools without that bridge? No, because it does something very similar to what Proton does. If Proton Bridge gets calendar/contacts functionality too (if, because I have no idea how popular a feature request it is), you are in exactly the same situation.


  • It doesn’t matter that your private key is stored on their servers encrypted/hashed or whatever. If you were simply storing it there, that would not be an issue. The problem is that you’re also logging in and relying on whatever JS is sent to you to only run client-side.

    I feel like I covered this point? They make the client tool you are using; there is zero need for them to steal your password to decrypt your key. Of course you are trusting them: you are seeing your unencrypted email in their webpage, where they can run arbitrary code. They do have their clients open-sourced, but this doesn’t mean much. You are always exposed to supply-chain risk in your client software.

    Most users aren’t sending emails from their Proton to other Proton users either.

    So…? The point is, if they do, encryption happens without them having to do anything, hence transparently. That was the point of my argument: my mom can make a Proton account, send me an email, and benefit from PGP without even knowing what PGP is.

    Furthermore, the users that want encryption seek it out.

    And that’s the whole point of the conversation: those users are techies and a super tiny minority. This way, they made a product that allows mainstream users to have encryption.

    Thunderbird or other mail clients that are open source, whose apps are signed, or which you can reproducibly build from source.

    And this control is worth zilch if they get compromised. It is a control against a MiTM who intercepts your download; it’s not a control if “the maker of Thunderbird” decides to screw you over, the same way Proton could by serving malicious JS code. If the threat actor you are considering is a malicious software supplier, you have exactly the same issue: there can be pressure from government agencies, or the vendor might decide to go bananas or might get compromised.

    However, once that is built it doesn’t change. With Proton, every time you visit their site you don’t know for sure that it hasn’t changed unless you’re monitoring the traffic.

    Yes, this is true, and it’s the only real difference. I consider it a corner case, something that affects only the time needed to compromise your emails, not the feasibility, but it’s true. On the other hand, I am counting on a company that has a business interest in not letting that happen and a security team to support that work.

    A government is much more likely to convince Proton to send a single user a custom JS payload, than to modify the source code of Thunderbird in a way that would create an exploit that bypasses firewalls, system sandboxing, etc.

    Maybe…? If government actors are in your threat model, you shouldn’t use email in the first place. Metadata are unencrypted and cannot be encrypted, and there are better tools. That said, government agencies have the resources to target the supply chain of individuals and simply “encourage” software distributors to ship patched versions of the software. This is also a much better strategy, because they can likely get access to the whole endpoint and maintain persistence easily (while with JS you are in the browser sandbox and potentially a system sandbox), potentially allowing them to compromise other tools as well (say, Signal). So yeah, the likelihood might be higher with JS-based software, but the impact is smaller. Everyone has their own risk appetite and can decide what they are comfortable with, but again, if you are considering the NSA (or equivalent) as your adversary, don’t use email.

    You mean their PWA/WebView clients that can still send custom JS at any time, or their bridge?

    Yes.

    First, explain what you mean by a fat client? GnuPG is not a fat client.

    In computer networking, a rich client (also called heavy, fat or thick client) is a computer (a “client” in client–server network architecture) that typically provides rich functionality independent of the central server.

    What I mean is this: a client that implements a good deal of functionality beyond what the server requires to work. In this case, the client handles key management, encryption, decryption, signature verification, etc., all functionality the server doesn’t even know exists. This is normal, because the encryption is done on top of regular email protocols, so a lot of logic is required on the client side.

    Being able to export things is a lot different than being able to use Thunderbird for Calendars, or a different Contacts app on your phone.

    For sure it’s different; I didn’t say it’s the same thing. I am saying that you can migrate away easily if your needs change and you’d rather have interoperability.

    DAV is as secure as the server you run it on and the certificate you use for transport.

    Exactly. Which is why in the very comment you quoted I said:

    There is a security benefit, and the benefit is trusting the client software more than a server, especially if shared.

    Are you trusting your Nextcloud instance (yours or hosted by someone else) not to get pwned / the server not to be seized or physically accessed / etc. more than you trust Proton not to get pwned? Then *DAV tools might be for you.


  • Why would anyone be interested in efforts on a platform with a closed-source backend and that is not developer focused?

    Because most people don’t care about those particular things. Almost the whole world uses completely proprietary tools (Gmail) that also violate your privacy.

    Not to mention, it’s entirely unnecessary that you should have to use a bridge gateway in the first place with IMAPS & PGP/GPG, CalDav & CardDav. Like I said, Proton is engaged in some questionable practices.

    It’s not unnecessary; it’s the result of a technical choice. A winning technical choice, actually. PGP has a negligible user base, while Proton already has 100 million accounts. I would be surprised if there were 10 million people actually using PGP. They sacrificed the flexibility and composability of tools (which almost always results in complexity) and made an opinionated solution that works well enough for the mainstream population, which has no interest in picking its tools and simply expects a Gmail-like experience.

    And if you really have stringent requirements, they provide the bridge anyway, so you can have that flexibility if it’s really important to you.

    IMAPS & PGP/GPG, CalDav & CardDav

    • IMAPS is just IMAP over TLS, so it has nothing to do with e2ee in this context.
    • PGP/GPG is what they use. They just made a tool that is opinionated and just works, rather than one which is more flexible but also more complex. Good choice? Bad choice? It’s a choice.
    • *DAV clients expect cleartext data on the server. If you encrypt the data, you need to build all that logic into the clients, and you are no longer following the standard, which means you will be bound to your client anyway (and those that implement compatibility). Proton decided they wanted to implement an e2ee calendar, and they decided to roll their own thing. It’s up to everyone to decide whether e2ee is a more important feature than interoperability with other tools. I don’t care about interoperability, for example, and I’d take e2ee over it.

  • Proton stores your keys

    Proton stores an encrypted blob.

    All they need now is your decryption password & they can read your messages

    “All they need now is your private key”. It’s literally a secret; they use bcrypt and then encrypt it. Also, “they” are generally not in the threat model. “They” can serve you JS that simply exfiltrates your email: the emails are displayed in their web app, so they have no need to steal your password to decrypt your key and read your email…

    It isn’t transparent, because most users aren’t running their own frontend locally and tracking all the source code changes.

    We probably disagree about what “transparent” means in this context. What I mean is that the average user will not perform any PGP operation, in general. Encryption happens transparently for them, which is the whole point of Proton: make encryption easy and the default.

    Now you’re merely trusting them to not send you a custom JS payload to have your decryption password sent to the server.

    Again, as I said before, they control the JS; they can get the decrypted data without getting the password…? You always trust your client tooling. There is always a point where I trust someone, be it the Enigmail maintainers, the Thunderbird maintainers (it has access to messages post-decryption!), the CLI tool of choice, etc.

    How many users are actually utilizing their hidden API to ensure that decryption/encryption is only done client-side?

    I mean, their clients are open-source and have also been audited?

    If they have your private key, how many users do you think are using long enough passwords to make cracking their password more challenging?

    I don’t know. But here we are talking about a different risk: someone compromising Proton, getting your encrypted private key, and then brute-forcing bcrypt-hashed-and-salted passwords. I find that risk acceptable.

    This is just entirely inaccurate and you’ve failed to provide any “proof” for your generalizations here.

    See other post.

    If you actually understood PGP you’d know you can generate and use local-only keys with IMAPS and have support to use any IMAP client.

    Care to share a practical example/link, and explain how exactly this means not having a fat client that does the encryption/decryption for you?

    There is no security benefit in their implementation other than to lock you into a walled garden and give you a false sense of security.

    Right, because *DAV protocols are so secure. They all support e2ee, right…? There is a security benefit, and the benefit is trusting the client software more than a server, especially a shared one. You can export data and migrate easily whenever you want, so it’s really a matter of preference.


  • There are certain things that are known facts; there is no need to prove them every time.

    The simple fact that:

    • There is no standard tool in common use
    • The number of people who use PGP is ridiculously low, including within tech circles. To give one example, even a famous cryptographer such as Filippo Valsorda (FiloSottile) mentions receiving maybe a couple of PGP-encrypted emails a year. I work in security and I have never received one, nobody among my colleagues has a public key to use, and I have never seen anybody who was not a tech professional use PGP.

    You can also see:

    We can’t say this any better than Ted Unangst: “There was a PGP usability study conducted a few years ago where a group of technical people were placed in a room with a computer and asked to set up PGP. Two hours later, they were never seen or heard from again.” If you’d like empirical data of your own to back this up, here’s an experiment you can run: find an immigration lawyer and talk them through the process of getting Signal working on their phone. You probably don’t suddenly smell burning toast. Now try doing that with PGP.

    A recent talk; I will quote the preamble:

    Although OpenPGP is widely considered hard to use, overcomplicated, and the stuff of nerds, our prior experience working on another OpenPGP implementation suggested that the OpenPGP standard is actually pretty good, but the tooling needs improvement.

    And you can find as many opinion pieces as you want, by just searching (for example: https://nullprogram.com/blog/2017/03/12/).

    However, if you really believe I am wrong, and you disagree that PGP tooling is widely considered bad, complex and almost a meme in the security community, you are welcome to show where I am wrong. Show me a simple PGP setup that non-technical people use.

    P.S.

    I also found https://arxiv.org/pdf/1510.08555.pdf, an interesting paper which is a follow-up to another paper from 10 years earlier about the usability of PGP tools.




  • It’s actually fairly simple: if the server never has access to the keys or the plaintext of messages (or calendar events, etc.), then you need a client tool to handle decryption and encryption operations.

    They use PGP, and they implemented this feature in a way that is completely transparent to the user, to make it mainstream. So they chose to build dedicated tools (bridge, web client) rather than letting users use their own tools, because PGP tooling sucks hard and is extremely inaccessible for the general population.

    This means that you need a fat client whatever you do, or otherwise the server has access to the data and there is no e2ee. Instead of using Enigmail or other PGP plugins/tools, they built the bridge.





  • I struggled with this for a long time, and then I just decided to use Synology Photos.

    It has albums, tagging, geolocation, and sharing. It has phone picture backup, and it is inherently a backup since it’s on my NAS and I back that data up again.

    I want to keep the thing I really care about the most friction-free, and also not too dependent on myself, so that I can still experiment.

    I didn’t try PiGallery2 though; maybe I will have a look!




  • sudneo@lemmy.world to Selfhosted@lemmy.world: Docker or podman?

    I really thought swarm was dead :)

    To be honest, some Kubernetes distributions keep cluster operations minimal (I use k0s managed via Ansible)!

    Either way, the moment you go from N containers on one box to N containers on M boxes, you need to start considering how to handle stateful applications, load balancing, etc. And that generally requires knowledge of a domain different from simply having applications wrapped in containers locally.


  • Yeah, ultimately every container has its own veth interface, so you can do shaping using tc on those.

    Edit: I had a look at docker-tc. It does what you want, BUT: unless your use case is complex, I would really think twice about running a tool written in Bash that has access to the Docker socket (i.e., trivial node escape) and runs with the NET_ADMIN capability.

    That’s a lot of power for something you can also do with a few lines of code executed after you start the container (see the sketch below). Again, provided that your use case is not complex.
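
    For instance, a minimal sketch of those few lines, assuming a running container named “capped” and the usual /sys layout for resolving the host-side veth peer (the name and the 10mbit rate are placeholders):

    # Resolve the host-side veth peer of the container's eth0.
    PID=$(docker inspect -f '{{.State.Pid}}' capped)
    IDX=$(nsenter -t "$PID" -n cat /sys/class/net/eth0/iflink)
    VETH=$(grep -l "^${IDX}$" /sys/class/net/veth*/ifindex | awk -F/ '{print $5}')
    # Shape traffic on that veth with a token bucket filter
    # (on the root qdisc this limits the host-to-container direction).
    tc qdisc add dev "$VETH" root tbf rate 10mbit burst 32kbit latency 400ms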


  • Cgroups have the ability to limit TCP and total network bandwidth. I don’t know off the top of my head whether this can be configured at runtime (i.e., via docker run), but you can specify the cgroup parent to use at runtime. This means you can pre-create the cgroup, set the limits, and start the container with that parent cgroup (see the sketch below).

    You can also run some hook script that adds the PID to a cgroup every time the container is launched, or possibly use tc.

    I am not aware of the ability to only limit uplink bandwidth, but I have not researched this.
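
    A rough sketch of the pre-created-cgroup approach, assuming a cgroup v1 host with the net_cls controller (cgroup-v2-only hosts lack it and need a different mechanism); the cgroup name, classid, interface, and rate are placeholders:

    # Pre-create a net_cls cgroup and tag its traffic with a classid
    # (0x10010 corresponds to 1:10 in tc notation).
    mkdir -p /sys/fs/cgroup/net_cls/limited
    echo 0x10010 > /sys/fs/cgroup/net_cls/limited/net_cls.classid
    # Start the container under that parent cgroup.
    docker run -d --cgroup-parent=/limited --name capped nginx
    # Rate-limit packets carrying that classid on the egress interface.
    tc qdisc add dev eth0 root handle 1: htb
    tc class add dev eth0 parent 1: classid 1:10 htb rate 5mbit
    tc filter add dev eth0 parent 1: protocol ip prio 10 handle 1: cgroup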




  • sudneo@lemmy.world to Selfhosted@lemmy.world: Docker or podman?

    You have a bunch of options:

    kubectl run $NAME --image=$IMAGE
    

    this just creates a pod running the specified image. If you kill the pod, or it terminates, it won’t be run again. In general, though, you probably want to do some customization before running it (maybe you need volumes, secrets, env, ports, labels, securityContext, etc.), and for that you can simply let kubectl generate the boilerplate YAML and then make some edits:

    kubectl run $NAME --image=$IMAGE --dry-run=client -o yaml > mypod.yaml
    # edit mypod.yaml
    kubectl create -f mypod.yaml
    

    You can do the same with a deployment or statefulset:

    kubectl create deployment $NAME -n $NAMESPACE [...] --dry-run=client -o yaml > deployment.yaml
    

    In case you don’t need anything fancy, the kubectl create subcommand allows you to create simple workloads, so that’s probably the answer to your question.
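
    For example (the name and image are placeholders):

    # A simple two-replica Deployment, straight from the CLI.
    kubectl create deployment web --image=nginx --replicas=2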