• spiderman

        the point is they could have fixed it when it was first reported, instead of waiting around until the issue blew up.

        • sudneo@lemm.ee

          A security company should prioritize investments (i.e., development time) based on a threat model and risk management, not on what random people think.

          • spiderman

            so are you saying that wasn’t a security risk?

            • sudneo@lemm.ee

              I am saying that, based on the existing risks, effort should be put into the ones most relevant to the threat model you intend to assume.

              In fact, the “fix” they are providing doesn’t change much, simply because on single-user machines there is borderline no difference between compromising your user (e.g., physical access, you unknowingly installing malware) and compromising the whole box (root/admin access).

              On Windows it’s not going to have any impact at all (due to how this API is implemented); on Linux/Mac it adds a little complexity to the exploit. Once your user is compromised, your password (which is what protects the keychain) can be obtained very easily via internal phishing (a fake graphical prompt, a fake sudo prompt, etc.) or other techniques. Sometimes that isn’t even necessary: for example, if you run signal-desktop yourself and own the binary, an attacker with local privileges can simply patch/modify/replace the binary. Protecting against that requires other controls, like signing the binary and configuring accepted keys (possible and somewhat common on Mac), or something else that relies on external trust (root user, remote server, etc.).
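
              For instance, here is a minimal sketch of the Windows point, assuming the new backend is DPAPI-style user-scoped encryption (which is what Electron’s safeStorage wraps on Windows). Decryption asks for no password and shows no prompt; it only requires code running as the same user, so any malware under your account can recover the key exactly the way the app does:

```python
# Sketch, Windows-only: round-trip a secret through DPAPI
# (CryptProtectData / CryptUnprotectData) from plain Python.
# Note there is no credential anywhere below; the user's logon
# session alone is enough to decrypt.
import ctypes
import ctypes.wintypes as wt

class DATA_BLOB(ctypes.Structure):
    _fields_ = [("cbData", wt.DWORD),
                ("pbData", ctypes.POINTER(ctypes.c_char))]

crypt32 = ctypes.windll.crypt32
kernel32 = ctypes.windll.kernel32

def _take(blob: DATA_BLOB) -> bytes:
    # Copy DPAPI's LocalAlloc'd output buffer, then free it.
    data = ctypes.string_at(blob.pbData, blob.cbData)
    kernel32.LocalFree(blob.pbData)
    return data

def protect(plaintext: bytes) -> bytes:
    buf = ctypes.create_string_buffer(plaintext, len(plaintext))
    blob_in = DATA_BLOB(len(plaintext),
                        ctypes.cast(buf, ctypes.POINTER(ctypes.c_char)))
    blob_out = DATA_BLOB()
    if not crypt32.CryptProtectData(ctypes.byref(blob_in), None, None,
                                    None, None, 0, ctypes.byref(blob_out)):
        raise ctypes.WinError()
    return _take(blob_out)

def unprotect(ciphertext: bytes) -> bytes:
    buf = ctypes.create_string_buffer(ciphertext, len(ciphertext))
    blob_in = DATA_BLOB(len(ciphertext),
                        ctypes.cast(buf, ctypes.POINTER(ctypes.c_char)))
    blob_out = DATA_BLOB()
    if not crypt32.CryptUnprotectData(ctypes.byref(blob_in), None, None,
                                      None, None, 0, ctypes.byref(blob_out)):
        raise ctypes.WinError()
    return _take(blob_out)

# Any process running as the same user can do this round trip.
print(unprotect(protect(b"signal db key")))
```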

              So my point is: if their threat model already assumed that your data is not protected once your client device is compromised, it doesn’t make much sense to spend effort shaving 10–20% off that risk; it makes more sense to focus on other work that might be more impactful.

          • sudneo@lemm.ee

            Privacy is not anonymity, though. Privacy simply means that private data is not disclosed to parties, or used for purposes, that the data owner doesn’t explicitly allow. Often, not collecting data is a way to ensure no misuse (and no compromise, hence security), but that’s not necessarily always the case.

            • Victor@lemmy.world

              Privacy simply means that private data is not disclosed or used

              Right, and for that to be the case, the transfer and storage of the data often need to be secure.

              I’m mostly just pointing out that when you write x ≠ y ≠ z, it can still be the case that x = z; e.g., 4 ≠ 3 ≠ 4, yet 4 = 4.

              Just nitpicking, perhaps.

        • wildbus8979@sh.itjust.works

          It’s better now, but for years and years all they used for contact discovery was simple hashing… the problem is that the input space (phone numbers) is very small, so it was easy to generate a rainbow table of all the phone-number hashes in a matter of hours. Then anyone with access to the hosts (either hackers, or the US state via AWS collaboration) had access to the entire social graph.
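
          To make that concrete, here’s a toy sketch (plain SHA-256 over a hypothetical number block, standing in for whatever hash the old scheme actually used; a rainbow table is just a space-optimized version of the same lookup idea). Because the input space is enumerable, every hash can be inverted:

```python
# Toy sketch: hashing doesn't hide phone numbers because the input
# space is enumerable. SHA-256 and the +1-415-555-xxxx block are
# stand-ins, not the scheme Signal actually used.
import hashlib

def h(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

# Enumerate one tiny block (10**4 numbers) for the demo; a whole
# country is only ~10**9-10**10 numbers, i.e. hours of CPU time.
prefix = "+1415555"
table = {h(prefix + f"{n:04d}"): prefix + f"{n:04d}" for n in range(10_000)}

leaked = h("+14155550123")      # what the server/attacker would see
print(table[leaked])            # -> +14155550123, number recovered
```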

          • 9tr6gyp3@lemmy.world

            Yeah, the way I remember it, they put a lot of effort into masking that social graph. That was a while back too, not recent.

            • wildbus8979@sh.itjust.works

              What I’m saying, though, is that for the longest time they didn’t, and when they changed the technique they hardly acknowledged that it had been a problem in the past and that essentially every user’s social graph had been compromised for years.

              • ᗪᗩᗰᑎ@lemmy.ml

                Signal, originally known as TextSecure, worked entirely over text messages when it first came out. It was born in a different era, and securing the communication data itself was the only immediate goal, because at the time messages were viewable by anyone with enough admin rights on basically every platform. Signal helped popularize end-to-end encryption (E2EE) and dragged everyone else along with them. Very few services at the time even advertised E2EE, private metadata, or social-graph privacy.

                As they’ve improved the platform, they’ve continued to make incremental changes that enhance security. This is not a flaw; this is how progress is made.