• spiderman · 4 months ago

    The point is they could have fixed it when it was first reported instead of waiting around until the issue blew up.

    • sudneo@lemm.ee · 4 months ago

      A security company should prioritize investments (i.e., development time) based on a threat model and risk management, not on what random people think.

      • spiderman · 4 months ago (edited)

        So are you saying that wasn’t a security risk?

        • sudneo@lemm.ee · 4 months ago

          I am saying that, given the existing risks, effort should be put on the ones most relevant to the threat model you intend to assume.

          In fact, the “fix” they are providing doesn’t change much, simply because on single-user machines there is essentially no difference between compromising your user (e.g., physical access, unknowingly installing malware) and compromising the whole box (with root/admin access).

          On Windows it’s not going to have any impact at all (due to how this API is implemented); on Linux/Mac it adds a little complexity to the exploit. Once your user is compromised, your password (which is what protects the keychain) is going to be compromised very easily via local phishing (e.g., a fake graphical prompt, a fake sudo prompt) or other techniques. Sometimes stealing the password might not even be necessary: if you run signal-desktop yourself and you own the binary, an attacker with your user’s privileges can simply patch, modify, or replace the binary. So then you need other controls, like signing the binary and configuring accepted keys (this is possible and somewhat common on Mac), or something that otherwise relies on external trust (the root user, a remote server, etc.).
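          The “external trust” idea above can be sketched as a simple integrity check: before trusting a binary, compare it against a digest stored somewhere the attacker (who only has your user’s privileges) can’t rewrite, such as a root-owned config. This is a hypothetical illustration, not Signal’s actual mechanism; the real-world macOS equivalent is code signing via `codesign`.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def binary_is_trusted(path: str, trusted_digest: str) -> bool:
    """Compare the on-disk binary against a pinned digest.

    The digest must be anchored in external trust (e.g. a root-owned
    file or a remote server); if it lives in a user-writable location,
    an attacker with user privileges can just update it alongside the
    patched binary, and the check proves nothing.
    """
    return sha256_of(path) == trusted_digest
```

          The key design point is where the trusted digest lives, not the hashing itself: the whole check is only as strong as the thing the attacker cannot modify.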

          So my point is: if their threat model already assumed that once your client device is compromised your data is not protected, it doesn’t make much sense to spend effort shaving 10–20% off the risk of that happening; it’s better to focus on other work that might be more impactful.