Even if the answer is just a recommendation of a different group to ask in: how does Lemmy combat criminal activity and content like human trafficking, smuggling, terrorism, etc.?

Is it just a matter of each node banning users when it identifies a crime, and/or of problematic nodes being defederated if they tolerate it?

And if defederation is the mechanism, does that mean each node has to individually choose to defederate from the one allowing criminal activity?
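
(For anyone curious about the mechanics, here is a rough sketch of what per-instance defederation looks like in practice. Older Lemmy versions read a blocked-instances list from the lemmy.hjson config file, roughly as below; newer versions manage the same list from the admin settings UI. The hostnames are made-up examples.)

    # Excerpt from lemmy.hjson (older Lemmy config format; newer
    # versions expose this setting in the admin UI instead).
    federation: {
      enabled: true
      # Instances this node refuses to federate with. Every admin
      # maintains their own list, so defederation really is a
      # per-instance decision.
      blocked_instances: ["badinstance.example", "another.example"]
    }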

    • LibertyLizard@slrpnk.net · 10 months ago

      Well, at minimum, instance operators could find themselves in legal jeopardy if they don’t act against such content, depending on their local laws.

      Many people would also make a moral argument for the enforcement of certain laws, but I infer from your comment that you don’t agree with such ideas.

          • Kaboom@reddthat.com · 10 months ago

            For example, gun control often takes the form of “making it unreasonably hard for poor people to arm themselves.”

            • Atin@lemmy.world · 10 months ago

              Most policies make things unreasonably hard for poor people to do anything.

        • LibertyLizard@slrpnk.net · 10 months ago

          Regardless of how you feel about them, website operators must abide by these laws in most jurisdictions, and it would therefore be naive for Lemmy’s developers not to at least consider this issue.

          • conciselyverbose@kbin.social · 10 months ago

            There are reporting features. In most jurisdictions, accepting reports and acting on them is more than enough to meet any legal obligations, and many people consider scanning every message unnecessarily invasive.

            I don’t consider it invasive myself, and literally everything on here is public, so the situation isn’t identical, but look at the backlash to Apple’s proposed (otherwise privacy-preserving) CSAM scanning of cloud photo backups.
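
            (A minimal sketch of what “accepting reports and acting on them” can look like for a moderator. The endpoint paths and field names follow Lemmy’s v3 HTTP API as I recall it, and they have changed between versions, e.g. 0.19 moved auth into a Bearer header, so treat this as illustrative, not authoritative. The instance URL and token are placeholders.)

                # Python sketch: walk the unresolved comment-report queue
                # and mark each report handled.
                import requests

                INSTANCE = "https://example-instance.tld"  # hypothetical instance
                JWT = "..."                                # a moderator's login token

                # List unresolved comment reports (Lemmy v3 API, from memory).
                resp = requests.get(
                    f"{INSTANCE}/api/v3/comment/report/list",
                    params={"unresolved_only": "true"},
                    headers={"Authorization": f"Bearer {JWT}"},
                )
                for view in resp.json().get("comment_reports", []):
                    report_id = view["comment_report"]["id"]
                    # A real moderation flow would inspect the reported
                    # comment here and remove it / ban the author if
                    # warranted; this just resolves the report.
                    requests.put(
                        f"{INSTANCE}/api/v3/comment/report/resolve",
                        json={"report_id": report_id, "resolved": True},
                        headers={"Authorization": f"Bearer {JWT}"},
                    )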