Every political thread is chock full of people being angry and unreasonable. I did some data mining, and it turns out most of the hate comes from a very small percentage of the community, while the rest of the community is very consistent in downvoting them.

The problem is that even with human moderators enforcing a series of rules, most of those people are still in the comments making things miserable. So I made a bot to do it instead.

!santabot@slrpnk.net is a bot that uses an algorithm similar to PageRank to analyze the Lemmy community, and it preemptively bans the roughly 1-2% of posters who consistently get a negative reaction. Take a look at an example of the early results. See how nice that is? It’s just people talking, and when they disagree, they say things like “clearly that part is wrong” and “your additions are good information though.”
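For anyone curious what a PageRank-style reputation score over a vote graph might look like, here is a minimal sketch. Everything in it is an assumption on my part: the `rank_users` function, the damping value, and the idea of using upvotes as edges are all illustrative, and the actual santabot algorithm isn’t shown in this thread.

```python
# Hypothetical sketch: power-iteration PageRank over an upvote graph.
# Users who receive upvotes from well-regarded users end up with higher
# scores; the real santabot algorithm may differ substantially.

def rank_users(upvotes, damping=0.85, iterations=50):
    """upvotes: dict mapping each voter to the list of users they upvoted.

    Returns a dict of user -> score, where all scores sum to 1.
    """
    users = set(upvotes)
    for targets in upvotes.values():
        users.update(targets)
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iterations):
        # Every user gets a small baseline share, like classic PageRank.
        new = {u: (1 - damping) / n for u in users}
        for voter, targets in upvotes.items():
            if targets:
                # A voter's influence is split evenly among who they upvote.
                share = damping * rank[voter] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Voter with no upvotes given: spread their weight evenly.
                for u in users:
                    new[u] += damping * rank[voter] / n
        rank = new
    return rank
```

A moderation bot could then treat the bottom percentile or two of these scores as candidates for action; the threshold would be a tuning choice, not anything dictated by the algorithm itself.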

It’s too early to tell how well it will work on a larger scale, but I’m hopeful. So, welcome to my experiment. Let’s talk politics without all the abusive people coming into the picture. Please come in and help test whether this thing can work in the long run.

Pleasant Politics

!pleasantpolitics@slrpnk.net

  • auk@slrpnk.netOP · 6 months ago

    I made this system because I, also, was concerned about the macro social implications.

    Right now, the model in most communities is banning people with unpopular political opinions or who are uncivil. Anyone else can come in and do whatever they like, even if a big majority of the community has decided they’re doing more harm than good. Furthermore, when certain things get too unpleasant to deal with on any level anymore, big instances will defederate from each other completely. The macro social implications of that on the community are exactly why I want to try a different model, because that one doesn’t seem very good.

    You seem to be convinced ahead of time that this system is going to censor opposing views, ignoring everything I’ve done to address the concern and indicate that it is a valid concern. Your concern is noted. If you see it censoring any opposing views, please let me know, because I don’t want it to do that either.

    • Madison420@lemmy.world · 5 months ago

      You’ve created the lizard lounge from Reddit, dude. You’re basically limiting a sub to power users and saying it’s a good thing. It’s not.

    • archomrade [he/him]@midwest.social · 6 months ago

      Right now, the model in most communities is banning people with unpopular political opinions or who are uncivil. Anyone else can come in and do whatever they like, even if a big majority of the community has decided they’re doing more harm than good.

      You don’t need a social credit tracking system to auto-ban users if a big majority of the community recognizes the user as problematic: you could manually ban them, or use a ban voting system, or use the bot to flag potentially problematic users to assist with manual-ban determinations, or hand out automated warnings… Especially if you’re only looking at 1-2% of users being problematic, is that really so many that you can’t review them independently?
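(To make the flag-for-review alternative concrete, here is one possible sketch. The `flag_for_review` function, the score values, and the percentile threshold are all hypothetical, assuming the bot already produces some numeric score per user.)

```python
# Hypothetical sketch of "flag, don't auto-ban": instead of banning the
# lowest-scoring users outright, surface them to human moderators.

def flag_for_review(scores, percentile=2):
    """Return the lowest-scoring `percentile`% of users for manual review.

    scores: dict mapping user -> numeric reputation score.
    """
    ranked = sorted(scores, key=scores.get)
    n = max(1, len(ranked) * percentile // 100)
    return ranked[:n]
```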

      Users behave differently in different communities… Preemptively banning someone for activity in another community is already problematic, because it assumes they’d behave the same way here, but now it’s for activity that is ill-defined and aggregated over many hundreds or thousands of comments. There’s a reason each community has its rules clearly spelled out in the sidebar: they each have different expectations, and users need those expectations spelled out if they’re to have any chance of following them.

      I’m sure your ranking system is genius and perfectly tuned to the type of user you find the most problematic - your data analysis genius is noted. The problem with automated ranking systems isn’t that they’re bad at what they claim to be doing, it’s that they’re undemocratic and dehumanizing and provide little recourse for error, and when applied at large scales those problems become amplified and systemic.

      You seem to be convinced ahead of time that this system is going to censor opposing views, ignoring everything I’ve done to address the concern and indicate that it is a valid concern.

      That isn’t my concern with your implementation. It’s that it limits the ability to defend opposing views when they occur. Consensus views don’t need to be defended against aggressive opposition, because they’re already presumed to be true; a dissenting view will nearly always be met with hostile opposition (especially when it regards a charged political topic), and by penalizing defenses of those positions you allow consensus views to remain unopposed. I don’t particularly care to defend my own record, but since you provided them, it’s worth pointing out that all of the penalized examples you listed from my account were in response to hostile opposition and character accusations. The positively ranked comments were within the consensus view (like you said), so of course they rank positively. I’m also tickled that one of them was a comment critiquing exactly the kind of arbitrary moderation policy you’re defending now.

      If you see it censoring any opposing views, please let me know, because I don’t want it to do that either.

      Even if I wasn’t on the ban list and could see it I wouldn’t have any interest in critiquing its ban choices because that isn’t the problem I have with it.