cross-posted from: https://beehaw.org/post/6795142
Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material, according to researchers from Stanford University. In just two days, researchers found over 100 instances of known CSAM across over 325,000 posts on Mastodon. The researchers found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and grooming of minors. One Mastodon server was even taken down for a period of time due to CSAM being posted. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
Isn’t this bound to happen without built-in automated tools for flagging and moderation? Not quite sure how the federation handles this sort of thing besides community moderation and “say something if you see something.”
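For reference, the automated flagging the big platforms use is mostly hash matching against lists of known material (PhotoDNA, quoted further down, being the best-known example). PhotoDNA itself is proprietary and only available to vetted organizations, so here’s a minimal sketch of the general idea using the open-source imagehash library as a stand-in; the denylist file and the quarantine_and_report hook are hypothetical:

```python
# Sketch of hash-based media screening at upload time. PhotoDNA is
# proprietary, so this uses the open `imagehash` perceptual hash as a
# stand-in; the denylist file and moderation hook are hypothetical.
import imagehash
from PIL import Image

def load_denylist(path: str) -> list[imagehash.ImageHash]:
    # One hex-encoded hash per line. In practice these lists come from
    # vetted clearinghouses (e.g. NCMEC), not a flat file on disk.
    with open(path) as f:
        return [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]

def should_flag(image_path: str, denylist: list[imagehash.ImageHash],
                max_distance: int = 4) -> bool:
    # Perceptual hashes tolerate re-encoding and minor edits, unlike
    # cryptographic hashes, so we match within a small Hamming distance.
    h = imagehash.phash(Image.open(image_path))
    return any(h - bad <= max_distance for bad in denylist)

# Hypothetical use: screen each new media attachment before it federates out.
# denylist = load_denylist("known_hashes.txt")
# if should_flag("upload.jpg", denylist):
#     quarantine_and_report()  # hand off to human moderators / reporting
```

The hard part for the fediverse isn’t the matching itself but access to the hash lists, which are tightly controlled precisely so they can’t be used to evade detection; that’s difficult to reconcile with thousands of independently run servers.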
Very sensationalist headline.
If you read the paper, it’s mostly from one well-known Japanese instance whose content is mostly legal under Japanese law.
Where did you find the actual study? The link in the above article leads to https://purl.stanford.edu/vb515nd6874, which has an abstract, but I can’t see the study.
It links to a PDF with the full study.
In just two days, researchers found 112 instances of known CSAM across 325,000 posts
“We got more photoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,”
In the whole history of this group they have found fewer than 112 pieces of CSAM? It’s Stanford University. Why not drop in on a few of Jeffrey Epstein’s friends and fans? They can tell you where to look.
Yeah, literally. What a propaganda piece. Now do Twitter, or Facebook, or Instagram. Except due to the walled-garden effect of those platforms, the dangerous material probably isn’t viewable by just anyone. That doesn’t mean it’s not there, though.
I don’t think it’s a propaganda piece, as it even brings up ideas on how to do moderation better in the Fediverse. It seems a bit too constructive to just call it propaganda and move on.
Or Reddit. You know, the website where a community dedicated to sharing CSAM was one of the biggest on the site, and its lead moderator was a sitewide celebrity (oh, and Reddit’s current top admin was also a moderator of that community).
Fortunately, it’s all on Japanese instances that many others, like Mastodon.social, defederate from.
Going by the blurb posted, not the link: how can they demand more robust moderation and reporting tools when reporting something evidently took down the instance in question?
Who was the sponsor of this research, Zuck and Musk?
This is something I have worried about for a while. The core concept of the fediverse makes stuff like this really easy to do, and there’s not really a solution. I guess government agencies just need to be on the lookout for it?