Reddit does have a system to fight it.
Capable or not, a bad solution is better than no solution.
No, it’s not better. It’s extremely invasive, as you have to fingerprint users and store their fingerprints on your servers indefinitely. Not only that, but all of this can be avoided by anyone with half a brain cell. Lemmy should not waste its resources on something like this; it’s so hard to do that literally nobody has a good system, not even giants like LinkedIn. Source: I work in bot detection.
Lemmy would never get this right no matter how many people contributed, and it would just cause overall harm to the platform through privacy invasion and false positives.
Lemmy has quite a few unfortunately invasive qualities of its own, including generally needing an email address from you (Reddit does not), having poor privacy and data-retention practices, and generally being very messy about who gets to decide what happens with your data and how easily it can be scraped.
Sure, Reddit sells it… But Lemmy gives it to any web scraper for free.
Which is good. You either have an open system or a closed one. There’s no in-between.
If you want the advantages of a public, free, decentralized network, you can’t obfuscate and centralize bits and pieces of it. Also, it’s 2024; we need to stop spreading the misinformation that an email address is supposed to be private. What is private is the address’s association with its owner, and Lemmy doesn’t leak or infringe on that. It’s literally called an address because it’s supposed to be public.
…And attitudes like this towards privacy will keep Lemmy from progressing to a point where those issues will be fixed.
I have a fundamental problem with giant corporations scraping user data without user consent. That’s a system-level issue. It doesn’t become “good” just because they get to scrape without consent for free.
Nah, it has nothing to do with attitude but with practicality. This would mean people’s fingerprints need to be public and shared between servers, or some other hack. It’s just not possible to do safely, and it’s not really a hill worth dying on. Do we really care that much about users dodging subreddit bans? It’s silly.
What would a “fix” look like in your eyes? Do you have an implementation in mind?
I have a few suggestions for development concerns off the top of my head:
* either immediately or, to prevent spam, after some time
I agree with your first few points, but I’m unsure about the scraping. This is a public forum; what could be done to mitigate scraping that wouldn’t take away from that?
If we take “unlimited unauthenticated API access shouldn’t be possible” for granted, I’m unfortunately not technically competent enough to say what can be done next.
The first thing that comes to mind is treating website access and app access differently, maybe limiting app API access by default for people who haven’t logged in.
Or creating a separate bot API that’s rolled out across all servers at some point in the future… And I know federation could pose some serious chokepoints here so that’s where my speculation ends.
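To make the “limit unauthenticated API access” idea a bit more concrete, the simplest version is just a per-client request budget for calls that carry no auth token. Here’s a rough sketch in Rust; the struct name, thresholds, and the choice to key on client IP are all my own illustration, not anything Lemmy actually implements:

```rust
// Illustrative sketch only: a per-client request budget for unauthenticated
// API calls. Names, limits, and the use of IP as the key are assumptions,
// not Lemmy's actual design.
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct AnonLimiter {
    window: Duration,
    max_requests: u32,
    hits: HashMap<String, (Instant, u32)>, // client key -> (window start, count)
}

impl AnonLimiter {
    fn new(window: Duration, max_requests: u32) -> Self {
        Self { window, max_requests, hits: HashMap::new() }
    }

    /// Returns true if an unauthenticated request from `client` should be served.
    fn allow(&mut self, client: &str) -> bool {
        let now = Instant::now();
        let entry = self.hits.entry(client.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) > self.window {
            *entry = (now, 0); // window expired: start a fresh count
        }
        entry.1 += 1;
        entry.1 <= self.max_requests
    }
}

fn main() {
    // e.g. at most 60 unauthenticated requests per minute per client
    let mut limiter = AnonLimiter::new(Duration::from_secs(60), 60);
    for i in 0..65 {
        if !limiter.allow("203.0.113.7") {
            println!("request {i} throttled");
        }
    }
}
```

Logged-in users would skip this path entirely, which is more or less the “treat website and app access differently” part: clients with a session get full access, anonymous callers get a budget.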
Instances can enable or disable email verification and other measures, like asking why you want to join that instance.
I don’t recall Reddit being so liberal. I haven’t used an account I didn’t verify the same day, so I can’t say if it works, but I suspect they can enable different protocols for inspecting unverified accounts.
As a side note to that discussion: my VPN works with most services I can’t otherwise access, while Reddit blocked me when I tried to access it to see for myself. I’m surprised.
Keeping a list of “fingerprints” of users is hardly invasive, and it’s only dangerous without proper database security.
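To be concrete about what a “list of fingerprints” could mean in practice: it doesn’t have to contain raw user attributes at all. A minimal sketch of the general idea follows; every name in it is hypothetical, and a real deployment would use a keyed cryptographic hash (e.g. HMAC-SHA-256) rather than the non-cryptographic std hasher used here just to keep the example dependency-free:

```rust
// Illustrative sketch only: store a derived hash of fingerprint signals, not
// the raw signals themselves. Every name here is hypothetical, and DefaultHasher
// is used purely to keep the example std-only; a real system would want a keyed
// cryptographic hash (e.g. HMAC-SHA-256).
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

fn fingerprint(signals: &[&str], server_secret: &str) -> u64 {
    let mut h = DefaultHasher::new();
    server_secret.hash(&mut h); // mix in a per-server secret so hashes aren't portable
    for s in signals {
        s.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let secret = "per-server-secret";
    let mut flagged: HashSet<u64> = HashSet::new();

    // When a ban is issued, record only the derived hash, not the raw signals.
    flagged.insert(fingerprint(&["198.51.100.23", "Firefox/129", "UTC+2"], secret));

    // At signup, a matching hash marks the account for human review, not an auto-ban.
    let candidate = fingerprint(&["198.51.100.23", "Firefox/129", "UTC+2"], secret);
    println!("needs review: {}", flagged.contains(&candidate));
}
```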
It can throw up false positives, but the key is to make it as good at avoiding those as possible, and to have a reasonable means for users who feel they were unfairly tagged as evaders to appeal the flag.
Also, don’t do it automatically; use it as a tool to identify possible cases and have a review team check which ones need the most immediate action, with help from a separate algorithm that prioritizes user reports by how reliably a user’s past reports have flagged actionable content.
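The report-prioritization piece is the easiest to picture. A hedged sketch of one possible scoring rule (field names and weights are invented for illustration; this is not an actual Lemmy or Reddit formula):

```rust
// Illustrative sketch only: rank user reports for human review by how reliable
// the reporter has been. Field names and weights are invented for this example.
struct Report {
    reporter_actioned: u32, // this reporter's past reports that led to action
    reporter_total: u32,    // this reporter's total past reports
    duplicate_reports: u32, // how many other people reported the same content
}

/// Higher score = reviewed sooner by the review team.
fn priority(r: &Report) -> f64 {
    // +1 / +2 smoothing: a brand-new reporter starts at a neutral 0.5 reliability.
    let reliability = (r.reporter_actioned as f64 + 1.0) / (r.reporter_total as f64 + 2.0);
    // Pile-on reports raise priority, but with diminishing returns (log scale).
    reliability * (1.0 + (r.duplicate_reports as f64).ln_1p())
}

fn main() {
    let trusted = Report { reporter_actioned: 40, reporter_total: 50, duplicate_reports: 3 };
    let unknown = Report { reporter_actioned: 0, reporter_total: 0, duplicate_reports: 1 };
    println!("trusted: {:.2}, unknown: {:.2}", priority(&trusted), priority(&unknown));
}
```

The smoothing just keeps brand-new reporters at a neutral ~0.5 reliability instead of zero, so their reports still get looked at.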
That’s the entire game of security: not being perfect, but being good enough that the adversary decides you might as well be perfect for all the effort it would cost them. Ban evasion protection and bot prevention are no different.
Yes, and “good enough” is so hard to reach that it is in no way accomplishable with Lemmy’s volunteer resources. We literally have full-time people and massive AI-driven systems doing this professionally. It’s in no way achievable on Lemmy if even centralized Reddit, with multi-million-dollar budgets, can’t get close to “good enough”.
TBF, Reddit isn’t exactly trying all that hard, since ban evaders tend to be good for engagement metrics. Like half the measures they do employ, they only do because they feel they have to in order to not look like they blatantly don’t give a shit so long as the investor-watched metrics keep going up.
Lemmy has a system whereby admins talk to each other and share details of ban evaders, but different instances decide what is a bannable offence and not all of the 1000+ instances are involved.