• 9 Posts
  • 117 Comments
Joined 3 years ago
Cake day: April 1st, 2022




  • Haha they thought it was too easy and were proven wrong!

    Honestly, if a place is obscure enough, even small barriers to entry help, like forums that don’t let you post on important boards until you build a reputation. There’s only so much effort an adversary is willing to put in, and if there isn’t a financial incentive or a huge political incentive, that barrier can stay low.
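    For illustration, a minimal sketch of the kind of reputation gate I mean (the thresholds and names here are hypothetical, not from any particular forum software):

    ```python
    from dataclasses import dataclass

    @dataclass
    class User:
        post_count: int
        account_age_days: int

    @dataclass
    class Board:
        restricted: bool  # an "important" board new accounts can't post on

    MIN_POSTS = 20            # hypothetical threshold
    MIN_ACCOUNT_AGE_DAYS = 7  # hypothetical threshold

    def can_post(user: User, board: Board) -> bool:
        """Gate important boards behind a small amount of built-up reputation."""
        if not board.restricted:
            return True
        # Cheap heuristics that raise the cost of drive-by spam without
        # inconveniencing established members.
        return (user.post_count >= MIN_POSTS
                and user.account_age_days >= MIN_ACCOUNT_AGE_DAYS)
    ```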




  • (edit: I accidentally skipped a word and didn’t realize you wrote ‘auto-report instead of deleting them’. Take the following with a grain of salt.)

    I’ve played (briefly) with automated moderation bots on forums, and the main thing stopping me from going much past known-bad profiles (e.g. profiles that visited the site from a literal spamlist) is not just false positives but malicious abuse. I wanted to add a feature that would immediately censor an image behind a warning if it was reported as (say) porn, shock imagery, or other extreme content, but if a user noticed this, they could file false reports to censor legitimate content until a staff member dismissed the report.

    Could an external brigade of trolls get legitimate users banned, or their posts hidden, just by gaming your bot? That’s a serious issue: it could get real users’ work deleted, and in my experience, users take that very personally.
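    As a rough sketch of the kind of mitigation I had in mind (everything here is hypothetical: the thresholds, the trust weighting, the names), the idea is that reports from new or previously unreliable accounts count for less:

    ```python
    from dataclasses import dataclass, field

    HIDE_THRESHOLD = 3.0  # hypothetical weighted-report threshold

    @dataclass
    class Reporter:
        account_age_days: int
        dismissed_reports: int  # how many of their past reports staff rejected

    def report_weight(r: Reporter) -> float:
        # New accounts and serial false-reporters count for less, so a brigade
        # of throwaways can't instantly censor a post.
        weight = 1.0
        if r.account_age_days < 7:
            weight *= 0.25
        return weight / (1 + r.dismissed_reports)

    @dataclass
    class Post:
        reports: list = field(default_factory=list)
        hidden_pending_review: bool = False

    def handle_report(post: Post, reporter: Reporter) -> None:
        post.reports.append(reporter)
        # Hide behind a warning only once enough *weighted* reports accumulate;
        # a staff member still makes the final call.
        if sum(report_weight(r) for r in post.reports) >= HIDE_THRESHOLD:
            post.hidden_pending_review = True
    ```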





  • I didn’t even think of dual cards, because I have an old, budget motherboard with one slot. But two 16 GB GPUs plus a new motherboard (and, if necessary, a new CPU) and PSU might still come out cheaper for me than a 24 GB NVIDIA card. Of course, I’d have to explore the trade-offs in detail, because I’ve never looked into how dual-card setups work.

    (But truth be told, I could just as easily settle for a single 16 GB card if I were confident it could train, even if slowly, AuraFlow or FLEX LoRAs for the upcoming Pony v7 model. It’s just a hobby.)
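    For what it’s worth, checking what a dual-card setup actually exposes is easy with PyTorch (a sketch, assuming both cards are visible to CUDA; whether a given LoRA trainer can actually split work across them is a separate question):

    ```python
    import torch

    # Two 16 GB cards show up as two separate devices, not one 32 GB pool,
    # so the training code must support multi-GPU (e.g. data parallelism)
    # to benefit from the second card.
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"cuda:{i} {props.name} {props.total_memory / 2**30:.1f} GiB")
    ```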


  • Good call-out. My (naïve) understanding is that low-VRAM workarounds like VAE tiling, and lowering the step count even in the more stable samplers, generally have a negative impact on the result, and that a very similar image with better detail could be regenerated with similar settings on better hardware. Maybe that’s a bit idealistic. Like you said, the same seed usually produces a different image at a different size. (You said ‘usually’: is there a way to minimize this?)

    edit: I’m aware ‘better’ and ‘higher quality’ are vague, even subjective, terms. But I’m trying to convey something beyond merely higher resolution.
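    For concreteness, this is roughly what I mean by the low-VRAM workarounds, sketched with Hugging Face diffusers (the model name, prompt, and settings are example assumptions on my part, not anything from this thread):

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example model, not a recommendation
        torch_dtype=torch.float16,
    ).to("cuda")

    # VAE tiling decodes the latent in tiles so the decode fits in low VRAM,
    # at the cost of possible artifacts near tile seams.
    pipe.enable_vae_tiling()

    image = pipe(
        "a lighthouse at dusk",
        num_inference_steps=20,  # lowered step count: faster, usually less detail
        width=512, height=512,   # the same seed at another size usually gives a different image
        generator=torch.Generator("cuda").manual_seed(1234),
    ).images[0]
    image.save("out.png")
    ```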




  • Also consider not having an economy where our jobs dominate our lives.

    There are plenty of studies, videos, and anecdotes discussing how, despite technology becoming more and more efficient, we work more hours a day in the industrial era than people did before it. Most of the older culture we consider traditional didn’t come from the media industries we see today; it came from families and communities having enough time together to create and share art and other media relevant to their own lives.


  • (although given the decentralised framework of the fedi, I’m not sure how that could even happen in the traditional sense).

    It’s possible to dominate and softly control a decentralized network, because it can centralize. As long as the average user doesn’t really care about those ideals (perhaps they’re only here for certain content, or to avoid a particular drawback of another platform), they may not bother to decentralize. And as long as a very popular instance doesn’t do anything so bad that its regular users leave all at once and it loses critical mass, it can gradually enshittify and enforce conditions on the instances connecting to it, or even defederate altogether and become a central platform.

    For a relevant but obviously different case study: before the reddit API exodus, there was a troll who would post shock images every day to attack lemmy.ml. Whenever an account was banned, they would simply register a new one on an instance that didn’t require accounts to be approved and continue trolling with barely any effort. Because of this, lemmy.ml began to defederate from any instance without a registration-approval system, telling them they would be re-added once a signup test was enabled.

    lemmy.ml was one of the core instances, rivaled in size only by lemmygrad.ml and wolfballs (wolfballs was defederated by most other instances, and lemmygrad.ml by many of the big ones), so an instance that couldn’t federate with lemmy.ml would, at the time, miss out on most of the activity. lemmy.ml thus effectively pressured a policy change onto other instances, albeit an overall beneficial change that made trolling harder, and one made in its own self-defence. One could imagine a malevolent large instance doing something similar if it grew to dominate the network. This is the kind of EEE fear many here have over Threads and other attempts at moving large (anti-)social networks into the Fediverse.