• 0 Posts
  • 56 Comments
Joined 1 year ago
Cake day: July 13th, 2023






  • That’s a fundamental misunderstanding of how diffusion models work. These models extract concepts and can effortlessly combine them into new images.

    If it learns woman + crown = queen

    and queen - woman + man = king

    it is able to combine any such concept together
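    The concept arithmetic above can be sketched as vector arithmetic in an embedding space. The toy vectors below are made up for illustration; real models learn high-dimensional embeddings, but the idea is the same:

```python
import numpy as np

# Toy 3-dimensional "concept" embeddings (invented for illustration only;
# real diffusion/text models learn these from data).
emb = {
    "woman": np.array([1.0, 0.0, 0.0]),
    "man":   np.array([0.0, 1.0, 0.0]),
    "crown": np.array([0.0, 0.0, 1.0]),
}

# woman + crown = queen
emb["queen"] = emb["woman"] + emb["crown"]

# queen - woman + man = king
king = emb["queen"] - emb["woman"] + emb["man"]
print(king)  # → [0. 1. 1.], i.e. man + crown
```

    Because the concepts live in a shared space, any learned concept can be composed with any other, which is the point the comment is making: the combination doesn’t have to appear in the training data.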

    As Stability has noted, any model that has the concept of “naked” and the concept of “child” in it can be used like this. They tried to remove nudity for Stable Diffusion 2 and nobody used it.

    Nobody trained these models on CSAM, and the problem is a dilemma in the same way a knife is a dilemma. We all know a malicious person can use a knife for murder, including of children. Yet society has decided that knives have sufficient other uses that we still allow their sale pretty much everywhere.




  • Game industry professional here: We know Riccitiello. He presided over EA at critical transition periods and failed them. Under his tenure, Steam won total supremacy because he was trying to shift people to pay-per-install / slide-your-credit-card-to-reload-your-gun schemes. Yes, his predecessor jumped the shark by publishing the Orange Box, but Riccitiello’s greed sealed the total failure of the largest company to deal with digital distribution, by ignoring that gamers loved collecting boxes (something Valve understood and eventually turned into the massive Sale business, where people buy many more games than they consume).

    He presided over EA earlier than that too, and failed.

    Both times, he ended up getting sacked after the stock reached a record low. But personally he made out like a bandit, selling EA his own investment in VG Holdings (BioWare/Pandemic) after becoming their CEO.

    He’s the kind of CEO a board of directors would appoint to loot a company.

    At Unity, he invested heavily in ads and gambled on being able to become another landlord. He also probably paid good money for reputation management (search for Riccitiello or even his full name on Google and marvel at the results) after certain accusations were made.




  • None of that really works anymore in the age of AI inpainting. Hash matching / perceptual hashing worked well before, but the people doing this are specifically interested in causing destruction and chaos with this content. They don’t need it to be authentic to do that.
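    For context, here is a minimal sketch of an average-hash (aHash) style perceptual hash, assuming an image already downscaled to an 8x8 grayscale grid. Production systems are far more robust, but the principle is the same: near-duplicates produce hashes a small Hamming distance apart, while a heavy AI-driven edit lands far away, which is why inpainting defeats this kind of matching.

```python
def average_hash(pixels):
    # pixels: 64 grayscale values (0-255); bit = 1 if above the mean
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming(a, b):
    # number of differing bits between two hashes
    return sum(x != y for x, y in zip(a, b))

original = [(i * 4) % 256 for i in range(64)]       # synthetic gradient "image"
tweaked = [min(255, p + 3) for p in original]       # slight brightness change
repainted = [255 - p for p in original]             # heavy inpainting-style edit

print(hamming(average_hash(original), average_hash(tweaked)))    # 0  (survives minor edits)
print(hamming(average_hash(original), average_hash(repainted)))  # 64 (heavy edit escapes the hash)
```

    The tweaked image still matches (distance 0), but the heavily altered one doesn’t, so a motivated adversary only has to change the content enough, which inpainting makes trivial.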

    It’s a problem that requires AI on the defensive side, but even that is just going to be an eternal arms race. This problem cannot be solved with technology, only mitigated.

    The ability to exchange hashes on moderation actions against content may offer a way out, but it will change the decentralized nature of everything - basically bringing us back to the early days of Usenet, the Usenet Death Penalty, etc.





  • I think at this point we are arguing belief.

    I actually work with this stuff daily, and there are a number of 30B models that exceed ChatGPT for specific tasks such as coding or content generation, especially when enhanced with a LoRA.

    airoboros-33b1gpt4-1.4.SuperHOT-8k, for example, comfortably outputs > 10 tokens/s on a 3090 and beats GPT-3.5 at writing stories, probably because it’s uncensored. It’s also got 8k context instead of 4k.

    Several recent Llama 2 based models exceed ChatGPT on coding and classification tasks and are approaching GPT-4 territory. Google Bard has already been clobbered into a pulp.

    The speed of advances is stunning.

    M-series Macs can run large LLMs via llama.cpp thanks to their unified memory architecture - in fact a recent MacBook with 64GB of unified memory can comfortably run most models just fine. Even notebook AMD GPUs with shared memory have started running generative AI in the last week.
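    The memory side is simple arithmetic. A back-of-the-envelope sketch (ignoring KV-cache and runtime overhead, so treat these as lower bounds) shows why a 33B model fits on a 24 GB 3090 at 4-bit quantization but wants a 64 GB unified-memory machine at full 16-bit precision:

```python
def model_size_gb(params_billion, bits_per_weight):
    # Weights only: params * bits / 8 bytes, reported in decimal GB.
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"33B @ {bits}-bit: ~{model_size_gb(33, bits):.1f} GB")
# 33B @ 16-bit: ~66.0 GB  (needs unified memory / multi-GPU)
# 33B @ 8-bit:  ~33.0 GB
# 33B @ 4-bit:  ~16.5 GB  (fits in a 24 GB 3090, with room for context)
```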

    You can follow along at chat.lmsys.org. Open source LLMs are only a few months old but have already started encroaching on the proprietary leaders, who have years of head start.




  • You are answering a question with a different question. LLMs don’t make pictures of your mom. And this particular question? One that has roughly existed for as long as Photoshop has.

    It just gets easier every year. It was already easy. You could already pay someone 15 bucks on Fiverr to do all of that, and have been able to for years.

    Nothing really new here.

    The technology is also easy. Matrix math. About as easy to ban as mp3 downloads. That never stopped anyone. It’s progress. You are a medieval knight asking to put gunpowder back into the box, but it’s clear it cannot be put back - it is already illegal to make non-consensual imagery, just as it is illegal to copy books. And yet printers and photocopiers exist.

    Let me be very clear - accepting the reality that the technology is out there, basic, easy to replicate, and on a million computers now is not disrespectful to victims of non-consensual imagery.

    You may not want to hear it, but just like with encryption, the only other choice society has is full surveillance of every computer to prevent people from doing “bad things”. Everything you complain about is already illegal and has already been possible - it just gets cheaper every year. What you want is protection from technological progress, because society sucks at dealing with its consequences.

    To be perfectly blunt, you don’t need to train any generative AI model for powerful deepfakes. You can use technology like Roop and ControlNet to synthesize any face onto any image from a single photograph. Training is not necessary.

    When you look at it that way, what point is there in trying to legislate training with these arguments? None.