OpenAI released draft guidelines for how it wants the AI technology inside ChatGPT to behave—and revealed that it’s exploring how to ‘responsibly’ generate explicit content.
After experiencing Janitor AI and local models I’m certainly not coming back to Character AI. Why waste so much time trying to jailbreak a censored model when we have ones that just do as they’re told?
Janitor, like most “free” models, degrades too quickly for my liking. And if I’m going to pay, I might as well use NovelAI + SillyTavern, since they don’t have any restrictions on their text-gen models that could interfere with generation. I didn’t have much luck getting local models to run, and I suspect they’d be pretty slow anyway.
KoboldAI has models trained on erotica (Erebus and Nerybus). It can spread a model’s layers across multiple GPUs, so as long as one is satisfied with the output text, in theory it’d be possible to build a very high-powered machine (in wattage terms) with something like four RTX 4090s and get something like real-time text generation. That’d be about $8k in parallel compute cards.
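To make the multi-GPU idea concrete, here’s a minimal sketch of the layer-splitting arithmetic. The layer count and GPU count are illustrative assumptions, not benchmarks of any particular KoboldAI setup:

```python
def split_layers(num_layers: int, num_gpus: int) -> list[int]:
    """Distribute transformer layers as evenly as possible across GPUs;
    earlier GPUs absorb any remainder."""
    base, rem = divmod(num_layers, num_gpus)
    return [base + (1 if i < rem else 0) for i in range(num_gpus)]

# Hypothetical example: a 44-layer model split across four cards.
print(split_layers(44, 4))  # [11, 11, 11, 11]
```

Each GPU then only needs enough VRAM for its slice of the layers (plus activations), which is the whole point of spreading the model.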
I’m not sure how many people want to spend $8k on a locally-operated sex chatbot, though. I mean, yes privacy, and yes there are people who do spend that on sex-related paraphernalia, but that’s going to restrict the market an awful lot.
Maybe as software and hardware improve, that will change.
The most obvious way to cut the cost is to do what has been done with computing hardware for decades, back when people were billed by the minute for time on large datacenter machines: share the hardware among many users and spread the cost. Most people using a sex chatbot are only going to occupy the compute a small fraction of the time, so many users can share one machine. If each user occupies the hardware 1% of the time on average, that same hardware cost per user is now $80. I’m pretty sure a lot more people will pay $80 for use of a sex chatbot than $8,000.
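The amortization math above can be sketched in a few lines. The $8k figure and 1% utilization are the thread’s own assumptions:

```python
HARDWARE_COST = 8000   # assumed up-front cost of the GPU build
UTILIZATION = 0.01     # each user occupies the hardware 1% of the time

# If each user needs the machine 1% of the time, ~100 users can share it.
users_per_machine = int(1 / UTILIZATION)
cost_per_user = HARDWARE_COST / users_per_machine

print(users_per_machine)  # 100
print(cost_per_user)      # 80.0
```

Real utilization is bursty rather than uniform, so a real service would need headroom for peak demand, but the order-of-magnitude saving holds.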