Brin’s “We definitely messed up”, delivered at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.
Generating images of black women when asked for a white man isn’t just silly; it exposes how the outputs of these systems can be covertly steered in specific directions. We just got lucky that they fucked it up badly enough for people to prove it.
You might be fine with an AI asserting, at all times and in every case, that abortion is acceptable. But if you are, I’m sure you’d be quite upset at a competing AI being set up and covertly managed in the same way to assert that abortion is always murder, and to manipulate its users into believing so – up to and including inserting “abortion is murder” protesters into any images requested, or splashing the subject in blood, or whatever.
That’s just one highly charged political point. All of them are on the table with this, especially as people start turning to AI rather than a regular search for basic information.
People only think it’s silly, and don’t care, because the injected bias demonstrated here seems well-intentioned this time.
AIs being directed this way – and specifically covertly, in ways we can’t audit or detect – is going to be a massive problem going forward.
(Edit: and that’s not even counting some really nefarious shit, like repeatedly training a model on the Bible and other religious texts, then directing it never to explicitly mention God or religion but to ensure all its outputs fall in line with religious indoctrination.)