Currently, talking to a face is the ultimate guarantee that you are communicating with a human (and on a subconscious level makes you try to relate, empathise, etc.). If humanoid robot technology eventually surpasses the Uncanny Valley, discovering that I’m talking to a humanoid with an LLM and that my intuitions had been betrayed would undermine the instinctive trust I give to the other party when I see a human face. This would degrade my social interactions across the board, because I’d live in constant suspicion that the humans I was talking to weren’t actually human.

It is for this reason I think it should be the law that humanoid robots must be clearly differentiated from humans. Or at least that people should have the right to opt out from encountering realistic-looking humanoids.

  • TootSweet@lemmy.world · 11 days ago

    This makes me think of the commandment “thou shalt not make a machine in the likeness of a human mind” from the Dune series.

    Seriously, though, I suspect a lot of technologies we currently experience only as tools for oppressing average people and widening the income gap could be put to better use. Not even necessarily because we’d have rules in place, so much as because people wouldn’t be baking their selfish asshole agendas into the tech they build.

    That all kind of assumes that humanoid robots would be “tools” for humans to “use”. If, of course, they (or at least some of them) are more like sentient creatures with hopes and dreams and emotions, that makes for a much different conversation. And that feels like the kind of conversation that’d be hard to even comment on today.

    • Andy@slrpnk.net · 11 days ago

      I’ve spent a lot of time thinking about this, because over the last year I was writing the world guide for a solarpunk setting to be used with a tabletop RPG or as a writing guide. And while I was working on this, OpenAI came along and put the Turing test out to pasture.

      Several existential crises later, the result looked remarkably like I hadn’t thought about it at all: in the game setting, there are robots and they are treated like people. Like Bender on Futurama.

      I think @TootSweet@Lemmy.world (love the username, btw!) is absolutely right that our concerns are largely shaped by the presumption that, today, everything someone builds is built to benefit the creator and manipulate the end user. If that isn’t the case, then a convincing android could just be… your neighbor Hassan.

      Most machines probably wouldn’t have a reason to pretend to be human. But if one wanted to, that’s basically transorganicism. No disrespect to OP, but if a machine is sentient, restricting it from presenting as organic seems pretty similar to restricting trans people from using the restroom that matches their presentation.

      And if they are trying to deceive you maliciously, well… I currently know everyone I meet is organic, and I already know not to trust all of them.