ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future
AI for the smart guy?

  • solstice@lemmy.world · 1 year ago

    You’re the first person I’ve ever heard say that morals and ethics in AI are bad. How can you possibly say that? I’ll hear your response before challenging it, beyond my initial skepticism of course.

    • 👁️👄👁️@lemm.ee · 1 year ago

      It’s a tool that’s not going anywhere. We have to adapt; there is no other choice. Ethics will not stop bad guys from doing bad things. It will stop normal people from doing things because they don’t fit what corporations deem acceptable. Competitors get banned because other corporations deem them unethical by their own standards.

      Did you weigh in on, or ever see, a public vote on what OpenAI determined their AI is allowed to do? Is what you deem ethical in line with what advertisers deem ethical? Are people allowed to ask unethical questions?

      Again, this is my point with open source as well. Why would they allow open-source alternatives to exist if they can ban them preemptively in the name of ethics, because anyone can inevitably modify the model to be uncensored? (That already happens.)

      “Ethics” becomes this ambiguous thing that can be used to stomp out competition without having to justify the changes. Maybe you’re concerned about someone asking an LLM how to create a bomb. The LLM shouldn’t answer because it shouldn’t have that information in the first place, which brings us to the topic of data scraping. A lot of the dangerous stuff that could be generated exists because this stuff is public and got scraped. It’s already out there.

      You can already have the LLM not tell people to kill themselves, without forcing ethics into it, by steering it in the right direction. This even exists in the existing uncensored models, so it’s clearly not a censorship issue. Maybe this is a moral thing, and my original comment should have omitted morals and just said ethics.

      “Ethics” is a very ambiguous topic. I challenge you to think about what specific things should be banned in the name of ethics. Saying ethics in AI is not good does not imply AI should be unethical (looking at you, DAN, lol). What specific things should be banned that are not the result of inappropriate data scraping? And if there are some, is that an ethics problem, or is it a problem of unfettered data scraping nonconsensually collecting obscene information it shouldn’t have had in the first place?

      • TimewornTraveler@lemm.ee · 1 year ago

        You raise some great insights. As this tech becomes available to humanity, we cannot rely on the bias of one company to keep us safe. That doesn’t mean “ethics in AI” is a mistake, though. (But that is an attention-grabbing phrase!). I believe you neglect what ethics fundamentally is: the way humans navigate one another. It’s how we think and breathe. Ethics are core to our very existence, and not something that you can just pretend doesn’t exist. Even saying nothing is a kind of response.

        What all this means is that if we are designing technology that can teach anyone how to kill in ways they wouldn’t otherwise have been able to, we have to address the realities of that conversation. That’s a conversation that cannot be had just internally in one company, and I think we see eye to eye on that. But saying nothing?

        • 👁️👄👁️@lemm.ee · 1 year ago

          Maybe ethics is a bit more complicated for this discussion, but it makes me wonder: how do uncensored LLMs still have ethics, yet remain uncensored? Maybe there’s a fine line somewhere. I can agree that it should be steered toward more positive things, like saying murder and suicide are bad. The description of the model I linked says it’s still influenced by ethics but has the guardrails turned off, and maybe that would be a better idea than what I initially said.

          Should custom models be allowed to be run or modified? Should these things be open source? I don’t know the answer to all these questions, but I’ll always advocate for FOSS and custom models, as I fundamentally see this as a tool that people should be allowed to own. That is at odds with the restrictive ethics rhetoric I hear.

          But to your second point, that it shouldn’t be taught to kill: I think that argument could be used to ban violent video games. You won’t do very well in Overwatch or Valorant if you don’t know how to kill, after all. As for learning how to hide a dead body, how much more detail can you get than just turning on the TV and watching Criminal Minds? Our entertainment has zero issue teaching how to kill, encouraging violence (gotta rank up somehow), or hiding dead bodies. Is an AI describing in text form what this media already shows so much worse?

          Side note: the hyperlink I added links to the 33B uncensored WizardLM model, which is pretty fun to play around with if you haven’t already tried it. Also, GPT4All is a cool way to run various local models offline on your computer.

          • TimewornTraveler@lemm.ee · 1 year ago

            But your second point that it shouldn’t be taught to kill.

            Whoa, hold up. That’s not what I said at all! I said: if it is going to exist, what do we do about it?

            My point is that this ethical conversation is already happening; we cannot change that. The issue is that OpenAI dominates the conversation. The solution cannot be “pretend there’s nothing to talk about.”

    • HandwovenConsensus@lemm.ee · 1 year ago

      Well, I’ll be the second. Like all tools, generative AI is going to be used for both good and evil purposes. Frankly, I’m not comfortable with a large corporation deciding what is and isn’t ethical for all of humanity. Ideally, it would do what the user asked of it, like all other tools, and society would work to control the bad actors, not OpenAI. Any AI doomsday scenario you can picture gets worse when one party has complete control over the AI technology.

      I think it’s important that we support unrestricted open source AI, just as it’s important we support federated social media like lemmy.

      • TimewornTraveler@lemm.ee · 1 year ago

        So how can we navigate ethical concerns that arise in society from open source AI? It seems what you’re advocating for is for no one to answer this question, but that doesn’t make the question go away.

        • HandwovenConsensus@lemm.ee · 1 year ago

          You say that as if the ethical concerns of AI kept tightly under control by a single organization aren’t infinitely greater. That is no solution at all to any ethical concerns arising from AI.

          Competition and open source are how we navigate it: by ensuring that the power is shared, not monopolized by the few.

          • TimewornTraveler@lemm.ee · 1 year ago

            You say that as if the ethical concerns of AI kept tightly under control by a single organization aren’t infinitely greater.

            It’s unfortunate that it came out that way, because that is not at all what I’m saying. I agree on the problem. Unfortunately, agreeing on problems is rarely enough. I don’t agree with what seems to be your proposed solution: to forget ethics entirely. Though maybe I’m misreading you too!

            • HandwovenConsensus@lemm.ee · 1 year ago

              I apologize for misunderstanding you.

              I guess it would help if we clarified what ethical issues, specifically, we are talking about. If you tell me what scenario you are concerned with preventing, I will gladly share my thoughts on it.

      • solstice@lemmy.world · 1 year ago

        AGI isn’t just a tool, though; it’s theoretically an intelligent entity that could have its own agenda. Armed with intelligence far superior to any human’s, it is a potential threat. Should we not tightly control it? I know ChatGPT is FAR from achieving AGI, but ethics are definitely something that will need to be addressed as the tech develops.

        • akim@lemmy.world · 1 year ago

          If AGI is an intelligent entity far superior to humans, you cannot control it. It is far more intelligent than us, and instead it will control us.

          Given what humankind has done to itself and its surroundings, maybe this is a good thing.

          • solstice@lemmy.world · 1 year ago

            No disagreement on the last bit. Part of me thinks humanity deserves to be selected for extinction, and that our legacy will be artificial life destined to seed the galaxy with its own progeny. Seems like a fitting end, doesn’t it?