In the whirlwind of technological advancements, artificial intelligence (AI) often becomes the scapegoat for broader societal issues. It’s an easy target, a non-human entity that we can blame for job displacement, privacy concerns, and even ethical dilemmas. However, this perspective is not only simplistic but also misdirected.

The crux of the matter isn’t AI itself, but the economic system under which it operates: capitalism. It’s capitalism that dictates the motives behind AI development and deployment. Under this system, AI is primarily used to maximize profits, often at the expense of the workforce and ethical considerations. This profit-driven motive can lead to job losses as companies seek to cut costs, and it can prioritize corporate interests over privacy and fairness.

So, why should we shift our anger from AI to capitalism? Because AI, as a tool, has immense potential to improve lives, solve complex problems, and create new opportunities. It’s the framework of capitalism, with its inherent drive for profit over people, that often warps these potentials into societal challenges.

By focusing our frustrations on capitalism, we advocate for a change in the system that governs AI’s application. We open up a dialogue about how we can harness AI ethically and equitably, ensuring that its benefits are widely distributed rather than concentrated in the hands of a few. We can push for regulations that protect workers, maintain privacy, and ensure AI is used for the public good.

In conclusion, AI is not the enemy; unchecked capitalism is. It’s time we recognize that our anger should not be at the technology that could pave the way for a better future, but at the economic system that shapes how this technology is used.

  • kometes@lemmy.world
    4 months ago

    Maybe work on proving “AI” is actually a technological advancement instead of an overhyped plagiarism machine first.

    • Lmaydev@programming.dev
      4 months ago

      LLMs’ real power isn’t generating fresh content; it’s their ability to understand language.

      Using one to summarise articles gives incredibly good results.

      I use Bing Enterprise every day at work as a programmer. It makes information gathering and learning so much easier.

      It’s decent at writing code but that’s not the main selling point in my opinion.

      Plus, these are general models meant to show off the capabilities. Once the tech is more advanced, you can train models for specific purposes.

      It seems obvious that an AI built to do both creative writing and coding wouldn’t be as good at either as a specialised model.

      These are generation 0. There’ll be a lot of advances coming.

      Also LLMs are a very specific type of machine learning and any advances will help the rest of the field. AI is already widely used in many fields.

      • throwwyacc@lemmynsfw.com
        4 months ago

        LLMs don’t “understand” anything. They’re just very good at making it look like they sort of do

        They also tend to have difficulty giving the answer “I don’t know” and will confidently assert something completely incorrect

        And this is not generation 0. The field of AI has been around for a long time; it’s just now becoming widespread and used where the average person can see it.

        • A_Very_Big_Fan@lemmy.world
          4 months ago

          LLMs don’t “understand” anything. They’re just very good at making it look like they sort of do

          If they’re very good at it, then is there functionally any difference? I think the definition of “understand” that people use when railing against AI must include some special pleading that gates off anything that isn’t actually intelligent. When it comes to artificial intelligence, all I care about is whether it can accurately fulfill a prompt or answer a question; in the cases where it does, I don’t see why I shouldn’t say that it seems to have “understood” the question or prompt.

          They also tend to have difficulty giving the answer “I don’t know” and will confidently assert something completely incorrect

          I agree that they should be more capable of saying “I don’t know,” but if you understand the limits of LLMs, they’re still really useful. I can ask one to explain math concepts in simple terms, and it makes it a lot easier and faster to learn whatever I want. I can easily verify what it said either with a calculator or with other sources, and it’s never failed me on that front. Or if I’m curious about a religion, or what any particular holy text says or doesn’t say, it does a remarkable job giving me relevant results and details that are easily verifiable.

          But I’m not going to ask GPT-3.5 to play chess with me, because I know it’s going to give me blatantly incoherent and illegal moves. While it does understand chess notation, it doesn’t understand how to keep track of the pieces the way GPT-4 does.

          • throwwyacc@lemmynsfw.com
            4 months ago

            If you can easily validate any of the answers, and you have to in order to know whether they’re actually correct, wouldn’t it make more sense to skip the prompt and just do the same work you’d do to validate?

            I think LLMs have a place, but I don’t think it’s as broad as people seem to think. They make a lot of sense for boilerplate, for example, since that just saves mindless typing. But you still need enough knowledge to validate the output.

            • A_Very_Big_Fan@lemmy.world
              4 months ago

              If I’m doing something like coding or trying to figure out the math behind some code I want to write, it’s a lot easier to just test what it gave me than it is to go see if anyone on the internet claims it’ll do what I think it does.

              And when it comes to finding stuff in texts, a lot of the time that involves going to the source for context anyway, so it’s hard not to validate what it gave me. And even if it was wrong, the stakes for being wrong about a book are zero. It’s not like I’m out here using it to make college presentations, or asking it for medical advice.

    • kromem@lemmy.world
      4 months ago

      Furthermore, simple probability calculations indicate that GPT-4’s reasonable performance on k=5 is suggestive of going beyond “stochastic parrot” behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.

      Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state.

      So there is already research showing that GPT LLMs are capable of modeling aspects of their training data at much deeper levels of abstraction than surface statistics of words, and research showing that the most advanced models already generate novel outputs distinct from anything in the training data, by virtue of the number of different abstract concepts they combine from what was learned during training.
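      The “simple probability calculations” the first quote alludes to are combinatorial: if a model draws on some pool of distinct skills, the number of possible k-skill combinations explodes, so most 5-skill mixes cannot each have appeared in training. A rough sketch of that arithmetic (the skill-pool size of 1,000 is an assumption for illustration, not a figure from the quoted paper):

      ```python
      from math import comb

      n_skills = 1_000  # assumed size of the skill pool (illustrative only)
      k = 5             # skills combined per task, matching the quoted k=5 setting

      # Number of distinct 5-skill combinations drawn from the pool.
      combos = comb(n_skills, k)
      # comb(1000, 5) = 8,250,291,250,200 -- trillions of possible skill mixes,
      # far more than could each appear verbatim in any training corpus.
      ```

      The exact count depends entirely on the assumed pool size, but the growth rate is the point: correct outputs on unseen combinations suggest recombination rather than recall.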

      Like, have you actually read any of the ongoing research in the field at all? Or just articles written by embittered people who generally misunderstand the technology? (For example, if you ever see someone refer to LLMs as Markov chains, that person has no idea what they’re talking about: the defining feature of the transformer is the self-attention mechanism, which conditions on the entire context and thus negates the Markov property that characterizes Markov chains in the first place.)
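      The Markov-chain point can be made concrete with a toy model. A classic bigram “Markov chain” text generator conditions on only the single previous token, whereas self-attention reads the entire preceding context. A minimal sketch of the former, purely for illustration:

      ```python
      import random
      from collections import defaultdict

      def train_bigram(tokens):
          """Bigram Markov chain: the next-token choice depends ONLY on the
          single previous token -- that is the Markov property."""
          table = defaultdict(list)
          for prev, nxt in zip(tokens, tokens[1:]):
              table[prev].append(nxt)
          return table

      def generate(table, start, length, seed=0):
          rng = random.Random(seed)
          out = [start]
          for _ in range(length - 1):
              choices = table.get(out[-1])
              if not choices:
                  break
              # Everything earlier than out[-1] is invisible to this model --
              # unlike self-attention, which attends over the whole context.
              out.append(rng.choice(choices))
          return out
      ```

      A transformer cannot be described by such a table precisely because its next-token distribution depends on all previous tokens, not just the last one.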

      • assassin_aragorn@lemmy.world
        4 months ago

        This is like asking someone to prove God doesn’t exist. The burden of proof is on you to show how humans are effectively overhyped plagiarists. You’re the one making the claim.

        • agamemnonymous@sh.itjust.works
          4 months ago

          Maybe work on proving “AI” is actually a technological advancement instead of an overhyped plagiarism machine first.

          This statement makes the implicit claim that “AI” is actually an overhyped plagiarism machine rather than a technological advancement; the burden of proof is on them to show this. It also implicitly claims that “AI” is not in fact intelligence, and that real intelligence is not an overhyped plagiarism machine; the burden of proof lies with them for these claims as well. My question was merely to highlight this existing burden.

    • A_Very_Big_Fan@lemmy.world
      4 months ago

      instead of an overhyped plagiarism machine first.

      If I paint an Eiffel Tower from memory, am I plagiarizing?

      If it’s not plagiarism when humans do it, it’s not plagiarism when a machine does it.

      • assassin_aragorn@lemmy.world
        4 months ago

        Of course it is. A machine is not a human.

        If you want to make this argument, then AI companies should be required to treat their AI models like employees. Paid for 40 hours a week of work, extra for overtime.

        If it’s human to have “memory” that isn’t subject to plagiarism, then it’s human enough to be paid hourly.

        • A_Very_Big_Fan@lemmy.world
          4 months ago

          I don’t need to be paid to make a painting, I’ll just do it for fun or because a good friend wanted it.

          Why does a machine doing something that I do for fun constitute plagiarism?