The author is the lady who runs the “Web3 is Going Just Great” website.

  • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml · 2 months ago

    Tech talk often revolves around the ‘what ifs’. Every new development prompts discussions about potential harm. While it’s crucial to address these concerns, trying to put the toothpaste back in the tube is unrealistic. Regulation is key to mitigating problems, but erasing technology altogether is not the solution. We can see an example of this in action in China, where games and social media are being regulated to reduce harmful effects on minors, and it works. China isn’t banning games and social media; they’re finding a middle ground that leads to sensible use of the tech.

    • loathesome dongeater@lemmygrad.ml (OP) · 2 months ago

      Though the title says AI, the author means generative AI, and specifically LLMs, when talking about it. If you read the post you will find that her take is very measured and mostly talks about how AI companies overhype LLMs and AI in general (the Anthropic CEO recently said that AI that can survive and replicate in the wild is possible in the near term, and then there’s the drivel that is https://openai.com/charter).

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml · 2 months ago

        Oh I completely agree that there’s a ton of hype around this stuff, and I expect the bubble is going to burst as more people start realizing its limitations when they try to use it. So yeah, overall I agree with most of what the article is saying.

  • tarbeez@lemmygrad.ml · 2 months ago

    I’ve been forced to reckon with generative LLMs lately. For me, it is easy and natural to think in abstract terms when it comes to programming and related things like setting up and structuring a database, but I’ve always hated doing the work. It has always been something I’ve forced myself to do in order to build something, for work or whatever. I find it repetitive and boring.

    Now I’m finding that I can use code helpers built on generative LLMs to get things done so quickly, and to do things I wouldn’t even attempt before. I’ll be honest, I’ve taken some pleasure in solving a problem more cleanly than people who are much better at coding (and who enjoy it as an intellectual challenge, etc.). I’ve been able to skip their “gatekeeping” because I can just implement the solution I want by being very specific in my instructions to the chatbot, understanding every step, but having “it” do the menial tasks of working out the internal logic and syntax. I feel like it’s given me a chance to “prove” concepts I was previously unable to set into motion due to being unwilling/unable to work out the technical details of the components.
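    To make that concrete, here is a minimal sketch of the kind of workflow I mean, assuming the official openai Python client; the model name, the prompt, and the toy schema are all illustrative placeholders, not anything from the article:

        # Minimal sketch: spell out exactly what you want and let the
        # chatbot grind out the boilerplate. Assumes `pip install openai`
        # and an OPENAI_API_KEY in the environment; gpt-4o-mini and the
        # prompt below are illustrative, not recommendations.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Be very specific: describe the exact schema you want,
        # not just "make me a database".
        prompt = (
            "Write SQLite DDL for two tables: "
            "authors(id INTEGER PRIMARY KEY, name TEXT NOT NULL) and "
            "posts(id INTEGER PRIMARY KEY, "
            "author_id INTEGER REFERENCES authors(id), "
            "title TEXT NOT NULL, body TEXT). Return only the SQL."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )

        # Review the generated SQL before running it anywhere.
        print(response.choices[0].message.content)

    The point is the division of labour: the structure and every design decision stay with me, and the model only fills in the syntax I would otherwise have to grind through by hand.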

    The linguist in me is conflicted. The formalisation of language (in combination with the massive and arguably grossly unethical data collection) that these programs are built on does not at all reflect my views on language, what it “is” (both in and out of “context”), or what a fruitful and inclusive line of inquiry for linguistics as a field would/should be. But I’ll be damned if chatbots aren’t like having some super eager, super knowledgeable, beyond devoted sort of socially stunted helper. For controlled use (knowing exactly what you are building, and how), I find it just irresistible at the moment.

    Not sure if this is me crossing to the dark side or what.

  • Kultronx@lemmygrad.ml · 2 months ago

    Worth the billions the capitalists are spending on compute power? Probably not. It’s a little useful, but only marginally.