In a notable shift toward sanctioned use of AI in schools, some educators in grades 3–12 are now using a ChatGPT-powered grading tool called Writable, reports Axios. The tool, acquired last summer by Houghton Mifflin Harcourt, is designed to streamline the grading process, potentially offering time-saving benefits for teachers. But is it a good idea to outsource critical feedback to a machine?

Writable lets teachers submit student essays for analysis by ChatGPT, which then provides commentary and observations on the work. The AI-generated feedback goes to the teacher for review before being passed on to students, so a human remains in the loop.

“Make feedback more actionable with AI suggestions delivered to teachers as the writing happens,” Writable promises on its AI website. “Target specific areas for improvement with powerful, rubric-aligned comments, and save grading time with AI-generated draft scores.” The service also provides AI-written writing prompt suggestions: “Input any topic and instantly receive unique prompts that engage students and are tailored to your classroom needs.”

  • Mandarbmax@lemmy.world · 9 months ago

    You have it backwards. It isn’t that we operate like LLMs, it is that LLMs are attempts to emulate us.

    • jadero@programming.dev · 9 months ago

      That is actually my point. I may not have made it clear in this thread, but my claim is not that our brains behave like LLMs, but that they are LLMs.

      That is, our LLM research is not just emulating our mental processes, but showing us how they actually work.

      Most people think there is something magic in our thinking, that mind is separate from brain, that thinking is, in effect, supernatural. I’m making the claim that LLMs are actual demonstrations that thinking is nothing more than the statistical rearrangement of that which has been ingested through our senses, our interactions with the world, and our experience of what has and has not worked.

      Searle proposed a thought experiment called the “Chinese Room” in an attempt to discredit the idea that a machine could either think or understand. My contention is that our brains, being machines, are in fact just suitably sophisticated “Chinese Rooms”.