The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months”:
You’re using “machine learning” interchangeably with “AI.” We’ve been doing ML for decades, but it’s not what most people would consider AI and it’s definitely not what I’m referring to when I say “AI winter.”
“Generative AI” is the more precise term for what most people mean when they say “AI” today, and it’s what is driving investments right now. It’s still very unclear what the actual value of this bubble is. There are tons of promises and a few clear use cases, but not much proof on the ground yet that it’s as wildly profitable as the industry claims.
No, I’m not.
Machine learning, deep learning, generative AI, object recognition, etc., are all subsets or forms of AI.
It doesn’t matter what people are “thinking of”; if someone invokes the term “AI winter,” then they had better be using the right terminology, or else get out of the conversation.
There are loads and loads of proven use cases, even for LLMs. It doesn’t matter if the average person thinks that AI refers only to things like ChatGPT; the reality is that there is no AI winter coming, and AI has been generating revenue (or helping to generate revenue) for a lot of companies for years now.