And how can autosuggest / autocorrect be so bad with so much training data?
Have you ever seen how an average person types? It's not the amount of data that's the problem. We have too much dumb data!
The real answer is compute power. Running the computations behind big LLMs is still very expensive; I've heard some companies are even developing specialized chips to run them more efficiently. On the other hand, you probably don't want your phone's keyboard app hammering its tiny CPU and draining your battery. It isn't worth throwing anything more than a simple model at the problem.
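For a sense of scale, the kind of "simple model" a keyboard might get away with is something like a word-frequency table, e.g. a bigram counter. This is just an illustrative sketch (the function names and toy corpus are mine, not from any actual keyboard app), but it shows why suggestion can be nearly free at prediction time: it's a dictionary lookup, not a forward pass through billions of parameters.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, prev_word, k=3):
    """Return the k most common words seen after prev_word."""
    return [w for w, _ in model[prev_word.lower()].most_common(k)]

# Toy corpus just to demo the mechanics.
corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(suggest(model, "the"))  # 'cat' ranks first (seen twice after 'the')
```

Suggesting the next word here costs one hash lookup and a small sort, which any phone CPU handles trivially; the gap between this and running an LLM per keystroke is the whole point.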