Why artificial intelligence often struggles with math
In the school year that ended recently, one class of learners stood out as a seeming puzzle. They are hardworking, improving and remarkably articulate. But curiously, these learners — artificially intelligent chatbots — often struggle with math.
Chatbots such as OpenAI’s ChatGPT can write poetry, summarize books and answer questions, often with human-level fluency. These systems can do math, based on what they have learned, but the results can vary and be wrong. They are fine-tuned to determine probabilities, not to perform rules-based calculations. Likelihood is not accuracy, and language is more flexible, and forgiving, than math.
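The contrast can be sketched in a few lines of code. Below, a rules-based calculation always returns the same correct answer, while a toy sampler stands in for probabilistic next-token prediction; the probabilities here are invented for illustration and do not come from any real model.

```python
import random

# Rules-based calculation: deterministic, always correct.
def calculate(a, b):
    return a + b

# Toy stand-in for probabilistic prediction (hypothetical probabilities,
# not a real language model): the answer is sampled from a distribution,
# so a likely-but-wrong output can appear.
def predict_answer(distribution):
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

print(calculate(347, 589))  # always 936

# A model might put most of its probability mass on the right answer,
# but leave some on plausible near misses.
toy_distribution = {"936": 0.80, "926": 0.12, "946": 0.08}
print(predict_answer(toy_distribution))  # usually "936", occasionally a near miss
```

The point of the sketch is the asymmetry: the calculator's output never varies, while the sampler's output is merely likely to be right, which is the gap the article describes.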