The Illusion of Thinking & the Honesty of Apple’s AI Researchers
Some researchers at Apple recently published an interesting paper titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" (available here: https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf). The paper digs into the limits of today’s so-called “Large Reasoning Models” (LRMs), fancy versions of LLMs that write out detailed chains of thought before answering. The authors find that while LRMs “demonstrate improved performance on reasoning benchmarks,” they also hit a sharp “accuracy collapse beyond certain complexities” and even start to think less as problems get harder (p. 3). In other words, the more complicated the puzzle, the shorter and less effective the model’s reasoning becomes.

It’s an impressively candid admission from Apple’s own AI research team. There’s a famous line that says it’s “hard to get a person to believe something when their salary depends on not believing...