Chess.com tells 200M players what went wrong. It never tells them why. I designed an AI layer that does.
Chess.com's Game Review classifies every move from brilliant to blunder. But for the 100M+ players rated under 1500, a label without an explanation is a dead end. You see your mistakes. You can't understand them. You make the same ones next game.
The review tells you WHAT happened. It never tells you WHY.
I searched Chess.com's forums for complaints about Game Review. The pattern was overwhelming: players at every level were asking for the same thing. Explain WHY, not just WHAT.
I explored three approaches: a conversational chat panel, lightweight smart annotations, and a guided lesson mode. Each had clear tradeoffs between depth and friction.
The chat panel was powerful but overwhelming for casual players. The annotations were easy to digest but static. The guided mode taught the most but felt like a separate product.
The insight: combine them with progressive disclosure. Let the player go as deep as they want, but never demand it.
Every mistake gets a one-liner explanation. If you're curious, expand into a conversation. Over time, the system spots your patterns. Each layer is opt-in. The casual player gets value without touching the chat. The motivated improver digs deeper.
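The pattern-spotting layer can be sketched as a simple frequency counter over tagged mistakes. This is a hypothetical illustration, assuming an upstream classifier that tags each mistake with a motif (names like `moved_defended_piece` are mine, not Chess.com's):

```python
from collections import Counter

class PatternTracker:
    """Accumulates mistake motifs across games and surfaces repeat offenders."""

    def __init__(self, min_occurrences: int = 3):
        self.min_occurrences = min_occurrences
        self.motifs = Counter()

    def record_game(self, mistake_motifs: list[str]) -> None:
        # Each game contributes the motifs of its classified mistakes.
        self.motifs.update(mistake_motifs)

    def recurring_patterns(self) -> list[str]:
        # Only surface motifs seen often enough to be a habit, not a one-off.
        return [m for m, n in self.motifs.items() if n >= self.min_occurrences]

tracker = PatternTracker()
tracker.record_game(["moved_defended_piece", "missed_fork"])
tracker.record_game(["moved_defended_piece"])
tracker.record_game(["moved_defended_piece", "premature_queen_sortie"])
print(tracker.recurring_patterns())  # → ['moved_defended_piece']
```

The threshold is what keeps the layer opt-in and quiet: the system only speaks up once a mistake looks like a habit rather than a one-off.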
The AI doesn't play chess. It translates chess. Stockfish provides the engine evaluation and best lines. An LLM turns that analysis into plain-English explanations adapted to the player's rating.
At 800 rated: "Your rook was doing something useful. Don't move pieces that are already helping."
At 1500 rated: "Rd1 releases pressure on f7 and allows Black to consolidate with ...e5, equalizing the position."
Same engine data, different explanations. The architecture is: Stockfish eval → LLM context window → rating-adapted natural language.
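The hand-off from engine to LLM can be sketched as prompt construction. A minimal sketch, assuming engine output already parsed into an eval swing and a best line (the shape python-chess returns from Stockfish); the rating bands and prompt wording are my assumptions, not Chess.com's:

```python
# Style instructions keyed by rating band: same engine data in,
# different register out. Bands and wording are illustrative.
RATING_STYLES = {
    800: "Use simple words and one concrete idea. No notation beyond move names.",
    1500: "Use standard chess terms (pressure, consolidate, equalize) and notation.",
}

def build_prompt(move: str, eval_swing: float, best_line: list[str], rating: int) -> str:
    """Turn a Stockfish evaluation into a rating-adapted LLM prompt."""
    # Pick the closest band at or below the player's rating.
    band = max((r for r in RATING_STYLES if r <= rating), default=min(RATING_STYLES))
    return (
        f"A player rated {rating} played {move}, losing {eval_swing:.1f} pawns "
        f"of evaluation. The engine preferred the line {' '.join(best_line)}. "
        f"Explain WHY the move was a mistake. Style: {RATING_STYLES[band]}"
    )

# The same engine data produces an 800-style or 1500-style prompt
# depending only on the rating passed in.
print(build_prompt("Rd1", 1.8, ["Rxf7", "Kg8", "Rf3"], rating=820))
print(build_prompt("Rd1", 1.8, ["Rxf7", "Kg8", "Rf3"], rating=1520))
```

The LLM never sees the board directly; it sees a structured summary of what Stockfish found, plus instructions about how to say it. That keeps the chess truth anchored to the engine and leaves only the translation to the model.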