Chess.com · Product Design Case Study

What if Game Review actually taught you something?

Chess.com tells 200M players what went wrong. It never tells them why. I designed an AI layer that does.

Company
Chess.com
Surface
Game Review (Web)
Role
Product Design + Engineering
Tools
Claude Code, chess.js, HTML/CSS/JS
Timeline
1 day

"Rd1 is a mistake." Cool. But why?

Chess.com's Game Review classifies every move from brilliant to blunder. But for the 100M+ players rated under 1500, a label without an explanation is a dead end. You see your mistakes. You can't understand them. You make the same ones next game.

The review tells you WHAT happened. It never tells you WHY.

Chess.com's current Game Review. See it live on Chess.com →
"Rd1 is a mistake" with an eval score. That's it. No explanation. No reasoning. Just a label and a number.

15+ players saying the same thing

I searched Chess.com's forums for complaints about Game Review. The pattern was overwhelming: players at every level were asking for the same thing — explain WHY, not just WHAT.

"I forked the queen and rook with a knight and it was an 'inaccuracy'. The machine never gives any reasoning, so it is pointless without context."
fjblair View thread
"At the rating I'm on its severely lacking in any insight to help me improve."
putshort View thread
"Are we all supposed to take from these game reviews that we should try to play more like an engine rated 3400 than a human?!"
thing50 View thread

Three directions, one insight

I explored three approaches: a conversational chat panel, lightweight smart annotations, and a guided lesson mode. Each had clear tradeoffs between depth and friction.

The chat panel was powerful but overwhelming for casual players. The annotations were easy to digest but static. The guided mode taught the most but felt like a separate product.

The insight: combine them with progressive disclosure. Let the player go as deep as they want, but never demand it.

Progressive disclosure: learn at your own pace

Every mistake gets a one-liner explanation. If you're curious, expand into a conversation. Over time, the system spots your patterns. Each layer is opt-in. The casual player gets value without touching the chat. The motivated improver digs deeper.

1
AI One-Liner
Every classified move gets a plain-English explanation right in the coach bubble. "Your rook was pressuring f7. Moving to d1 abandons that attack." Zero effort, always there.
2
Conversational Follow-Up
Tap "Why?" to expand a chat panel. Ask follow-ups, get the better alternative explained, explore lines. Powered by an LLM interpreting Stockfish analysis in human terms.
3
Pattern Recognition
After the review, the system surfaces recurring themes: "You moved your rooks away from active squares 3 times this game." Links to focused lessons. Builds real improvement over time.
Interactive prototype. Use arrow keys to navigate moves, click "Why?" to expand the chat.

How this could work: LLM + Stockfish

The AI doesn't play chess. It translates chess. Stockfish provides the engine evaluation and best lines. An LLM interprets that analysis into plain-English explanations adapted to the player's rating level.

At 800 rated: "Your rook was doing something useful. Don't move pieces that are already helping."

At 1500 rated: "Rd1 releases pressure on f7 and allows Black to consolidate with ...e5, equalizing the position."

Same engine data, different explanations. The architecture is: Stockfish eval → LLM context window → rating-adapted natural language.
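That handoff could be sketched as a prompt builder: the engine facts stay constant while a rating bucket changes the style instruction the LLM receives. Everything here is an assumption for illustration — the field names (`evalBefore`, `evalAfter`, `bestLine`), the rating thresholds, and the `buildExplainPrompt` helper are hypothetical, not Chess.com's pipeline:

```javascript
// Hypothetical Stockfish → LLM handoff: same engine data in,
// a rating-adapted prompt out.
function buildExplainPrompt(moveData, playerRating) {
  // Map the player's rating to a vocabulary level (thresholds are assumed).
  const level =
    playerRating < 1000 ? "beginner" :
    playerRating < 1800 ? "intermediate" : "advanced";

  const styleGuide = {
    beginner: "Use simple words. One concrete idea. No notation beyond the move itself.",
    intermediate: "Name the concrete squares and threats. Keep it to one short line.",
    advanced: "Reference the critical line and positional factors directly.",
  };

  // The LLM only sees engine facts plus a style instruction — it never
  // evaluates the position itself.
  return [
    `Move played: ${moveData.move} (eval ${moveData.evalBefore} → ${moveData.evalAfter})`,
    `Engine's preferred line: ${moveData.bestLine.join(" ")}`,
    `Explain why the played move is worse, for a ${level} player.`,
    styleGuide[level],
  ].join("\n");
}

const prompt = buildExplainPrompt(
  { move: "Rd1", evalBefore: "+1.2", evalAfter: "+0.1", bestLine: ["Rxf7", "Kxf7", "Qh5+"] },
  800
);
```

For the 800-rated player above, the prompt asks for the beginner explanation; swap in a 1500 rating and only the style instruction changes, which is what keeps the two example explanations consistent with the same engine data.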