If you've ever played a game on Chess.com, finished it, and then rushed to the analysis board — you know the feeling. You're scrolling through your moves, and suddenly a little cyan diamond appears. Brilliant move. Your heart does a small thing. You screenshot it. You maybe send it to a friend.
And then, if you're anything like me, you start to wonder: did I actually play a brilliant move? Or did the computer just label something "brilliant" because it fit a pattern it was trained to reward?
This question has been bothering club-level chess players for a while now. The "Brilliant Move" feature — popularized by Chess.com's analysis engine — is supposed to celebrate moments of genuine chess genius. But there's growing frustration among intermediate players that the algorithm doesn't quite understand how club chess is actually played. And the more you look at it, the more you realize it gets some pretty fundamental things wrong.
First, What Even Counts as a Brilliant Move?
Chess.com introduced the brilliant move classification as part of its game review system. The idea is that a "brilliant" move isn't just the best move — it's a move that's hard to find, often involves a sacrifice or an unexpected idea, and requires real calculation or intuition to spot.
The engine looks for moves where:
- The move is the best (or one of the best) available
- Alternative moves are significantly worse
- The move involves a non-obvious concept (like a piece sacrifice, a quiet move in a tactical position, or a zwischenzug)
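Those three criteria can be sketched as a simple heuristic. To be clear, this is an illustrative guess at how such a classifier might work, not Chess.com's actual implementation; the thresholds, field names, and the use of "sacrifice" as a stand-in for "non-obvious" are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class MoveAnalysis:
    eval_after: float        # engine eval after the move (pawns, mover's perspective)
    best_alternative: float  # eval of the next-best move in the position
    is_sacrifice: bool       # gives up material without immediate recapture

def is_brilliant(move: MoveAnalysis,
                 best_eval: float,
                 margin: float = 0.2,
                 gap: float = 1.0) -> bool:
    """Flag a move as 'brilliant' if it is (near-)best, clearly better
    than the alternatives, and involves a non-obvious idea."""
    near_best = move.eval_after >= best_eval - margin
    alternatives_worse = move.eval_after - move.best_alternative >= gap
    non_obvious = move.is_sacrifice  # crude proxy for "hard to find"
    return near_best and alternatives_worse and non_obvious
```

Notice where the bias creeps in: "non-obvious" collapses to "is it a sacrifice?", which is exactly the problem the rest of this article explores.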
In theory, that sounds great. In practice, for club players rated somewhere between 800 and 1800, the system creates some strange results — and sometimes misses the point entirely.
The Algorithm Thinks Sacrifices Are Always Clever
Here's one of the most common complaints you'll hear from club players: the brilliant move badge gets handed out almost any time you sacrifice a piece, even if you didn't fully calculate it, even if it was kind of obvious, and even if your opponent was going to blunder it back anyway.
Say you're in a position where you sacrifice a bishop for two pawns and an open file. Your engine evaluates the position afterward as roughly equal or slightly better for you. The algorithm flags it as brilliant.
But here's the thing — at club level, you might have played that sacrifice because it looked dangerous, not because you calculated five moves ahead that it was objectively sound. You were going on feel. Your opponent might have also been in time pressure, or they simply aren't strong enough to refute it.
A brilliant move in grandmaster chess is brilliant because it holds up against the best possible defense. A "brilliant" move at the 1200 level often only works because both players aren't playing optimally. The algorithm can't really tell the difference, because it evaluates positions in isolation — not relative to the players involved.
It Doesn't Understand the Difference Between Courage and Calculation
There's a specific type of club player move that gets brilliant badges all the time, and it drives experienced coaches a little bit crazy. It's the speculative attack.
You launch your kingside pawns forward. You sacrifice a knight. Your opponent panics, plays poorly, and you win. The computer reviews the game, sees the knight sacrifice was "the best move" at that moment, and awards the cyan diamond.
But any coach watching the game live would tell you: that wasn't brilliant. That was brave, or maybe reckless, and it worked because your opponent didn't know how to handle it. The computer is rating your move in a vacuum. It doesn't know you were low on time. It doesn't know your opponent was nervous. It doesn't know you were just following a pattern you'd seen before without deeply calculating it.
Brilliance in chess has always implied something about the process, not just the result. Magnus Carlsen playing a quiet rook move that wins because he saw a forced sequence 12 moves deep — that's brilliant. You pushing your g-pawn because it felt aggressive and it happened to be the engine's top choice — that's different.
The algorithm can't measure intent or process. It can only measure outcome. And that's a real limitation.
Quiet Moves Never Get the Credit They Deserve
Here's the flip side: some of the genuinely hardest moves to find at club level almost never get flagged as brilliant.
In chess, there's a special category of move called the "quiet move." It doesn't capture anything. It doesn't give check. It just... repositions a piece. And sometimes, these are the hardest moves in chess to find, because nothing in the position is screaming at you to play them.
Imagine you're in a complex middlegame. There's chaos everywhere. And the best move is to simply slide your rook back one square — not to attack, not to defend, just to prepare something subtle three moves later. That kind of move requires genuine positional understanding. It requires you to look at the board differently than 95% of players at your level would.
Does it get a brilliant badge? Usually not, because the engine doesn't see the rook move as particularly "risky" or "sacrificial." The position evaluation doesn't swing dramatically. There's no moment of tension resolved. It just quietly improves your position.
This is one of the deepest disconnects between the algorithm and real chess understanding. The hardest moves aren't always the flashiest ones. And brilliant move detection, as it's currently designed, heavily biases toward drama over depth.
The Rating Relativity Problem
Let's talk about something that doesn't get discussed enough: rating relativity.
A move that's genuinely brilliant for a 900-rated player might be completely obvious for a 1700-rated player. And a move that's merely "good" for a 2400-rated master might be completely incomprehensible for a club player.
The brilliant move algorithm doesn't really account for this. It runs on an absolute scale — the engine says "this move was hard to find and was the best option, so it's brilliant." But "hard to find" is always relative to something. Hard to find for whom?
If you're a 1400-rated player and you play a correct king walk in an endgame — something that maybe 20% of players at your level would find — that might be the hardest, most impressive thing you've done all year. The engine might label it as just "good" or not even comment on it, because it's not a sacrifice or a dramatic shift in evaluation.
Meanwhile, another 1400-rated player throws a piece at the king, the opponent misses the refutation, and the attacker gets a brilliant badge for a move that was objectively dubious but happened to work.
What About the "Missed Brilliant Moves" Problem?
Here's something that comes up constantly in chess forums: players who find engine-top moves that aren't flagged as brilliant.
You play a perfectly timed pawn break. The engine agrees it's the best move in the position. You've been building toward it for 15 moves. Your opponent had no answer. But the evaluation only shifts by half a pawn, so... no badge.
Compare that to a chaotic position where you stumble into a piece sacrifice that happens to be the top engine choice — evaluation swings by 2 pawns — and suddenly it's brilliant.
The algorithm seems to weight evaluation swings heavily in deciding what counts as brilliant. But evaluation swings are a proxy for importance, not for difficulty or creativity. A game can be decided just as much by a series of quiet, correct moves as by a single dramatic sacrifice. The algorithm has a hard time seeing that.
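A toy example makes the swing bias concrete. The threshold below is an arbitrary assumption chosen for illustration, but the shape of the problem is the same at any cutoff: a swing-only test can't see the fifteen moves of preparation behind a quiet break.

```python
def swing_flags_brilliant(eval_before: float, eval_after: float,
                          threshold: float = 1.5) -> bool:
    """A swing-only heuristic: 'brilliant' means the evaluation jumped."""
    return (eval_after - eval_before) >= threshold

# A long-prepared pawn break that nets half a pawn: no badge.
quiet_break = swing_flags_brilliant(eval_before=0.3, eval_after=0.8)
# A chaotic sacrifice that swings the eval by two pawns: badge.
sacrifice = swing_flags_brilliant(eval_before=0.0, eval_after=2.0)
```

Both moves were the engine's top choice; only the dramatic one clears the bar.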
The Psychological Effect on Club Players
This isn't just a theoretical complaint. There's a real psychological effect happening.
When players get brilliant move badges for speculative sacrifices that happened to work, they learn — consciously or not — that this is how chess should be played. They start throwing pieces. They start attacking early. They stop respecting defense. They chase the feeling of the cyan diamond.
Coaches at the club level have noticed this. Players who spend a lot of time reviewing their games through automated analysis systems start to develop a skewed sense of what good chess looks like. They begin to undervalue solid play. They get frustrated when careful, correct moves don't get celebrated.
And on the flip side, when a player makes what is genuinely a difficult and important decision — say, accepting a slightly worse position to simplify into a winning endgame — and the algorithm ignores it, they can feel deflated. Like they did something boring. When actually, they did something hard.
So What Would a Better System Look Like?
This is where it gets interesting. A few ideas have been floating around in the chess community:
- Rating-adjusted brilliance. The system could factor in your rating and your opponent's rating when deciding what counts as brilliant. A move that only 15% of players at your level would find could be flagged differently than a move that any 2000+ player would see immediately.
- Recognizing positional brilliance. Give credit to quiet moves, strategic regroupings, and long-term pawn-structure decisions when the engine identifies them as key turning points in the game — even if the evaluation shift is small.
- Process over outcome. This is harder to implement, but there's potential in tracking whether a move was genuinely hard to find based on the complexity of the position, not just whether it involved a sacrifice.
- Better labeling. Instead of just "Brilliant," have categories like "Precise," "Brave," or "Instructive" — each celebrating a different kind of good chess thinking.
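To show how the first and last ideas could combine, here's a rough sketch of a rating-adjusted, multi-label classifier. Everything here is hypothetical: the labels, the thresholds, and especially the `find_rate` field, which assumes you could estimate what fraction of same-rated players would find a given move.

```python
from dataclasses import dataclass

@dataclass
class MoveContext:
    is_best: bool        # matches the engine's top choice
    is_sacrifice: bool   # gives up material
    eval_swing: float    # change in evaluation (pawns)
    find_rate: float     # est. fraction of same-rated players who find it

def label_move(ctx: MoveContext) -> str:
    """Label a move by the kind of good chess it represents,
    relative to the player's own rating band."""
    if not ctx.is_best:
        return "Good" if ctx.eval_swing > -0.5 else "Inaccuracy"
    if ctx.find_rate < 0.2:
        return "Brilliant"   # rare at this rating: rating-adjusted brilliance
    if ctx.is_sacrifice:
        return "Brave"       # a sac most peers would also try
    if abs(ctx.eval_swing) < 0.3:
        return "Precise"     # quiet move that keeps the thread
    return "Excellent"
```

The hard part, of course, is `find_rate` — estimating it would take real data on what players at each rating actually play in similar positions. But the structure shows how "brilliant" could mean "hard for you," not "dramatic in general."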
The Bigger Picture
None of this is meant to say the brilliant move feature is useless. It genuinely does find a lot of cool moments in games. When you're reviewing a game and you see a brilliant badge, it's often worth stopping and understanding why the engine liked that move so much. That's valuable.
But the algorithm was built primarily with pattern-matching in mind — and chess, especially at the club level, is much more about understanding than pattern. The engine sees what happened. It doesn't know what you were thinking, what you were afraid of, or what you barely managed to find in time pressure.
The result is a system that rewards dramatic but sometimes unsound chess, ignores difficult positional play, and doesn't scale its judgments to the level of the players involved.
For club players who want to actually improve, the lesson is this: don't let the badge be the goal. A game full of "good" and "excellent" moves with no blunders is often far more impressive — and far harder — than a game with one brilliant badge surrounded by inaccuracies.
Chess engines are extraordinary tools. But they measure the quality of moves, not the quality of chess players. That gap is where the algorithm gets it wrong — and where human understanding still has something the computer doesn't.
If you found this useful, consider sharing it with a chess friend who's chasing brilliant badges instead of improving their endgame. They might not thank you immediately, but their rating will.