The following legally played engine-assisted game took place on LechenicherSchachServer; I was using the Houdini 1.5 (32-bit) engine. I can’t be sure what engine my opponent was using, but it is clear that Fritz 12 does not evaluate this game nearly as well as Houdini did.
What makes the game interesting is that it illustrates several points I have recently posted about: opening preparation in correspondence chess, trusting GM evaluations over engines, differences between engines’ evaluations, and the danger of not letting an engine run long enough, to name a few.
My opponent played the King’s Gambit and I declined it on principle. When it became clear we were following the game between an unknown player named Pulvermacher and Capablanca, played in New York in 1907, I could not figure out why my opponent was following it, since it was a quick win for Capablanca.
It became clear when the engines found an improvement for White at move 9: my opponent was banking on that improvement to give him the advantage. Some preliminary analysis with Houdini had already alerted me to that point, so I was not surprised.
Pulvermacher played 9.Bxg4, which was a losing blunder. Fritz and Houdini both recommended the much better 9.Nxg4, and Houdini evaluated the resulting position at a one-pawn advantage for White. So how did White end up losing the game? As I pointed out in the post “Correspondence Play and Analyzing with an Engine,” engines may give a deceptive positional evaluation. I had to ask: did Capa misevaluate the position, or was his judgment correct that Black stood better after 8…O-O no matter what White replied? The answer was not immediately apparent, but I followed my own advice and trusted Capa’s judgment. Subsequent analysis proved that he was indeed correct and that, as pointed out in the previous post, you cannot rely on an engine’s positional judgment, especially when it shows a material advantage. In this case Black’s advantage did not become evident for several more moves.
It also shows the disadvantage of selecting an engine-recommended move after allowing it only a minute or so to analyze. You must let it think much longer and then play out the variations to make sure there is nothing beyond its horizon, or possibly a better move lost in the pruning process. Failure to do so results in what happened in this game: White got false evaluations.
White erred by playing Fritz’s choice after one minute of analysis, 12.Bxb5, which supposedly left him an advantage of about 1.3 pawns. Houdini, on the other hand, evaluated it at only 0.6 of a pawn.
By move 17, without either side making any “mistakes,” Houdini rated the position equal, while Fritz initially thought White had an advantage of about 0.8 pawns. After a minute or two, however, Fritz dropped its evaluation to about a third of a pawn.
By the time we reached move 18, Fritz’s evaluations were totally whacked out, while Houdini had a pretty good grasp of the real situation.
As I said, I have no way of knowing which engine White was using, but it is clear that Houdini’s evaluations are closer to the mark than those of Fritz 12. I also noticed that Fritz’s evaluations tended to jump around a lot during analysis, whereas Houdini’s remained steadier.
In any case this game makes a good case for doing opening research; for giving engines plenty of time rather than accepting their recommended move after only a few seconds’ thought; for actually playing through the engine’s recommended variation for a few moves; and finally, for trusting a GM’s positional judgment as to who stands better.
The game also shows why, no matter what one thinks of engine-assisted chess, there is more to winning than just buying (or downloading free) software and letting the engine select your moves. Titled CC players are also proven correct when they say you need to check things out with more than one engine, and that anybody who plays only engine moves in an ICCF titled event will lose.