I recently posted about Sinisa Loinjak’s win of the LSS ‘World Championship’ and observed that, with a record of 51.1% wins and 48.9% draws with no losses, he clearly knows something about using chess engines that I and a lot of others don’t. One poster commented, “Maybe the best correspondence chess players still clearly know something about using chess knowledge a lot of others don't?!” and then asked, “Have you checked the games? Are the critical maneuvers and tactics really that easy to spot by strong chess engines?”

Actually, I had not checked any of his games, but the question made me decide to do just that, to see if I could spot the critical points in the game and see where Loinjak varied from engine suggestions. I deliberately chose a game against a lower-rated player, figuring any mistakes, and Loinjak’s exploitation of them, would be easier to spot. My method was first to Blundercheck the game at 10 seconds per move, with an evaluation swing of 0.30 considered a ‘blunder.’ Black's 17th move registered as a blunder, and there was one later in the game, but by then the position was already lost, so we can discount that move. The next step was to use the Hot Meter function, which is supposed to alert you to major positional changes. The engines used were Houdini 2, Stockfish 3 and Critter 1.6a 64-bit.
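For anyone curious what a blunder check with a 0.30 threshold actually amounts to, here is a minimal sketch of the idea in Python. This is not ChessBase's actual Blundercheck code; it just assumes you already have an engine evaluation (in pawns, from White's perspective) after every half-move, and flags any move whose evaluation swings against the mover by more than the threshold.

```python
# Illustrative sketch of the blunder-check idea (hypothetical, not the
# software's real implementation): evals[0] is the starting position,
# evals[i] is the evaluation after half-move i, all from White's perspective.

def find_blunders(evals, threshold=0.30):
    """Return the half-move numbers flagged as blunders.
    White moves on odd half-move numbers, Black on even ones."""
    blunders = []
    for i in range(1, len(evals)):
        delta = evals[i] - evals[i - 1]
        white_moved = (i % 2 == 1)
        # A blunder by White drops the eval; a blunder by Black raises it.
        if (white_moved and delta < -threshold) or \
           (not white_moved and delta > threshold):
            blunders.append(i)
    return blunders

# e.g. a jump from +0.15 to +0.90 after Black's reply flags half-move 4:
print(find_blunders([0.20, 0.25, 0.20, 0.15, 0.90]))  # -> [4]
```

In practice the evaluations would come from running an engine such as Stockfish for 10 seconds on each position; the threshold is the only real tuning knob.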
One important thing to remember is that I did not spend a great deal of time analyzing this game (maybe 1-3 minutes per move); the purpose was not to annotate the game as such, but to find the critical points where Loinjak varied from engine suggestions.
Even so, it is possible that, had he spent hours analyzing with a powerful computer, he would actually have been playing engine suggestions that I was unable to hit on in such a brief amount of time. That in itself is a big difference between highly rated CC players and the rest of us: I have a dual-core laptop and the opening book and database that came with the software, and I am not willing to spend several evenings analyzing every move.
I had a game some time back where the DB showed several games by 2500+ CC players, all wins, so I steered into one of those lines. But in the course of analyzing it, I (or rather the engine) discovered an improvement for my opponent that led to a significant advantage. Alas, my opponent discovered it too, and I lost. This example shows how important opening analysis is in CC. Then there are games like the one I recently posted, where I chose a line none of the engines recommended and ended up drawing. I’ve done that on more than one occasion and have probably lost more than I have won doing it!
Anyway, it seems the critical point in this game was Loinjak’s 17.g4, which none of the engines found. I let Houdini 2 work on the position for three hours while I was out trimming hedges, and Loinjak's move did not show up in the top 8 suggestions. In fact, the evaluation after 17.g4 fell off a tiny bit, but as the game progressed you could see White’s advantage gradually creeping up. I even tried the ‘human-like’ Komodo-64 3 engine, and it didn’t recommend 17.g4 either.
So, the question is: how did Loinjak decide on 17.g4? I don’t know, but if you think about it, a general P-advance on the K-side looks logical, and I suppose that if the evaluation does not change drastically for the worse, then it makes sense to play it. Maybe I'll understand it better when I get to 2500....