Wednesday, November 28, 2018

ShashChess

     Earlier this year in Stockholm some of the best engines vied for the world computer championship. One thing that struck me about this tournament was that it was not conducted by simply firing up the engines and letting them battle it out on their own.
     In this tournament the engines had operators who transferred the engine's selected move to an actual board and punched a clock. This affected the results. In one game Komodo reached a theoretical draw (both engines were using tablebases) and its opponent's operator offered a draw which the Komodo operator refused. His refusal had nothing to do with the position...it was because the opposing operator had been too slow in playing the moves on the board and punching the clock. As a result Komodo played on and won on time.
     In another game, instead of agreeing to a draw in a blocked position, the operators continued to let the engines shift pieces before finally agreeing to a draw after more than 160 moves. 
     In this engine world championship tournament humans still have some input that can have an effect on the final results so I don't put much stock in the outcome.  Also, where were Stockfish, Houdini, Fire and Deep Shredder? 
     In the end Komodo and GridGinkgo were tied with Komodo winning the playoffs. 
1-2) GridGinkgo and Komodo 5.0 
3) Jonny 4.0 
4-5) Chiron and Booot 3.5 
6) Shredder 3.5 
7) Leela Chess Zero 2.0 
8) The Baron 1.5

     Komodo won the event by playing much like human GMs: in the opening it avoided theoretical main lines and played for strategically unbalanced situations. Commentator GM Harry Schussler made the observation that engines sometimes make ugly moves because they are finding exceptions to the general rules that we humans are familiar with. 
     In correspondence chess one of the top Centaurs in the U.S. is Wolff Morrow (aka FirebrandX). The interesting thing is that he has been using the same computer to do his analysis for the last ten years which proves that while it helps, success in modern correspondence play isn't always about computing power. 
    Morrow claims that if you just buy a big powerful computer and play only engine moves you could probably get a decent ICCF rating, but it won't get you anywhere near the top level. That's true. 
     Several years ago when I started playing chess via email on IECG it was at my CCLA rating which was somewhere in the 2050-2100 range. I was unaware that engine use was allowed on IECG and so in my first tournament scored +0 -4 =2 and lost about 100 rating points! Later, when I joined Lechenicher SchachServer, the replacement for IECG, it was at the IECG rating and in the years since then my rating hasn't varied much. That's because everybody uses pretty much the same engines, so getting a plus score and gaining rating points is difficult unless you want to devote serious time to it. And, as Morrow pointed out, it's a big temptation to get lazy if you play too many games at the same time. 
     Morrow advises that you need to work with the engine by suggesting plans, play those positions out and evaluate the results then go back and look at alternatives and even have the engine consider moves it has rejected. His method is to write down candidate moves by type...Pawn moves, Knight moves, etc. Then after they have all been evaluated, decide on the best move which may not have the engine's highest evaluation. 
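The bookkeeping part of that method, sorting candidate moves by the piece that makes them, is easy to picture as a bit of code. Here is a minimal sketch in Python; the classification rule (reading the first letter of standard algebraic notation) and the sample candidate list are my own illustration, not anything Morrow published:

```python
from collections import defaultdict

def move_type(san: str) -> str:
    """Classify a move in standard algebraic notation by the piece that moves."""
    first = san[0]
    if first in "NBRQK":
        return {"N": "knight", "B": "bishop", "R": "rook",
                "Q": "queen", "K": "king"}[first]
    if first == "O":      # castling: O-O or O-O-O
        return "king"
    return "pawn"         # pawn moves start with a file letter, e.g. e5, dxc4

def group_candidates(moves):
    """Group a candidate list the way Morrow suggests: by move type."""
    groups = defaultdict(list)
    for san in moves:
        groups[move_type(san)].append(san)
    return dict(groups)

# A hypothetical candidate list from some middlegame position:
candidates = ["e5", "d5", "Nf3", "Nc3", "Bb5", "Qe2", "O-O"]
print(group_candidates(candidates))
```

Each group would then be played out against the engine and the results compared by hand, which is where the tedious part comes in.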
     That sounds tedious, but simple enough. The question is, unless you are a pretty strong player, how do you know which move is best without relying on the engine evaluation? I have no idea! 
     He also advised against playing the same openings all the time because if you do, opponents will be lying in wait with opening surprises prepared. And, these top level correspondence players spend a ga-zillion hours doing opening research. Of course, he's talking about top level correspondence play. I seriously doubt my opponents in rapid play events on Lechenicher SchachServer even bother looking at my past games. 
     In these rapid games (10 Basic plus 1 day per move, no vacations) you often play a game in a few months and some players have 20-50 games going, so they can't spend much time on each individual game. That means you can get away with playing opening lines you normally couldn't play. I have even played 1.f3 on occasion in these rapid events, but that has proven to be a bit too extravagant. However, openings like the Urusov Gambit and even 1.a3 have not given bad results. 
     In the last one of these rapids I entered, the plan was to test the SugarX Pro engine. On the CCRL 40/40 rating list its rating is nearly identical to Stockfish 9, and in individual games it's minus 1 against Stockfish and plus 5 against Komodo 12. I didn't like the positions I was getting with Sugar and switched to Stockfish, but it may have been too late. 
     Another interesting engine is ShashChess, one of many engines based on Stockfish. It is an attempt to apply Alexander Shashin's theory from his book Best Play: A New Method For Discovering The Strongest Move, which came out in 2013. 
     Depending on the position, the engine has algorithms based on the play of Tal, Capablanca and Petrosian as well as "mixed" ones. In other words, it plays differently, based on the type of position it is analyzing. 
     The book by Shashin, physicist and master, is based on his 30 years worth of research on the elements of the game. He breaks down the position into mathematical ratios that compare the fundamental factors of material, mobility, safety and space which is supposed to reveal the proper plan and the mental attitude to adopt in light of what’s happening on the board. 
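To make the idea concrete, here is a toy illustration of that ratio-comparison scheme. The formulas and thresholds below are my simplification for the sake of example, not the ones from Shashin's book: compare White's and Black's totals for each factor, average the ratios, and let the result suggest which "algorithm" fits the position (Tal-like attack when clearly ahead, Petrosian-like defense when behind, Capablanca-like maneuvering near balance):

```python
def ratio(white: float, black: float) -> float:
    """Compare one factor (material, mobility, safety...) between the sides."""
    return white / black

def suggested_style(material_r: float, mobility_r: float, safety_r: float) -> str:
    """Average the factor ratios and map them to a playing style.
    The 1.1 / 0.9 cutoffs are invented for this sketch."""
    score = (material_r + mobility_r + safety_r) / 3
    if score > 1.1:
        return "Tal (attack)"
    if score < 0.9:
        return "Petrosian (defend)"
    return "Capablanca (maneuver)"

# Hypothetical numbers for a position where White is much more active:
print(suggested_style(ratio(39, 39), ratio(42, 30), ratio(1.0, 0.8)))
```

The point of the exercise is only to show the shape of the system: the position is reduced to numbers, and the numbers pick the plan and the attitude.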
     He's not the first to suggest the concept of space, time and force. Znosko-Borovsky published The Middle Game in Chess, in which he focused on those elements, decades ago. Back in the 1950s, Larry Evans also did a hack job on the same subjects.
     Relying on the games of three world champions with distinctive playing styles, Shashin explains how his system works in practice. How well does it work? In Amazon book reviews, one reviewer said the book is very deep and hard to follow and added that the system may not be applicable to tournament games when you are under time constraints. Another reviewer more or less agreed. 
     To me the book sounds like a modern computer version of Horowitz' Point Count Chess in which he listed positional factors that were worth points and anti-positional factors that were negative points. After all the adding and subtracting was done you knew who was better, but not how to utilize your advantage. 
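The Point Count Chess idea is simple enough to sketch in a few lines. The feature names and point values below are invented for illustration; Horowitz's book has its own catalogue, but the arithmetic works the same way:

```python
# A toy point count in the spirit of Horowitz: tally named positional
# plusses and minuses and sum them. Positive totals favor that side.
POINT_VALUES = {
    "control of center": +1,
    "bishop pair": +1,
    "rook on open file": +1,
    "passed pawn": +1,
    "doubled pawns": -1,
    "isolated pawn": -1,
    "exposed king": -1,
}

def point_count(features) -> int:
    """Sum the point values of the positional features a side has."""
    return sum(POINT_VALUES[f] for f in features)

white = point_count(["bishop pair", "rook on open file", "doubled pawns"])
black = point_count(["passed pawn", "exposed king"])
print(white - black)  # positive favors White
```

And there is the rub the paragraph above points out: the sum tells you who stands better, but not what to do about it.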
     I suppose this is, at a very basic level, the same approach engines use. The only trouble is engines calculate a lot faster than humans do, so they can evaluate more positions and come up with the moves with the highest point count a lot faster. Another problem with these systems where you add and subtract positional features is that they are meaningless if there's a tactic lurking in the position...miss that and it doesn't matter how many positional plusses you have.
     As a quick experiment I played a 16-game match at 4 minutes per side pitting Stockfish 9 against ShashChess and was somewhat surprised. Stockfish scored +7 -4 =5, but then I discovered many of the decisive games had been lost on time, by one engine or the other, so the result was meaningless. 
     Evaluating the final positions in all the games told a different story. Based on Stockfish's own evaluations, I "adjusted" the results and ShashChess had winning positions in two games and the rest were almost dead even. Not very scientific, just interesting. Here is one of the games won by ShashChess. 
