On the CCRL 40/40 list Deep Shredder 12 is pretty far down in 26th place with a 3029 rating. Shredder Classic 4 dates back to 2009, so by chess engine standards it's pretty long in the tooth, but it has the same engine and the same knowledge as Shredder 12 and Deep Shredder 12; its calculations are just not as fast or as efficient.
It has four different versions of the engine: Solid, Beancounter, Gambit and Kamikaze. I cannot say exactly how these engines differ because I've never seen an explanation.
The program also has what is called a Triple Brain, which combines the strengths of two different engines to get the best result when analyzing. While two engines are analyzing, a third engine decides which move or analysis is better. In its own search window the Triple Brain displays a value between 0 and 100 percent which indicates how sure it is about its choice.
This feature, from what I have read, gives poor results and, really, is not particularly useful. Think about it...in order to decide which move is best, wouldn't the Triple Brain engine have to be stronger than either of the other two? If it weren't, how would it know which move was better?!
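Shredder has never documented how the Triple Brain actually arbitrates, so the sketch below is only my guess at the general idea, written with the python-chess library: two engines each propose a move, a third "referee" engine evaluates the position after each proposal, and the gap between the two evaluations is turned into a rough 0 to 100 percent confidence figure. The engine paths, the time split and the confidence formula are all my own assumptions, not Shredder's.

```python
import chess
import chess.engine

def triple_brain(fen, path_a, path_b, path_ref, secs=60):
    """Crude two-engine arbiter in the spirit of the Triple Brain.

    NOT Shredder's (undocumented) algorithm -- just an illustration:
    engines A and B each propose a move, a referee engine evaluates the
    position after each proposal, and the eval gap becomes a rough
    0-100 percent confidence figure.
    """
    board = chess.Board(fen)
    limit = chess.engine.Limit(time=secs)

    a = chess.engine.SimpleEngine.popen_uci(path_a)
    b = chess.engine.SimpleEngine.popen_uci(path_b)
    ref = chess.engine.SimpleEngine.popen_uci(path_ref)
    try:
        move_a = a.play(board, limit).move
        move_b = b.play(board, limit).move

        scores = {}
        for move in {move_a, move_b}:
            child = board.copy()
            child.push(move)
            info = ref.analyse(child, chess.engine.Limit(time=secs / 2))
            # Score from the point of view of the side that just moved.
            scores[move] = info["score"].pov(board.turn).score(mate_score=10000)

        best = max(scores, key=scores.get)
        if move_a == move_b:
            return best, 100
        gap_cp = abs(scores[move_a] - scores[move_b])
        # One centipawn of difference ~ one percent of "certainty", capped at 100.
        return best, min(100, gap_cp)
    finally:
        for e in (a, b, ref):
            e.quit()
```

However the real thing works, the weakness is the same one raised above: the referee is no stronger than the engines it is judging.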
For the Triple Brain to work best you should combine two engines of about equal playing strength but different playing styles. I tried a little test after I ran across a position from a game touting the merits of Fritz 15, which found 31.Nd5 against Black Mamba. What other engines could find it, I wondered? The time limit was three minutes.
(Diagram: White to move)
Per the instructions I matched up the Beancounter and the Gambit versions and neither found the correct move. Triple Brain selected Beancounter's 31.Ke1, but it wasn't too sure that was the best move, only 6 percent.
In the matchup between Solid and Kamikaze the Triple Brain went with Kamikaze's move, but it was even less sure about that choice. Solid actually did select 31.Nd5, though it favored White by only 0.45; according to Stockfish the evaluation should be closer to 2.25.
How did the "modern" engines do? On a single core with a 3-minute time limit the following engines passed the test:
Stockfish 7 (found it instantly)
Houdini 1.5 (15 seconds)
Rybka 2.3.2 (20 seconds)
Gull 3 (25 seconds)
Fritz 12 (60 seconds)
Komodo 8 (75 seconds)
Zappa 1.1 (almost 180 seconds)
These engines failed: Crafty with 31.Rf7 (0.79) and SmarThink with 31.Qc4 (0.87). Giraffe failed miserably, thinking Black was winning: 31.Ke1 (-2.79).
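If you want to repeat this kind of test yourself, the python-chess library makes it easy to loop a list of UCI engines over one position with a fixed time limit. The engine paths and the FEN below are placeholders (the Black Mamba position is not reproduced here), so treat this as a sketch rather than the exact setup I used.

```python
import chess
import chess.engine

TEST_FEN = chess.STARTING_FEN   # placeholder -- substitute the actual test position
TARGET_MOVE = "Nd5"             # the move the engine is supposed to find
ENGINES = {                     # hypothetical paths to UCI engine binaries
    "Stockfish": "/usr/local/bin/stockfish",
    "Komodo": "/usr/local/bin/komodo",
}

def run_test(fen, target_san, engines, secs=180):
    """Give each engine `secs` seconds on one position and report its choice."""
    for name, path in engines.items():
        board = chess.Board(fen)
        engine = chess.engine.SimpleEngine.popen_uci(path)
        try:
            info = engine.analyse(board, chess.engine.Limit(time=secs))
            move = info["pv"][0]
            score = info["score"].white().score(mate_score=10000) / 100
            verdict = "PASS" if board.san(move) == target_san else "FAIL"
            print(f"{name}: {board.san(move)} ({score:+.2f}) {verdict}")
        finally:
            engine.quit()

run_test(TEST_FEN, TARGET_MOVE, ENGINES)
```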
I also tested the Triple Brain on a position from the analysis of the previously mentioned Forgacs-Tartakower game, one in which Stockfish and Komodo differed in their move selections. The Triple Brain went with Stockfish, but it wasn't very sure about its choice: after 10 minutes it showed only 5 percent certainty!
What does this prove? Shredder Classic 4 sells for 30 Euros ($34.15), but I can see no reason to make the purchase because the bundled engines just aren't that strong. Of course, you can add other engines, so basically you are just buying the GUI, which, when you think about it, is what you're doing when you buy most other programs, because you are going to add Stockfish to whatever engines came with it anyway. Any free GUI and one of the free engines will pack more punch for zero Euros ($0.00), so I do not see Shredder Classic 4 as being a great value. I did not test it, but on a couple of forums some posters claimed it was not that good at handling databases.
It also shows that if you are doing any analysis you need to give the engine some time to think; see my post Even Engines Need Time to Think. Even then, in some positions you just cannot be 100 percent sure what the best move is unless you're rated somewhere north of 2400.
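A quick way to see this for yourself is to ask the same engine for its choice at several time limits and watch the move and the evaluation settle down. Again, the engine path and the FEN here are only placeholders.

```python
import chess
import chess.engine

FEN = chess.STARTING_FEN                  # placeholder position
ENGINE_PATH = "/usr/local/bin/stockfish"  # hypothetical path to a UCI engine

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
try:
    board = chess.Board(FEN)
    for secs in (1, 10, 60, 180):
        info = engine.analyse(board, chess.engine.Limit(time=secs))
        move = board.san(info["pv"][0])
        score = info["score"].white().score(mate_score=10000) / 100
        print(f"{secs:>3}s: {move} ({score:+.2f})")
finally:
    engine.quit()
```

In the test above, several engines needed a minute or more of thinking before they settled on 31.Nd5, which is exactly the point.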