The price ranges from 25 euros (about $31) for the 32- and 64-bit version to 35 euros (about $44) for the high-end version. Opening books are sold separately for around $38...pricey!!
The development team consists of Domenico Lattanzi, a computer scientist, First Category player and qualified youth chess instructor; Andrea Manzo, a computer scientist living and working in France, who is a master of the Italian Chess Correspondence Association (ASIGC), has achieved his first IM norm, and is currently playing in the final of the 15th ICCF World Cup; and Roberto Munter, an industrial consultant.
The ATOMICC Computer Testing Blog has run several matches pairing Vitruvius against well-known engines:
Vitruvius 1.11C HEM x64 vs. Fritz 13: +53 =40 -7 (73.0 - 27.0)
Vitruvius 1.11C HEM x64 vs. Critter 1.4a 64-bit SSE4: +25 =54 -21 (52.0 - 48.0)
Houdini 2.0c Pro x64 vs. Vitruvius 1.11C HEM x64: +37 =44 -19 (59.0 - 41.0)
Rybka 4.1 SSE42 x64 vs. Vitruvius 1.11C HEM x64: +25 =59 -16 (54.5 - 45.5)
All engines used 1 CPU.
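For readers who like to quantify these results, a match score can be converted into a rough Elo difference with the standard logistic formula, diff = 400 * log10(p / (1 - p)), where p is the score fraction. A minimal Python sketch (the scores are just the Vitruvius results from the matches above; nothing engine-specific is assumed):

import math

def elo_diff(points, games):
    # Approximate Elo difference implied by a match score,
    # using the standard logistic model.
    p = points / games
    return 400 * math.log10(p / (1 - p))

# Vitruvius' points out of 100 in each ATOMICC match above.
for opponent, points in [("Fritz 13", 73.0),
                         ("Critter 1.4a", 52.0),
                         ("Houdini 2.0c Pro", 41.0),
                         ("Rybka 4.1", 45.5)]:
    print(f"vs {opponent}: {elo_diff(points, 100):+.0f} Elo")

By this crude estimate, Vitruvius comes out roughly 170 Elo above Fritz 13, about even with Critter, and some 30-65 Elo below Rybka and Houdini at this time control.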
It would appear that this is a very good engine, though not quite up to the level of the very best. On the other hand, if it really does play 'human-like' as the authors claim, it could be, for a lot of players, an engine worth investing in.
Is there a 'plays like a human' test that could be applied to Vitruvius? BTW, no offence intended, but I found your font and colour scheme in the blog post about Vitruvius very hard to decipher; but then, I'm red/green colour blind.
Alastair
The font color was rather dark, so I changed it to a lighter one...hope that helps. I am not aware of any test positions that would determine how human-like an engine plays; it appears to be a subjective judgment. Anybody have a suggestion?
I've just found a test set 'jig' which was run by the authors of Vitruvius (see Human Test on their website), where a group of engines tried to find the best move in 25 standard positions. Vitruvius was the winner. So positional problem-solving is perhaps one way of gauging human-like play. 25 positions seems rather a small set for any confident conclusion, however. PS The font is now easier to read. Many thanks.
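If anyone wants to rerun that kind of suite themselves, a minimal sketch with the python-chess library would look something like this (the engine path and EPD file name below are placeholders; the suite just needs standard 'bm' best-move opcodes):

import chess
import chess.engine

ENGINE_PATH = "./vitruvius"     # placeholder: any UCI engine binary
EPD_FILE = "human_test.epd"     # placeholder: a 25-position suite in EPD format

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
solved = total = 0
with open(EPD_FILE) as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        board = chess.Board()
        ops = board.set_epd(line)        # parses the FEN and the EPD opcodes
        best_moves = ops.get("bm", [])   # 'bm' lists the intended solution move(s)
        result = engine.play(board, chess.engine.Limit(time=10.0))
        total += 1
        if result.move in best_moves:
            solved += 1
print(f"Solved {solved} of {total} positions")
engine.quit()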
Alastair
The only way I have to gauge that is to compare engines' analysis against a grandmaster's actual moves in a classic game. Fritz, for example, will let you analyze a game with multiple engines at a time, so you could run, say, five engines over a game and see how each compares to the GM's actual moves. You would probably want to do it for a number of games, which is a little labor intensive. There is no "test set" that I am aware of.
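If you would rather script that than click through Fritz, the same idea fits in a few lines of python-chess: step through a game's moves and count how often the engine's choice matches the grandmaster's. The engine path and PGN file are placeholders, and a real test would want many games and longer thinking time:

import chess.engine
import chess.pgn

engine = chess.engine.SimpleEngine.popen_uci("./engine")  # placeholder: any UCI engine
with open("classic_game.pgn") as f:                       # placeholder: one GM game
    game = chess.pgn.read_game(f)

board = game.board()
matches = total = 0
for gm_move in game.mainline_moves():
    # Ask the engine for its move in the current position,
    # then compare it with the move actually played.
    result = engine.play(board, chess.engine.Limit(time=5.0))
    total += 1
    if result.move == gm_move:
        matches += 1
    board.push(gm_move)   # always follow the game as actually played
print(f"Engine matched the game move in {matches} of {total} positions")
engine.quit()

Note that this simple version counts both players' moves; filtering to one side, or to positions past the opening, would be easy refinements.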
If you want to try it without paying, the earlier versions of DeepSaros are its predecessors.
My experience has been that when engines go commercial there is usually not much of an increase in strength, so not much is gained by paying for the commercial version. The ATOMICC matches (time control: 10 minutes + 10 seconds) in which Houdini Pro walloped DeepSaros 71-29 probably indicate that at longer time controls Houdini 1.5 is still the best engine.
Hi to all,
Vitruvius was developed almost exclusively for chess analysis. Following requests from many customers, we have also developed a version with a more conservative style for computer-vs-computer matches, but it distorts the original style of the project.
Many strong chess players use Vitruvius for their preparation, including correspondence chess players.
The current World Champion, GM Marjan Semrl, believes that "the future" lies in the positional exchange sacrifice. For this reason, we have provided him with Vitruvius 1.11H (where the "H" stands for "Human"). He was very pleased with our engine, and he is giving us serious feedback to help us further improve the product.
Roberto Munter
(programmer of Vitruvius)
Oh, I forgot: congratulations on your beautiful site!