CHECKERS SOLVED??? - DEGREES OF SOLVABILITY

Topics about Checkers programming and other projects.
I hope to keep users informed of the ACF website projects.

Re: CHECKERS SOLVED??? - DEGREES OF SOLVABILITY

Postby Krzysztof Grzelak on Wed Aug 17, 2016 2:18 pm

Thank you Mr. Ed.
Krzysztof Grzelak
 
Posts: 80
Joined: Tue Aug 09, 2016 3:25 am
Location: Poland

Re: CHECKERS SOLVED??? - DEGREES OF SOLVABILITY

Postby megamau on Thu Aug 18, 2016 6:52 am

Sune wrote:I invite you to read this article https://dke.maastrichtuniversity.nl/m.w ... hapter.pdf and search for the word heuristic.
Again I invite you to do a search for transposition table in the above article. Also search for the word threshold. I might have misunderstood it, but it seems to me that this is futility pruning.


They use the word heuristic in a different sense. PN search is a heuristic "search strategy", meaning that it tries to guess in which order to expand the tree so as to prove the position with minimum effort.
It is not proven that PN search will always achieve the minimum effort or always beat alpha-beta.
The "results" of a completed PN search, however, are exact values and not heuristics. It is proven that a completed PN search proves or disproves the root node.
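To make the distinction concrete, here is a minimal sketch (my own illustration, not code from Chinook or the paper) of the proof/disproof-number bookkeeping behind PN search. The heuristic part is choosing which unknown leaf to expand next; once pn reaches 0 at the root, the proof is exact, not a guess:

```python
# Minimal proof/disproof-number bookkeeping over a small AND/OR game tree.
# pn = cost to PROVE a win for the OR player, dn = cost to DISPROVE it.
INF = float("inf")

class Node:
    def __init__(self, is_or, children=None, value=None):
        self.is_or = is_or            # True = OR node, False = AND node
        self.children = children or []
        self.value = value            # "WIN", "LOSS", or None (internal/unknown)

def set_numbers(node):
    """Recompute proof/disproof numbers bottom-up (simplified full re-evaluation;
    a real PN search updates only along the path to the expanded leaf)."""
    if node.value == "WIN":
        node.pn, node.dn = 0, INF
    elif node.value == "LOSS":
        node.pn, node.dn = INF, 0
    elif not node.children:           # unknown leaf: one expansion either way
        node.pn, node.dn = 1, 1
    else:
        for c in node.children:
            set_numbers(c)
        if node.is_or:                # prove ONE child, disprove ALL children
            node.pn = min(c.pn for c in node.children)
            node.dn = sum(c.dn for c in node.children)
        else:                         # AND node: prove ALL, disprove ONE
            node.pn = sum(c.pn for c in node.children)
            node.dn = min(c.dn for c in node.children)

# Toy tree: the root has one fully winning branch and one refuted branch.
root = Node(True, [
    Node(False, [Node(True, value="WIN"), Node(True, value="WIN")]),
    Node(False, [Node(True, value="LOSS")]),
])
set_numbers(root)
# pn == 0 at the root means the root is exactly proved a win.
```

A full PN search would repeatedly descend to the "most proving" unknown leaf (following minimum pn at OR nodes, minimum dn at AND nodes), expand it, and update the numbers; the sketch above shows only how the exact proof values propagate.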

Regarding the transposition table: being cited in the van den Herik and Winands paper does not mean that it is used in Chinook.
It is also possible to positively avoid collisions by using a large enough key. The easiest way to find out would be to write directly to Schaeffer.
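For illustration (a generic Zobrist-style scheme, my own sketch rather than anything confirmed about Chinook), the key is built by XOR-ing one fixed random 64-bit number per (square, piece) pair; a wider key makes collisions rarer, and XOR being self-inverse makes the key cheap to update incrementally as pieces move:

```python
import random

random.seed(42)  # fixed seed so the table is reproducible

SQUARES = 32                         # playable squares on a checkers board
PIECES = ["wm", "wk", "bm", "bk"]    # white/black man/king (hypothetical labels)
ZOBRIST = {(sq, p): random.getrandbits(64)
           for sq in range(SQUARES) for p in PIECES}

def signature(position):
    """XOR together the random numbers for every occupied square.
    position: iterable of (square, piece) pairs."""
    key = 0
    for sq, piece in position:
        key ^= ZOBRIST[(sq, piece)]
    return key

pos = [(12, "wm"), (21, "bm")]
k1 = signature(pos)
# Incremental update: XOR out the old square, XOR in the new one.
k2 = k1 ^ ZOBRIST[(12, "wm")] ^ ZOBRIST[(13, "wm")]   # white man moves 12 -> 13
assert k2 == signature([(13, "wm"), (21, "bm")])
```

Note the hedge Ed raises in the next post: a 64-bit key makes collisions astronomically unlikely, not strictly impossible, unless the scheme is a perfect hash.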

Regarding the threshold: I have not fully studied the paper, but it seems to me that it is a "search strategy" improvement, to find the "most promising" path of proof.
If a node becomes "too costly" to prove, it is abandoned in favor of other nodes.
However, the root node has to be fully proved regardless; in fact, for the root node the threshold is set to infinity.
megamau
 
Posts: 15
Joined: Sun Aug 07, 2016 8:08 pm

Re: CHECKERS SOLVED??? - DEGREES OF SOLVABILITY

Postby Ed Trice on Thu Aug 18, 2016 10:05 am

megamau wrote:It is also possible to positively avoid collisions, by using a large enough key.


The size of the key is not nearly as important as:

1. The number of entries in the hash table if it is not a "perfect hashing" scheme
2. The randomness of the process that generates the key. More random = better hashing and fewer collisions

As a counter-example to demonstrate the point, imagine a hash table with a single entry. No matter how good a key you select, everything maps to that one location, so every insertion after the first is guaranteed to be a collision.

There are Perfect Hashing Functions, which always map an entry into the hash table without duplicating an index and therefore produce zero collisions.
There are also Minimal Perfect Hashing Functions: perfect hash functions that additionally leave no unused entries.
In a sense, an Indexing Function is a Minimal Perfect Hashing Function without the overhead of creating a hash number.
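A concrete example of such an indexing function (my illustration, assuming k indistinguishable pieces on n squares) is the combinatorial number system, which ranks each placement bijectively onto 0..C(n,k)-1 with no hash key at all:

```python
from itertools import combinations
from math import comb

def index_of(squares):
    """Rank a set of occupied squares in the combinatorial number system.
    For k pieces this is a bijection onto 0 .. C(n, k) - 1: effectively a
    minimal perfect hash with no key-generation step."""
    return sum(comb(sq, i + 1) for i, sq in enumerate(sorted(squares)))

# Sanity check on a toy board: 2 pieces on 5 squares -> indices 0..9, each used once.
all_indices = sorted(index_of(c) for c in combinations(range(5), 2))
assert all_indices == list(range(comb(5, 2)))
```

This is the style of indexing endgame databases typically rely on: every legal placement gets a unique slot and no slot is wasted.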

If you have enough RAM, then the number of entries in your hash table should be a prime number that is twice as large as the number of entries you need to store. That way, your hash table will be less than 50% full, greatly reducing the chance of collisions, and allowing for faster collision resolution when collisions do occur.
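The sizing rule above can be sketched as follows (trial division is fine at these magnitudes; a huge table would warrant a proper primality test):

```python
def is_prime(n):
    """Trial-division primality test, adequate for table-sizing purposes."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def table_size(n_entries):
    """Smallest prime >= 2 * n_entries, keeping the table under 50% full."""
    m = 2 * n_entries
    while not is_prime(m):
        m += 1
    return m
```

For example, `table_size(10)` returns 23: the table holds 10 stored entries in 23 slots, staying below half full, and the prime modulus helps spread keys evenly when indexing with `key % size`.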
Ed Trice
 
Posts: 64
Joined: Wed Aug 10, 2016 9:16 pm
