An abstract core library of artificial-intelligence search algorithms for two-player zero-sum games.
feature | status | feature | status | feature | status |
---|---|---|---|---|---|
negamax algorithm | ✔️ | history heuristic | ✔️ | futility pruning | ❌ |
alpha-beta pruning | ✔️ | null window search | ✔️ | tactical & counter moves | ❌ |
iterative deepening search | ✔️ | quiescence search | ✔️ | internal iterative deepening | ❌ |
principal variation | ✔️ | best node search | ✔️ | enhanced transposition cutoff | ❌ |
transposition table | ✔️ | late move reduction | ❌ | probcut | ❌ |
killer heuristic | ✔️ | null move heuristic | ❌ | parallel search tree | ❌ |
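The first two implemented features, negamax with alpha-beta pruning, form the core of all the other techniques in the table. As a rough, self-contained sketch (not the library's actual implementation, which wires pruning into iterative deepening, transposition tables, and the other features listed above), the idea looks like this:

```csharp
using System;
using System.Collections.Generic;

// Plain negamax with alpha-beta pruning over an explicit toy game tree.
// Each node's Value is the static evaluation from the side to move at that node.
static class NegamaxSketch
{
    public sealed class Node
    {
        public int Value;
        public List<Node> Children = new List<Node>();
    }

    public static int Negamax(Node node, int depth, int alpha, int beta)
    {
        if (depth == 0 || node.Children.Count == 0)
            return node.Value; // leaf or depth limit: static evaluation

        int best = int.MinValue;
        foreach (var child in node.Children)
        {
            // Zero-sum symmetry: the opponent's best score is the negation of ours,
            // and the (alpha, beta) window is negated and swapped accordingly.
            int score = -Negamax(child, depth - 1, -beta, -alpha);
            best = Math.Max(best, score);
            alpha = Math.Max(alpha, score);
            if (alpha >= beta)
                break; // beta cutoff: the opponent would never allow this line
        }
        return best;
    }
}
```

The null window search and principal variation features in the table refine exactly this loop by re-searching with narrower (alpha, beta) windows.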
First, you need to model your specific board game.
**Move** - define the possible moves (for example with source and/or target cells, and/or with a piece type, ..)
- a move shouldn't contain information about which player is moving
- you must override the `GetHashCode` and `Equals` methods, because reference equality isn't enough
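For example, a hypothetical tic-tac-toe move could implement value equality like this (the class name and fields are illustrative, not part of the library):

```csharp
using System;

// Hypothetical tic-tac-toe move: only the target cell matters, and no
// player information is stored on the move itself.
public sealed class CellMove
{
    public readonly int Row;
    public readonly int Column;

    public CellMove(int row, int column)
    {
        Row = row;
        Column = column;
    }

    // Value equality: two moves to the same cell must compare equal, because
    // the solver hashes and compares moves reached via different search paths.
    public override bool Equals(object obj) =>
        obj is CellMove other && other.Row == Row && other.Column == Column;

    public override int GetHashCode() => Row * 31 + Column;
}
```

With modern C#, a `record` type would generate equivalent value equality automatically.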
**Position** - implement the `IPosition<TMove>` interface (typically with the game board, the move history, the next player to move, ..)
- sometimes `Identifier` is only a very strong hash key, because the number of different positions is greater than 2^64 (for example in chess)
- `IsQuiet` can reduce the horizon effect, but can initially just return true by default
- computing the static evaluation value is difficult, but very important for optimization
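A minimal sketch of what such an implementation might look like. The interface below is a stand-in shaped from the description above, not the library's real `IPosition<TMove>`; the actual member names and signatures may differ:

```csharp
using System;

// Hypothetical stand-in for the library's IPosition<TMove>;
// member names are guesses based on the description above.
public interface IPositionSketch<TMove>
{
    ulong Identifier { get; } // exact key, or a very strong hash of the state
    bool IsQuiet { get; }     // true when no tactical move is pending
    int Value { get; }        // static evaluation for the player to move
}

// Toy 3x3 board: small enough that Identifier can encode the exact state
// (2 bits per cell, 18 bits total), so no hashing is needed. Chess-sized
// games would use a Zobrist-style hash here instead.
public sealed class TicTacToeSketch : IPositionSketch<int>
{
    // 0 = empty, 1 = next player, 2 = opponent (cells indexed 0..8)
    public readonly int[] Cells = new int[9];

    public ulong Identifier
    {
        get
        {
            ulong id = 0;
            for (int i = 0; i < 9; i++)
                id = id * 4 + (ulong)Cells[i];
            return id;
        }
    }

    // Tic-tac-toe has no captures, so every position counts as quiet.
    public bool IsQuiet => true;

    // A real evaluation (e.g. counting open lines per player) would go here;
    // a constant 0 just means "undecided".
    public int Value => 0;
}
```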
**Rules** - implement `IRules<TPosition, TMove>` with the specific game logic
- the `InitialPosition` method should always create a new instance
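The "new instance" requirement matters because positions are mutated during search. A hypothetical illustration (the interface below is a simplified stand-in, not the library's real `IRules<TPosition, TMove>`):

```csharp
using System;

// Simplified stand-in for IRules<TPosition, TMove>; only the one rule
// discussed above is sketched here.
public interface IRulesSketch<TPosition>
{
    TPosition InitialPosition();
}

public sealed class TicTacToeRulesSketch : IRulesSketch<int[]>
{
    // Always return a fresh board: callers mutate positions while searching,
    // so handing out a shared cached instance would corrupt later games.
    public int[] InitialPosition() => new int[9];
}
```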
Then you can use the abstract solver component.
**SolverConfiguration** - implement the `ISolverConfiguration` interface
- set the maximum thinking time per move
- the parallelization configuration isn't used yet
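A sketch of what a configuration implementation might look like, assuming the interface exposes a per-move thinking-time limit (the property name here is a guess, not the library's real `ISolverConfiguration` member):

```csharp
using System;

// Hypothetical configuration object modelling only the thinking-time limit
// described above; the real interface's members may differ.
public sealed class ConfigSketch
{
    public TimeSpan MaximumThinkingTimePerMove { get; }

    public ConfigSketch(TimeSpan maximumThinkingTimePerMove)
    {
        // A non-positive budget would stop the iterative deepening loop
        // before even the first ply completes, so reject it up front.
        if (maximumThinkingTimePerMove <= TimeSpan.Zero)
            throw new ArgumentOutOfRangeException(nameof(maximumThinkingTimePerMove));
        MaximumThinkingTimePerMove = maximumThinkingTimePerMove;
    }
}
```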
**SolverFactory** - provides new alpha-beta solver instances
- for details, see the tic-tac-toe demo in the source code