AlphaBetaPlayer

com.phasmidsoftware.gambit.game.AlphaBetaPlayer
class AlphaBetaPlayer[P, S, M, Pl](me: Pl, depth: Int = ...)(using state: State[P, S], game: Game[S, M, Pl]) extends Player[S, M, Pl]

A generic alpha-beta pruning player. For more information, see https://en.wikipedia.org/wiki/Alpha–beta_pruning

Implements minimax search with alpha-beta pruning to a configurable depth. Alpha-beta pruning eliminates branches that cannot affect the final decision, reducing the effective branching factor from b to approximately √b in the best case — effectively doubling the achievable search depth for the same computation compared to plain minimax.
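The recursion described above can be sketched as follows. This is a minimal, self-contained illustration of depth-limited minimax with alpha-beta pruning over a toy game tree; the `Node`/`Leaf`/`Branch` types and the `alphaBeta` signature are invented for illustration and are not Gambit's actual API.

```scala
sealed trait Node
final case class Leaf(value: Double) extends Node
final case class Branch(children: List[Node]) extends Node

def alphaBeta(n: Node, alpha0: Double, beta0: Double, maximizing: Boolean): Double =
  n match
    case Leaf(v) => v                     // leaf: return the raw evaluation
    case Branch(children) =>
      var alpha = alpha0
      var beta  = beta0
      var best  = if maximizing then Double.NegativeInfinity else Double.PositiveInfinity
      val it    = children.iterator
      var cut   = false
      while it.hasNext && !cut do
        val v = alphaBeta(it.next(), alpha, beta, !maximizing)
        if maximizing then { best = math.max(best, v); alpha = math.max(alpha, v) }
        else               { best = math.min(best, v); beta  = math.min(beta, v) }
        if beta <= alpha then cut = true  // prune: remaining siblings cannot affect the result
      best
```

A call such as `alphaBeta(root, Double.NegativeInfinity, Double.PositiveInfinity, maximizing = true)` searches the whole tree but skips any subtree whose value is already known to be irrelevant to the root decision.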

== Heuristic Convention ==

State.heuristic(s) is positive when the player who just moved to reach s is doing well. This is consistent with the convention used throughout Gambit. At leaf nodes (terminal or depth limit reached), the raw heuristic is returned. The maximizing/minimizing logic correctly accounts for this convention.
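As a toy illustration of the sign convention only (not Gambit code): imagine a game state that records who just moved and their lead from their own point of view.

```scala
// Hypothetical toy state: `lead` is measured from the mover's perspective.
case class Pos(justMoved: String, lead: Int)

def heuristic(s: Pos): Double = s.lead.toDouble

// Positive means the player who just moved into this state is doing well:
heuristic(Pos("X", 3))   // 3.0: good for X, who just moved to reach this state
heuristic(Pos("O", -2))  // -2.0: bad for O, who just moved to reach this state
```

Because each state's value is scored from its mover's perspective, the alternating max/min levels of the search, rather than any explicit sign flip at the leaves, account for whose turn it is.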

== Move Ordering ==

Moves are ordered by heuristic value before recursing: highest first for the maximizing player, lowest first for the minimizing player. This shallow-search ordering increases the probability of early cut-offs, bringing performance closer to alpha-beta's theoretical best case.
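The ordering step can be sketched as below; `score` and `apply` are illustrative stand-ins for the heuristic and move-application functions, not Gambit's actual API.

```scala
// Score each successor with a one-ply heuristic lookahead, then sort:
// descending for the maximizer, ascending for the minimizer.
def orderMoves[S, M](s: S, moves: Seq[M], maximizing: Boolean)(
    score: S => Double, apply: (S, M) => S): Seq[M] =
  val scored = moves.map(m => m -> score(apply(s, m)))
  val sorted = scored.sortBy(_._2)            // ascending by heuristic
  (if maximizing then sorted.reverse else sorted).map(_._1)
```

Trying the most promising move first means alpha (or beta) tightens as early as possible, so later siblings are more likely to be pruned.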

== Mutable State ==

Alpha and beta bounds are tracked with mutable variables, and `boundary`/`break` (from `scala.util.boundary`) is used for early termination on a prune. This is intentional: alpha-beta pruning is fundamentally about early exit based on accumulated bounds, and forcing it into purely functional constructs would sacrifice both clarity and performance. Mutable local state is appropriate here for the same reason it is appropriate in high-performance sorting algorithms.
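A sketch of the pattern, using Scala 3's `scala.util.boundary`; the shape of the loop is illustrative and the real implementation's details may differ. Here each child is represented abstractly as a function from the current bounds to its value.

```scala
import scala.util.boundary, boundary.break

// Maximizing level: widen alpha as children are evaluated, and break out of
// the loop as soon as beta <= alpha (the remaining children are irrelevant).
def maximize(children: Seq[(Double, Double) => Double],
             alpha0: Double, beta0: Double): Double =
  var alpha = alpha0
  var best  = Double.NegativeInfinity
  boundary:
    for eval <- children do
      val v = eval(alpha, beta0)      // recurse with the current bounds
      best  = math.max(best, v)
      alpha = math.max(alpha, v)
      if beta0 <= alpha then break()  // prune
  best
```

Unlike a thrown exception, `break` here is a non-local return to a statically known enclosing `boundary`, so the early exit is both cheap and type-safe.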

Type parameters

M

the move type.

P

the proto-state type.

Pl

the player identity type.

S

the state type.

Value parameters

depth

search depth in plies (half-moves). Default 6. Higher values give stronger play at the cost of exponentially more computation.

game

implicit Game[S, M, Pl] for move generation and application.

me

this player's identity.

state

implicit State[P, S] for goal detection and heuristic.

Attributes

Supertypes
trait Player[S, M, Pl]
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

override def chooseMove(s: S, random: Random): Option[M]

Choose a move from the given state. Returns None if no move is available (terminal position).

Value parameters

random

a Random instance.

s

the current state.

Attributes

Returns

Some(move) or None.
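Hypothetical usage, assuming suitable `given` instances of `State` and `Game` are in scope for some concrete game; the type names `MyProto`, `MyState`, `MyMove`, `MyPlayer` and the values `first` and `initialState` are placeholders, not part of Gambit.

```scala
val player = AlphaBetaPlayer[MyProto, MyState, MyMove, MyPlayer](me = first, depth = 4)
player.chooseMove(initialState, new scala.util.Random(0L)) match
  case Some(m) => println(s"best move: $m")
  case None    => println("terminal position: no moves available")
```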

Inherited methods

def gameOver(result: GameResult[Pl], me: Pl): Unit

Called at the end of a game with the full result and this player's identity. Default implementation is a no-op. Override to implement learning or logging.

Value parameters

me

this player's identity, used to extract the relevant score.

result

the game result (all players' scores).

Attributes

Inherited from:
Player
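Since the default `gameOver` is a no-op, a subclass can override it, for example to log results. This sketch assumes `AlphaBetaPlayer` is open for extension; the subclass name is hypothetical.

```scala
class LoggingAlphaBetaPlayer[P, S, M, Pl](me: Pl, depth: Int)(
    using State[P, S], Game[S, M, Pl])
    extends AlphaBetaPlayer[P, S, M, Pl](me, depth):

  // Record the final result for this player; the superclass does nothing here.
  override def gameOver(result: GameResult[Pl], me: Pl): Unit =
    println(s"Game over for $me: $result")
```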