Prosocial Behavior in Games


The Story

Game theory often prescribes actions that people cannot perform. This book identifies where bounded agents can learn their way to equilibrium--and where they cannot.

Equilibrium concepts assume agents with no cognitive limits and complete knowledge of their strategic circumstances. When these conditions fail, equilibrium reasoning collapses, leaving agents without normative principles for their strategic situations. Existing approaches to bounded rationality--ecological rationality, epistemic game theory, evolutionary models, the learning-in-games tradition, and procedural rationality--provide no unified criteria for evaluating how bounded agents should adapt to uncertain, dynamic environments.

Ashton T. Sperry-Taylor introduces strategic bandits, a framework that closes this normative gap. It models opponent behavior as Memory-m rules--behavioral patterns that players discover through repeated observation, not equilibrium reasoning. Decision policies provide rigorous regret bounds that yield determinate, testable predictions about learning dynamics: not what bounded players should believe about opponents, but what they can discover through experience. Strategic bandits identify two independent boundaries for action-level learning: the strategic boundary, where myopic and strategic optimization diverge, and the estimation boundary, where learning dynamics prevent the identification of even the myopic optimum.

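The bandit framing described above can be illustrated with a minimal sketch. This is not the book's algorithm: the epsilon-greedy policy, the Tit-for-Tat opponent, and the Prisoner's Dilemma payoffs are all assumptions chosen for illustration. Tit-for-Tat is a Memory-1 rule (it conditions only on the learner's previous action), and the learner treats its two actions as bandit arms, estimating each arm's value purely from observed rewards rather than from equilibrium reasoning.

```python
import random

# Illustrative sketch only: an epsilon-greedy bandit learner facing a
# Memory-1 opponent (Tit-for-Tat) in the repeated Prisoner's Dilemma.
# Standard payoffs: mutual cooperation 3, mutual defection 1,
# temptation 5, sucker 0.
PAYOFF = {  # (my_action, opponent_action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(learner_history):
    """Memory-1 rule: copy the learner's previous action; cooperate first."""
    return learner_history[-1] if learner_history else "C"

def epsilon_greedy_bandit(rounds=2000, epsilon=0.1, seed=0):
    """Return the empirical mean reward per arm after repeated play."""
    rng = random.Random(seed)
    totals = {"C": 0.0, "D": 0.0}   # cumulative reward per arm
    counts = {"C": 0, "D": 0}       # pulls per arm
    history = []
    for _ in range(rounds):
        if rng.random() < epsilon or 0 in counts.values():
            action = rng.choice(["C", "D"])  # explore
        else:
            # exploit: pick the arm with the highest empirical mean
            action = max(totals, key=lambda a: totals[a] / counts[a])
        reward = PAYOFF[(action, tit_for_tat(history))]
        totals[action] += reward
        counts[action] += 1
        history.append(action)
    return {a: totals[a] / counts[a] for a in ("C", "D")}

print(epsilon_greedy_bandit())
```

Because the opponent reacts to the learner's own past play, each arm's observed reward depends on the history in which it was pulled: defection looks lucrative only while the opponent is still cooperating. Per-arm averages therefore conflate myopic and strategic value against reactive Memory-1 rules, which is one way to see why myopic and strategic optimization can diverge.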
Sperry-Taylor examines five canonical games: the Battle of the Sexes, the Centipede Game, Divide the Cake, the Prisoner's Dilemma, and the Stag Hunt. Each game reveals distinct features of learning under bounded rationality: unconditional convergence in dominance-solvable games, coordination failure driven by estimation dynamics, and the contingent efficiency of robust versus adaptive algorithms.
