EXPERIENCE-WEIGHTED ATTRACTION LEARNING IN NORMAL FORM GAMES

July, 1999 | COLIN CAMERER AND TECK-HUA HO
Experience-weighted attraction (EWA) learning is a model that combines elements of reinforcement learning and belief learning. Three parameters govern its dynamics: δ, the weight given to the foregone (hypothetical) payoffs of unchosen strategies; φ, which depreciates past attractions; and ρ, which depreciates the experience weight. Initial attractions and the initial experience weight N(0) determine how strongly prior beliefs and pre-game experience influence early choices. Choice probabilities follow a logit rule, with λ measuring sensitivity to attractions.

EWA nests reinforcement learning and weighted fictitious play (a form of belief learning) as special cases, so it provides a unified framework for comparing learning models, with parameters that have clear psychological interpretations. Because it allows flexible growth of attractions and substantial reinforcement of unchosen strategies, it combines the best features of both earlier approaches and captures learning dynamics in games that neither captures alone.

Parameter estimates from three experimental data sets put δ around 0.5, φ around 0.8–1, and ρ between 0 and φ. EWA fits and predicts better than both reinforcement and belief learning models in most cases, although belief models do better in some constant-sum games.
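The attraction update and logit choice rule described above can be sketched in a few lines. This is a minimal illustration of the EWA recursion, not the authors' code; the function names, array layout, and parameter values are our own assumptions:

```python
import numpy as np

def ewa_update(A, N, payoffs, chosen, delta, phi, rho):
    """One EWA step: N(t) = rho*N(t-1) + 1, and each strategy's attraction is
    A(t) = [phi*N(t-1)*A(t-1) + w*payoff] / N(t), where w = 1 for the chosen
    strategy and w = delta (the foregone-payoff weight) for unchosen ones.

    A       -- attractions A(t-1), one per strategy
    N       -- experience weight N(t-1)
    payoffs -- payoff each strategy would have earned against the
               opponent's realized action
    chosen  -- index of the strategy actually played
    """
    N_new = rho * N + 1.0
    w = np.full_like(A, delta)   # unchosen strategies: weight delta on foregone payoff
    w[chosen] = 1.0              # chosen strategy: full weight on realized payoff
    A_new = (phi * N * A + w * payoffs) / N_new
    return A_new, N_new

def logit_choice(A, lam):
    """Logit choice probabilities; lam is the sensitivity to attractions."""
    z = lam * A
    z = z - z.max()              # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()
```

Setting δ = 0 and ρ = 0 recovers a reinforcement-learning rule (only the realized payoff reinforces the chosen strategy), while δ = 1 and ρ = φ recovers weighted fictitious play, which is how EWA nests both families as special cases.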