This work introduces a unified framework for analyzing games in
greater depth than standard equilibrium analysis allows. In the
existing literature, players’ strategies are typically assigned
scalar values, and equilibrium concepts are used to identify
compatible choices. This approach, however, neglects the internal
structure of players and therefore fails to accurately model
observed behaviors.
To address this limitation, we propose an abstract definition of a
player, consistent with constructions in reinforcement learning.
Instead of defining games as external settings, our framework
defines them in terms of the players themselves. This offers a
language that enables a deeper connection between games and
learning. To illustrate the need for this generality, we study a
simple two-player game and show that, even in basic settings, a
sophisticated player may adopt dynamic strategies that cannot be
captured by simpler models or by compatibility analysis.
Toward a general definition of a player, we discuss natural
conditions on its components and define competition through the
players’ behavior. In the discrete setting, we consider players
whose estimates largely follow the standard framework from the
literature. We explore connections to correlated equilibrium and
highlight that dynamic programming naturally applies to all
estimates. In the mean-field setting, we exploit symmetry to
construct explicit examples of equilibria. Finally, we examine
relations to reinforcement learning.
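For reference, the notion of correlated equilibrium alluded to
above is the standard one from the literature, not a construction
specific to this framework: a distribution $\mu$ over action
profiles $(a_i, a_{-i}) \in A_1 \times \cdots \times A_n$ is a
correlated equilibrium if, for every player $i$ and every pair of
actions $a_i, a_i' \in A_i$,
\[
\sum_{a_{-i}} \mu(a_i, a_{-i})\,\bigl[u_i(a_i, a_{-i}) - u_i(a_i', a_{-i})\bigr] \ge 0,
\]
where $u_i$ denotes player $i$'s payoff function; that is, no
player can profit by deviating from an action recommended by
$\mu$.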