Find an approximate Nash equilibrium using fictitious play learning dynamics. Each player best-responds to the empirical frequency of the opponent's past play. Converges to a Nash equilibrium for certain game classes (e.g., zero-sum games, potential games). Returns the limiting mixed strategy profile after the specified number of iterations. Useful for studying learning dynamics and evolutionary game theory. [Tier: STANDARD, Credits: 2]
Documentation Index
Fetch the complete documentation index at: https://docs.fincept.in/llms.txt
Use this file to discover all available pages before exploring further.
API key for authentication. Get your key at https://api.fincept.in/auth/register
Parameters:
- Payoff matrix for Player 1 (example: [[3, 0], [5, 1]])
- Payoff matrix for Player 2 (example: [[3, 5], [0, 1]])
- Number of iterations to run (constraint: x >= 1; example: 1000)
- Random seed for reproducibility, null for random (example: 42)
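The fictitious-play dynamic described above can be sketched in a few lines of Python. This is a minimal illustration using the example parameters from this page (the two payoff matrices, 1000 iterations, seed 42), not the actual API implementation; the function name and interface are assumptions for the sketch.

```python
import random

def fictitious_play(payoff1, payoff2, iterations=1000, seed=42):
    # Hypothetical sketch of fictitious play for a two-player bimatrix game;
    # the real API's function name and signature may differ.
    rng = random.Random(seed)
    n, m = len(payoff1), len(payoff1[0])
    counts1 = [0] * n  # how often Player 1 has played each row
    counts2 = [0] * m  # how often Player 2 has played each column
    # Initialize with an arbitrary pure strategy for each player.
    counts1[rng.randrange(n)] += 1
    counts2[rng.randrange(m)] += 1
    for _ in range(iterations):
        # Each player best-responds to the opponent's empirical frequencies.
        br1 = max(range(n), key=lambda i: sum(payoff1[i][j] * counts2[j] for j in range(m)))
        br2 = max(range(m), key=lambda j: sum(payoff2[i][j] * counts1[i] for i in range(n)))
        counts1[br1] += 1
        counts2[br2] += 1
    # Empirical frequencies approximate the limiting mixed strategy profile.
    t1, t2 = sum(counts1), sum(counts2)
    return [c / t1 for c in counts1], [c / t2 for c in counts2]

p1, p2 = fictitious_play([[3, 0], [5, 1]], [[3, 5], [0, 1]], iterations=1000, seed=42)
```

In the example game, the second row and second column strictly dominate, so the empirical frequencies concentrate on that pure-strategy equilibrium, and `p1` and `p2` both end up close to [0, 1].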