Find an approximate Nash equilibrium using fictitious play learning dynamics. Each player best-responds to the empirical frequency of the opponent's past play. The dynamics converge to a Nash equilibrium for certain game classes (e.g., zero-sum games, potential games). Returns the limiting mixed strategy profile after the specified number of iterations. Useful for studying learning dynamics and evolutionary game theory.
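The dynamic is simple to sketch. The snippet below is a minimal illustrative implementation in Python/NumPy, not the service's actual code: each round, both players best-respond to the opponent's empirical action frequencies, and the empirical frequencies themselves are returned as the approximate equilibrium. The function name, initialization, and defaults are assumptions for illustration.

```python
import numpy as np

def fictitious_play(A, B, iterations=1000, seed=42):
    """Two-player fictitious play; returns the empirical mixed strategies.

    A: Player 1 payoffs (rows = P1 actions, columns = P2 actions)
    B: Player 2 payoffs (same shape, from Player 2's perspective)
    seed: None for a random seed
    """
    rng = np.random.default_rng(seed)
    n1, n2 = A.shape
    counts1 = np.zeros(n1)
    counts2 = np.zeros(n2)
    # Seed the history with one uniformly random action per player.
    counts1[rng.integers(n1)] += 1
    counts2[rng.integers(n2)] += 1

    for _ in range(iterations):
        # Empirical frequencies of each player's past play.
        freq1 = counts1 / counts1.sum()
        freq2 = counts2 / counts2.sum()
        # Best responses to the opponent's empirical strategy.
        br1 = np.argmax(A @ freq2)   # Player 1 responds to Player 2's history
        br2 = np.argmax(freq1 @ B)   # Player 2 responds to Player 1's history
        counts1[br1] += 1
        counts2[br2] += 1

    return counts1 / counts1.sum(), counts2 / counts2.sum()

# The example matrices below form a prisoner's dilemma, where the second
# action strictly dominates, so both empirical strategies concentrate on it.
A = np.array([[3, 0], [5, 1]])
B = np.array([[3, 5], [0, 1]])
s1, s2 = fictitious_play(A, B, iterations=1000, seed=42)
print(s1, s2)
```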
API key for authentication. Get your key at https://finceptbackend.share.zrok.io/auth/register
Payoff matrix for Player 1
[[3, 0], [5, 1]]
Payoff matrix for Player 2
[[3, 5], [0, 1]]
Number of iterations to run
x >= 1
1000
Random seed for reproducibility (null for random)
42
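Taken together, a request to this tool might look like the sketch below. The field names (api_key, payoff_matrix_p1, payoff_matrix_p2, iterations, seed) are assumptions for illustration only; consult the actual parameter schema for the exact names.

```python
# Hypothetical request payload; field names are illustrative assumptions,
# values are the examples given above.
request = {
    "api_key": "YOUR_API_KEY",              # key from the registration URL above
    "payoff_matrix_p1": [[3, 0], [5, 1]],   # Player 1 payoffs
    "payoff_matrix_p2": [[3, 5], [0, 1]],   # Player 2 payoffs
    "iterations": 1000,                     # must be >= 1
    "seed": 42,                             # null/None for a random seed
}
```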