FEAT: Add tools for repeated games #347
Conversation
I haven't followed the details. Only small comments at the moment.
General comments:
- Try `%prun` to detect bottlenecks.
- Try `method='interior-point'` (scipy/scipy#7123) as a `linprog` option, which is available with the latest dev version of scipy.
- Consider providing a `linprog_method` option to `outerapproximation`, to be passed to `linprog`.
- Modification of the docstring in `ce_util.py` should belong to a separate PR.
- Did you compare `gridmake` with `cartesian`?
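Inside IPython, `%prun outerapproximation(...)` gives the per-function breakdown directly. The script equivalent is the standard-library `cProfile` module; a minimal sketch (`slow_step` is a placeholder standing in for the routine being profiled, not code from this PR):

```python
import cProfile
import io
import pstats

def slow_step(n):
    # placeholder workload standing in for e.g. outerapproximation
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
result = slow_step(10_000)
profiler.disable()

# Sort by cumulative time so the bottlenecks surface first
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

The report lists each function with call counts and cumulative time, which is what `%prun` shows interactively.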
    best_dev_payoff_i, best_dev_payoff_1, best_dev_payoff_2, initialize_hpl,
    worst_value_i, worst_value_1, worst_value_2, worst_values, RepeatedGame,
    outerapproximation
    )
I don't think all the routines should be imported. Just import important routines (perhaps RepeatedGame and outerapproximation?).
    @@ -0,0 +1,384 @@
    """
    Filename: repeated_game.py
    Author: Quentin Batista
The author (Chase Coleman) of the original code should also be listed here.
    class RepeatedGame:
        """
        Class representing an N-player repeated form game.
    # Create the unit circle, points, and hyperplane levels
    C, H, Z = initialize_sg_hpl(rpd, nH)
    Cnew = copy.copy(C)
    warn("Maximum Iteration Reached")
    # Update hyperplane levels
    C = copy.copy(Cnew)
    # Set iterative parameters and iterate until converged
    itr, dist = 0, 10.0
    while (itr < maxiter) & (dist > tol):
    tol_int = int(round(abs(np.log10(tol))) - 1)

    # Find vertices that are unique within tolerance level
    vertices = np.vstack({tuple(row) for row in np.round(vertices, tol_int)})
Are these two blocks really necessary?
They seem to be -- here is the output I get without them:
array([[ 10.00000001, 3.97266052],
[ 10.00000001, 2.99999998],
[ 2.99999998, 10.00000001],
[ 3.97266052, 10.00000001],
[ 9.00000001, 8.99999999],
[ 8.99999999, 9.00000001],
[ 2.99999998, 3.00000001],
[ 2.99999998, 3. ],
[ 2.99999999, 2.99999999],
[ 2.99999998, 3. ],
[ 3.00000001, 2.99999998],
[ 3. , 2.99999998],
[ 2.99999999, 2.99999999],
[ 3. , 2.99999998],
[ 9.00000001, 9. ],
[ 9.00000001, 9. ],
[ 9. , 9.00000001],
[ 9. , 9.00000001]])
I wonder why we have these duplications. We should look into the algorithm.
In numpy version 1.13.1 they have updated the np.unique function to accept an axis argument -- Once this is released, we could do something like:
_, inds = np.unique(np.round(vertices, tol_int), axis=0, return_index=True)
vertices = vertices[inds, :]
or just
vertices = np.unique(np.round(vertices, tol_int), axis=0)
depending on whether we want the returned values to be rounded or not.
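For reference, a minimal self-contained sketch of the `np.unique(..., axis=0)` approach (requires numpy >= 1.13; the array below is illustrative, not taken from the PR output):

```python
import numpy as np

# Rows that are duplicates of each other up to ~1e-8
vertices = np.array([[2.99999998, 3.0],
                     [3.0,        2.99999998],
                     [2.99999999, 2.99999999],
                     [9.00000001, 9.0],
                     [9.0,        9.00000001]])
tol_int = 7  # round to 7 decimal places before comparing rows

# Deduplicate the rounded rows but keep the original (unrounded) values
_, inds = np.unique(np.round(vertices, tol_int), axis=0, return_index=True)
unique_vertices = vertices[inds, :]
print(unique_vertices.shape)  # (2, 2)
```

This collapses the five near-duplicate rows down to one representative near (3, 3) and one near (9, 9), which is exactly the cleanup the set-of-tuples trick above performs.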
    class RGUtil:
        def frange(start, stop, step=1.):
What's wrong with using np.linspace?
            x = x0 + i * step
            yield x
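For comparison, the same grid can be produced with `np.linspace`, which avoids accumulating floating-point error in the repeated `x0 + i * step` additions (a sketch; this assumes `frange` is meant to include both endpoints):

```python
import numpy as np

start, stop, step = 0.0, 1.0, 0.25
# Derive the number of points from the step, then let linspace place them
npts = int(round((stop - start) / step)) + 1
grid = np.linspace(start, stop, npts)
# grid contains 0.0, 0.25, 0.5, 0.75, 1.0
```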
    def unitcircle(npts):
I would put this in repeated_game.py, as it's very specific to the code there.
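For readers following along, `unitcircle(npts)` presumably returns `npts` equally spaced points on the unit circle as an `(npts, 2)` array; an illustrative reimplementation (not the PR's code):

```python
import numpy as np

def unitcircle(npts):
    # npts equally spaced angles on [0, 2*pi), then their (cos, sin) pairs
    angles = np.linspace(0.0, 2 * np.pi, npts, endpoint=False)
    return np.column_stack([np.cos(angles), np.sin(angles)])

pts = unitcircle(4)  # rows approx (1, 0), (0, 1), (-1, 0), (0, -1)
```

These points serve as the subgradient directions for the hyperplane approximation, which is why the routine is specific to repeated_game.py.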
    pure_nash_exists = pure_nash_brute(sg)

    if not pure_nash_exists:
        raise ValueError('No pure action Nash equilibrium exists in stage game')
No need to compute all the pure Nash equilibria.
Try:
    try:
        next(pure_nash_brute_gen(sg))
    except StopIteration:
        raise ValueError('No pure action Nash equilibrium exists in stage game')
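The early-exit behavior of `next()` on a generator can be illustrated without quantecon (`equilibria` below is a hypothetical stand-in for `pure_nash_brute_gen(sg)`):

```python
def equilibria(found):
    # stand-in for pure_nash_brute_gen(sg): lazily yields equilibria, if any
    if found:
        yield (0, 0)

def check_pure_nash(found):
    # Consume only the first equilibrium instead of enumerating all of them
    try:
        next(equilibria(found))
    except StopIteration:
        raise ValueError('No pure action Nash equilibrium exists in stage game')

check_pure_nash(True)  # returns quietly after the first equilibrium
```

The point of the suggestion is that `pure_nash_brute` enumerates every equilibrium up front, while the generator version stops as soon as one is found.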
This reverts commit b3e720a.
Here is the comparison between
It is interesting that
@oyamad and @QBatista: I refreshed my memory by looking at the implementation of cartesian and gridmake.
Here is a small gist to test the claims above: https://gist.github.com/albop/a4e6af9311fe9a9a392462ed757018bb
I haven't had time to properly review this, but it looks like @oyamad has done a pretty thorough review. The code looks well organized and nicely written in the 10 minutes I spent reading through it. One "Python" vs "Julia" comment that I have: in Julia you write functions that take a type as an argument, but in Python you typically attach these functions to the class itself as methods, so it might make sense to have some of these functions (in particular, any of the functions that take the RepeatedGame as an argument) become methods.

I'm not surprised that this code is a bit slower (this is precisely the type of example where one would expect Julia to perform better). It would be nice to investigate whether this could be sped up a little, but I don't think that is a first order priority for now.
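The Julia-style vs Python-style API difference can be sketched as follows (ToyGame and total_payoff are illustrative names, not the PR's):

```python
# Julia style: a free function that takes the object as its first argument
def total_payoff(game):
    return sum(game.payoffs)

# Python style: the same operation attached to the class as a method
class ToyGame:
    def __init__(self, payoffs):
        self.payoffs = payoffs

    def total_payoff(self):
        return sum(self.payoffs)

g = ToyGame([3, 9])
assert total_payoff(g) == g.total_payoff() == 12
```

The two are equivalent in behavior; the method form is simply the more idiomatic Python API.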
@cc7768 I agree. For speedups we might implement an LP solver (by a simple simplex method) in Numba (as a medium term project).
    (1-delta)*best_dev_payoff_2(rpd, a1) - delta*_w2

    lpout = linprog(c, A_ub=A, b_ub=b, bounds=(lb, ub))
    lpout = linprog(c, A_ub=A, b_ub=b, bounds=(lb, ub), method='interior-point')
Should be something like:

    def outerapproximation(..., linprog_method='simplex'):
        ...
        lpout = linprog(c, A_ub=A, b_ub=b, bounds=(lb, ub), method=linprog_method)

(scipy version 1.0.0 has not been released.)
@oyamad Here is a visualization of how the algorithm works on a few experiments: https://nbviewer.jupyter.org/github/QBatista/Notebooks/blob/master/JYC_Algo_Visualization.ipynb

An important observation is that the quality of the approximation is not weakly increasing with the number of gradients. For this game, choosing 32 gradients seems to give a better approximation than choosing 127 gradients, which I suspect is because of the symmetry of payoffs. Additionally, it appears that the approximation is very sensitive to the geometry of the initial guess.
These visualizations are very cool. Nice work @QBatista. I think @thomassargent30 would be quite interested in seeing these visualizations. Is there a reason you think that the value set corresponding to 32 subgradients looks much better than the one with 127? I kind of see what you mean since there are a few extra points between (3, 3) and the other two vertices, but they don't seem far off that line. I agree the solution will have some dependence on the geometry of the initial guess (I think the difference between the set with 127 points and the set with 128 points actually illustrates this nicely -- the 127 point version doesn't necessarily have (0, 1), (1, 0), (0, -1), and (-1, 0) to work with, which seem to be important components of this value set, so it has to place more points along what should be the vertical line).

This algorithm finds the smallest convex set (for a given set of subgradients!) that contains the fixed point of the B operator. A natural way to investigate this further would be to work on writing up the inner approximation, which is also described by JYC. The fixed point of the B operator should lie between the inner and outer approximations -- my suspicion is that there are some very cool graphs you could draw that show the dependence on the initial geometry and how the inner and outer approximations differ for different games/geometries.

Algorithmically, I'm hesitant to be too aggressive with popping vertices. It does seem that not all vertices end up mattering very much, but I suspect it is hard to determine algorithmically which are the ones that we should keep. For example, in this game (-1, 0) and (0, -1) seem to be important vertices. I would be interested in seeing some of the tests you described above though as a proof-of-concept.
@QBatista Animations look very nice. Maybe we need "a better understanding of the manner in which extreme points of the equilibrium payoff set are generated" (Abreu and Sannikov, 2014). We should study Abreu and Sannikov (as we talked).
@oyamad Is your last comment suggesting further development of this PR or a new project?
@mmcky I suggested a new project. (I am afraid the JYC algorithm is too inefficient for pure Python/NumPy.)
Adds tools for repeated games, including the outer hyperplane approximation described by Judd, Yeltekin, and Conklin (2002).
The implementation is currently much slower than the one in Julia:

    Julia:  1.532440 seconds (1.17 M allocations: 87.199 MiB, 0.92% gc time)
    Python: 1 loop, best of 3: 1min 8s per loop