diff --git a/lectures/commod_price.md b/lectures/commod_price.md
index 94bf0015a..f2f73655a 100644
--- a/lectures/commod_price.md
+++ b/lectures/commod_price.md
@@ -13,7 +13,7 @@ For this lecture we need the `yfinance` library.
 ```{code-cell} ipython3
 :tags: [hide-output]
 
-!pip install yfinance 
+!pip install yfinance
 ```
@@ -55,10 +55,10 @@ In general, prices depend on the choices and actions of suppliers, consumers and
 speculators.
 
 Our focus will be on the interaction between these parties, viewed through the
-lens of the standard competitive storage model 
+lens of the standard competitive storage model.
 
 This model was developed by
-{cite}`samuelson1971stochastic`, 
+{cite}`samuelson1971stochastic`,
 {cite}`wright1982economic`, {cite}`scheinkman1983simple`,
 {cite}`deaton1992on`, {cite}`deaton1996competitive`, and
 {cite}`chambers1996theory`.
@@ -81,7 +81,7 @@ Supply is exogenous, depending on "harvests".
 These days, goods such as basic computer chips and integrated circuits are
 often treated as commodities in financial markets, being highly standardized,
 and, for these kinds of commodities, the word "harvest" is not
-appropriate. 
+appropriate.
 
 Nonetheless, we maintain it for simplicity.
 ```
@@ -97,7 +97,6 @@ We begin with the following imports
 ```{code-cell} ipython3
 import numpy as np
 import matplotlib.pyplot as plt
-from numba import jit, vectorize
 from scipy.interpolate import interp1d
 from scipy.optimize import minimize_scalar, brentq
 from scipy.stats import beta
 ```
@@ -107,25 +106,25 @@ from scipy.stats import beta
 ## The Model
 
 Consider a market for a single commodity, whose price is given at $t$ by
-$p_t$. 
+$p_t$.
 
 The harvest of the commodity at time $t$ is $Z_t$.
 
 We assume that the sequence $\{ Z_t \}_{t \geq 1}$ is IID with common
-density function $\phi$. 
+density function $\phi$.
 
 The harvests take values in $S$, a subset of the nonnegative numbers.
 Speculators can store the commodity between periods, with $I_t$ units
 purchased in the current period yielding $\alpha I_t$ units in the next.
 
-Here $\alpha \in (0,1)$ is a depreciation rate for the commodity. 
+Here $\alpha \in (0,1)$ is a depreciation rate for the commodity.
 
 For simplicity, the risk free interest rate is taken to be
-zero, so expected profit on purchasing $I_t$ units is 
+zero, so expected profit on purchasing $I_t$ units is
 
 $$
-    \mathbb{E}_t \, p_{t+1} \cdot \alpha I_t - p_t I_t 
+    \mathbb{E}_t \, p_{t+1} \cdot \alpha I_t - p_t I_t
     = (\alpha \mathbb{E}_t \, p_{t+1} - p_t) I_t
 $$
@@ -162,7 +161,7 @@ $$
 $$ (eq:pmco)
 
-We also require that the market clears in each period. 
+We also require that the market clears in each period.
 
 We assume that consumers generate demand quantity $D(p)$ corresponding to
 price $p$.
@@ -173,14 +172,14 @@ Let $P := D^{-1}$ be the inverse demand function.
 Regarding quantities,
 
 * supply is the sum of carryover by speculators and the current harvest
-* demand is the sum of purchases by consumers and purchases by speculators. 
+* demand is the sum of purchases by consumers and purchases by speculators.
 
 Mathematically,
 
-* supply $ = X_t = \alpha I_{t-1} + Z_t$ 
-* demand $ = D(p_t) + I_t$ 
+* supply $ = X_t = \alpha I_{t-1} + Z_t$
+* demand $ = D(p_t) + I_t$
 
-Thus, the market equilibrium condition is 
+Thus, the market equilibrium condition is
 
 $$
     \alpha I_{t-1} + Z_t = D(p_t) + I_t
@@ -194,14 +193,14 @@ The initial condition $X_0 \in S$ is treated as given.
 
 ### An equilibrium function
 
-Now to find an equilibrium? 
+How can we find an equilibrium?
 
 Our path of attack will be to seek a system of prices that depend only on the
 current state.
 
-In other words, we take a function $p$ on $S$ and set $p_t = p(X_t)$ for every $t$. 
+In other words, we take a function $p$ on $S$ and set $p_t = p(X_t)$ for every $t$.
-Prices and quantities then follow 
+Prices and quantities then follow
 
 $$
     p_t = p(X_t), \quad I_t = X_t - D(p_t), \quad X_{t+1} = \alpha I_t + Z_{t+1}
 $$
@@ -218,7 +217,7 @@ To this end, suppose that there exists a function $p^*$ on $S$ satisfying
 
 $$
-    p^*(x) = \max 
+    p^*(x) = \max
     \left\{
         \alpha \int_0^\infty p^*(\alpha I(x) + z) \phi(z)dz, P(x)
     \right\}
@@ -233,14 +232,14 @@ $$
 $$ (eq:einvf)
 
 It turns out that such a $p^*$ will suffice, in the sense that [](eq:arbi)
-and [](eq:pmco) hold for the corresponding system [](eq:eosy). 
+and [](eq:pmco) hold for the corresponding system [](eq:eosy).
 
-To see this, observe first that 
+To see this, observe first that
 
 $$
     \mathbb{E}_t \, p_{t+1}
-    = \mathbb{E}_t \, p^*(X_{t+1}) 
-    = \mathbb{E}_t \, p^*(\alpha I(X_t) + Z_{t+1}) 
+    = \mathbb{E}_t \, p^*(X_{t+1})
+    = \mathbb{E}_t \, p^*(\alpha I(X_t) + Z_{t+1})
     = \int_0^\infty p^*(\alpha I(X_t) + z) \phi(z)dz
 $$
@@ -249,10 +248,10 @@ Thus [](eq:arbi) requires that
 $$
     \alpha \int_0^\infty p^*(\alpha I(X_t) + z) \phi(z)dz \leq p^*(X_t)
 $$
-
-This inequality is immediate from [](eq:dopf).
 
-Second, regarding [](eq:pmco), suppose that 
+This inequality is immediate from [](eq:dopf).
+
+Second, regarding [](eq:pmco), suppose that
 
 $$
     \alpha \int_0^\infty p^*(\alpha I(X_t) + z) \phi(z)dz < p^*(X_t)
@@ -260,9 +259,9 @@ $$
 
 Then by [](eq:dopf) we have $p^*(X_t) = P(X_t)$
 
-But then $D(p^*(X_t)) = X_t$ and $I_t = I(X_t) = 0$. 
+But then $D(p^*(X_t)) = X_t$ and $I_t = I(X_t) = 0$.
 
-As a consequence, both [](eq:arbi) and [](eq:pmco) hold. 
+As a consequence, both [](eq:arbi) and [](eq:pmco) hold.
 
 We have found an equilibrium.
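(Reviewer sketch, not part of the patch: the system (eq:eosy) above can be traced out for any candidate pricing function. Below, the pricing rule is just the inverse demand curve $P(x) = 1/x$ used later in this lecture — a placeholder, not the equilibrium $p^*$ — and the harvest draws match the Beta(5, 5) shock on $[1, 3]$ that appears further down in the file.)

```python
import numpy as np

α = 0.8
D = P = lambda x: 1.0 / x   # demand / inverse demand, as later in the lecture

def simulate_system(p, X0=1.0, T=200, seed=0):
    "Trace out p_t = p(X_t), I_t = X_t - D(p_t), X_{t+1} = α I_t + Z_{t+1}."
    rng = np.random.default_rng(seed)
    prices = np.empty(T)
    X = X0
    for t in range(T):
        prices[t] = p(X)
        I = X - D(prices[t])             # carryover implied by the pricing rule
        Z = 1.0 + 2.0 * rng.beta(5, 5)   # harvest draw on [1, 3]
        X = α * I + Z
    return prices

prices = simulate_system(P)  # with p = P, speculators never store (I_t = 0)
```

With the placeholder rule $p = P$ every unit is consumed, so $X_{t+1} = Z_{t+1}$ and prices simply mirror the harvest process; the point of the lecture's iteration scheme is to replace this placeholder with the true $p^*$.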
@@ -293,16 +292,16 @@ To implement our update step, it is helpful if we put [](eq:dopf) and
 This leads us to the update rule
 
 $$
-    p_{k+1}(x) = \max 
+    p_{k+1}(x) = \max
     \left\{
         \alpha \int_0^\infty p_k(\alpha ( x - D(p_{k+1}(x))) + z) \phi(z)dz, P(x)
     \right\}
 $$ (eq:dopf2)
 
-In other words, we take $p_k$ as given and, at each $x$, solve for $q$ in 
+In other words, we take $p_k$ as given and, at each $x$, solve for $q$ in
 
 $$
-    q = \max 
+    q = \max
     \left\{
         \alpha \int_0^\infty p_k(\alpha ( x - D(q)) + z) \phi(z)dz, P(x)
     \right\}
@@ -313,7 +312,7 @@ points $x_1, \ldots, x_n$.
 
 Then we get the corresponding values $q_1, \ldots, q_n$.
 
-Then we compute $p_{k+1}$ as the linear interpolation of 
+Then we compute $p_{k+1}$ as the linear interpolation of
 the values $q_1, \ldots, q_n$ over the grid $x_1, \ldots, x_n$.
 
 Then we repeat, seeking convergence.
@@ -330,7 +329,7 @@ The integral in [](eq:dopf3) is computed via Monte Carlo.
 
 ```{code-cell} ipython3
-α, a, c = 0.8, 1.0, 2.0 
+α, a, c = 0.8, 1.0, 2.0
 beta_a, beta_b = 5, 5
 mc_draw_size = 250
 gridsize = 150
@@ -339,7 +338,7 @@ grid = np.linspace(a, grid_max, gridsize)
 beta_dist = beta(5, 5)
 Z = a + beta_dist.rvs(mc_draw_size) * c    # Shock observations
-D = P = lambda x: 1.0 / x 
+D = P = lambda x: 1.0 / x
 tol = 1e-4
@@ -348,8 +347,8 @@ def T(p_array):
     new_p = np.empty_like(p_array)
 
     # Interpolate to obtain p as a function.
-    p = interp1d(grid, 
-                 p_array, 
+    p = interp1d(grid,
+                 p_array,
                  fill_value=(p_array[0], p_array[-1]),
                  bounds_error=False)
@@ -388,8 +387,8 @@ prices.
 ```{code-cell} ipython3
 # Turn the price array into a price function
-p_star = interp1d(grid, 
-                  price, 
+p_star = interp1d(grid,
+                  price,
                   fill_value=(price[0], price[-1]),
                   bounds_error=False)
diff --git a/lectures/markov_chains_I.md b/lectures/markov_chains_I.md
index d3292ca58..6db19fc53 100644
--- a/lectures/markov_chains_I.md
+++ b/lectures/markov_chains_I.md
@@ -11,7 +11,7 @@ kernelspec:
   name: python3
 ---
 
-+++ {"user_expressions": []}
+
 # Markov Chains: Basic Concepts and Stationarity
@@ -36,10 +36,10 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 :class: warning
 If you are running this lecture locally it requires [graphviz](https://www.graphviz.org)
 to be installed on your computer. Installation instructions for graphviz can be found
-[here](https://www.graphviz.org/download/) 
+[here](https://www.graphviz.org/download/)
 ```
 
-+++ {"user_expressions": []}
+
 ## Overview
@@ -74,7 +74,7 @@ import matplotlib as mpl
 from itertools import cycle
 ```
 
-+++ {"user_expressions": []}
+
 ## Definitions and examples
@@ -137,7 +137,7 @@ dot.edge("sr", "sr", label="0.492")
 dot
 ```
 
-+++ {"user_expressions": []}
+
 Here there are three **states**
@@ -180,11 +180,11 @@ We can collect all of these conditional probabilities into a matrix, as follows
 
 $$
 P =
-\begin{bmatrix} 
+\begin{bmatrix}
 0.971 & 0.029 & 0 \\
 0.145 & 0.778 & 0.077 \\
 0 & 0.508 & 0.492
-\end{bmatrix} 
+\end{bmatrix}
 $$
 
 Notice that $P$ is a stochastic matrix.
@@ -220,11 +220,11 @@ Given the above information, we can write out the transition probabilities in ma
 ```{math}
 :label: p_unempemp
 
-P = 
-\begin{bmatrix} 
+P =
+\begin{bmatrix}
 1 - \alpha & \alpha \\
 \beta & 1 - \beta
-\end{bmatrix} 
+\end{bmatrix}
 ```
 
 For example,
@@ -255,11 +255,11 @@ We'll cover some of these applications below.
 
 (mc_eg3)=
 #### Example 3
 
-Imam and Temple {cite}`imampolitical` categorize political institutions into three types: democracy (D), autocracy (A), and an intermediate state called anocracy (N). 
+Imam and Temple {cite}`imampolitical` categorize political institutions into three types: democracy (D), autocracy (A), and an intermediate state called anocracy (N).
 
-Each institution can have two potential development regimes: collapse (C) and growth (G). This results in six possible states: DG, DC, NG, NC, AG, and AC. 
+Each institution can have two potential development regimes: collapse (C) and growth (G). This results in six possible states: DG, DC, NG, NC, AG, and AC.
 
-The lower probability of transitioning from NC to itself indicates that collapses in anocracies quickly evolve into changes in the political institution. 
+The lower probability of transitioning from NC to itself indicates that collapses in anocracies quickly evolve into changes in the political institution.
 
 Democracies tend to have longer-lasting growth regimes compared to autocracies as indicated by the lower probability of transitioning from growth to growth in autocracies.
@@ -267,14 +267,14 @@ We can also find a higher probability from collapse to growth in democratic regi
 
 $$
 P :=
-\begin{bmatrix} 
+\begin{bmatrix}
 0.86 & 0.11 & 0.03 & 0.00 & 0.00 & 0.00 \\
 0.52 & 0.33 & 0.13 & 0.02 & 0.00 & 0.00 \\
 0.12 & 0.03 & 0.70 & 0.11 & 0.03 & 0.01 \\
 0.13 & 0.02 & 0.35 & 0.36 & 0.10 & 0.04 \\
 0.00 & 0.00 & 0.09 & 0.11 & 0.55 & 0.25 \\
 0.00 & 0.00 & 0.09 & 0.15 & 0.26 & 0.50
-\end{bmatrix} 
+\end{bmatrix}
 $$
 
 ```{code-cell} ipython3
@@ -297,7 +297,7 @@ for start_idx, node_start in enumerate(nodes):
         value = P[start_idx][end_idx]
         if value != 0:
             G.add_edge(node_start,node_end, weight=value, len=100)
-            
+
 pos = nx.spring_layout(G, seed=10)
 fig, ax = plt.subplots()
 nx.draw_networkx_nodes(G, pos, node_size=600, edgecolors='black', node_color='white')
@@ -327,7 +327,7 @@ The set $S$ is called the **state space** and $x_1, \ldots, x_n$ are the **state
 
 A **distribution** $\psi$ on $S$ is a probability mass function of length $n$, where $\psi(i)$ is the amount of probability allocated to state $x_i$.
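(Reviewer sketch, not part of the patch: the text asserts that transition matrices like the six-state Imam–Temple matrix above are stochastic matrices — nonnegative entries, rows summing to one. A quick check:)

```python
import numpy as np

# Six-state matrix from Example 3 (states DG, DC, NG, NC, AG, AC)
P = np.array([[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
              [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
              [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
              [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
              [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
              [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]])

assert np.all(P >= 0)                 # nonnegative entries
assert np.allclose(P.sum(axis=1), 1)  # each row is a distribution
```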
-A **Markov chain** $\{X_t\}$ on $S$ is a sequence of random variables taking values in $S$ 
+A **Markov chain** $\{X_t\}$ on $S$ is a sequence of random variables taking values in $S$
 that have the **Markov property**.
 
 This means that, for any date $t$ and any state $y \in S$,
@@ -390,9 +390,9 @@ In these exercises, we'll take the state space to be $S = 0,\ldots, n-1$.
 
 ### Writing our own simulation code
 
-To simulate a Markov chain, we need 
+To simulate a Markov chain, we need
 
-1. a stochastic matrix $P$ and 
-1. a probability mass function $\psi_0$ of length $n$ from which to draw a initial realization of $X_0$.
+1. a stochastic matrix $P$ and
+1. a probability mass function $\psi_0$ of length $n$ from which to draw an initial realization of $X_0$.
 
 The Markov chain is then constructed as follows:
@@ -416,7 +416,7 @@ cdf = np.cumsum(ψ_0)       # convert into cumulative distribution
 qe.random.draw(cdf, 5)   # generate 5 independent draws from ψ
 ```
 
-+++ {"user_expressions": []}
+
 We'll write our code as a function that accepts the following three arguments
@@ -449,7 +449,7 @@ def mc_sample_path(P, ψ_0=None, ts_length=1_000):
     return X
 ```
 
-+++ {"user_expressions": []}
+
 Let's see how it works using the small matrix
@@ -458,7 +458,7 @@ P = [[0.4, 0.6],
      [0.2, 0.8]]
 ```
 
-+++ {"user_expressions": []}
+
 Here's a short time series.
@@ -466,7 +466,7 @@ Here's a short time series.
 mc_sample_path(P, ψ_0=[1.0, 0.0], ts_length=10)
 ```
 
-+++ {"user_expressions": []}
+
 It can be shown that for a long series drawn from `P`, the fraction of the sample that takes value 0 will be about 0.25.
@@ -483,7 +483,7 @@ X = mc_sample_path(P, ψ_0=[0.1, 0.9], ts_length=1_000_000)
 np.mean(X == 0)
 ```
 
-+++ {"user_expressions": []}
+
 You can try changing the initial distribution to confirm that the output is always close to 0.25 (for the `P` matrix above).
@@ -501,7 +501,7 @@ X = mc.simulate(ts_length=1_000_000)
 np.mean(X == 0)
 ```
 
-+++ {"user_expressions": []}
+
 The `simulate` routine is faster (because it is [JIT compiled](https://python-programming.quantecon.org/numba.html#numba-link)).
@@ -513,7 +513,7 @@ The `simulate` routine is faster (because it is [JIT compiled](https://python-pr
 %time mc.simulate(ts_length=1_000_000) # qe code version
 ```
 
-+++ {"user_expressions": []}
+
 #### Adding state values and initial conditions
@@ -536,7 +536,7 @@ mc.simulate(ts_length=4, init='unemployed')
 mc.simulate(ts_length=4)  # Start at randomly chosen initial state
 ```
 
-+++ {"user_expressions": []}
+
-If we want to see indices rather than state values as outputs as we can use
+If we want to see indices rather than state values as outputs we can use
@@ -544,7 +544,7 @@ If we want to see indices rather than state values as outputs as we can use
 mc.simulate_indices(ts_length=4)
 ```
 
-+++ {"user_expressions": []}
+
 (mc_md)=
 ## Distributions over time
@@ -621,7 +621,7 @@ Hence the following is also valid.
 X_t \sim \psi_t \quad \implies \quad X_{t+m} \sim \psi_t P^m
 ```
 
-+++ {"user_expressions": []}
+
 (finite_mc_mstp)=
 ### Multiple step transition probabilities
@@ -657,20 +657,20 @@ Suppose that the current state is unknown --- perhaps statistics are available o
 
-We guess that the probability that the economy is in state $x$ is $\psi_t(x)$ at time t.
+We guess that the probability that the economy is in state $x$ is $\psi_t(x)$ at time $t$.
 
-The probability of being in recession (either mild or severe) in 6 months time is given by 
+The probability of being in recession (either mild or severe) in 6 months' time is given by
 
 $$
 (\psi_t P^6)(1) + (\psi_t P^6)(2)
 $$
 
-+++ {"user_expressions": []}
+
 (mc_eg1-1)=
 ### Example 2: Cross-sectional distributions
 
-The distributions we have been studying can be viewed either 
+The distributions we have been studying can be viewed either
 
-1. as probabilities or 
+1. as probabilities or
 1. as cross-sectional frequencies that a Law of Large Numbers leads us to anticipate for large samples.
 
 To illustrate, recall our model of employment/unemployment dynamics for a given worker {ref}`discussed above `.
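(Reviewer sketch, not part of the patch: the six-month recession probability $(\psi_t P^6)(1) + (\psi_t P^6)(2)$ from the hunk above, computed with Hamilton's estimated matrix from earlier in this lecture; the current distribution $\psi_t$ here is hypothetical.)

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],   # Hamilton's matrix (ng, mr, sr)
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

ψ_t = np.array([0.2, 0.6, 0.2])        # hypothetical beliefs about the current state

ψ_6 = ψ_t @ np.linalg.matrix_power(P, 6)   # distribution six months ahead
recession_prob = ψ_6[1] + ψ_6[2]           # mild or severe recession
```

Since each $\psi_t P^m$ is again a distribution, `ψ_6` sums to one and the recession probability lies strictly between zero and one.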
@@ -720,11 +720,11 @@ P = np.array([[0.4, 0.6],
 ψ @ P
 ```
 
-+++ {"user_expressions": []}
+
-Notice that `ψ @ P` is the same as `ψ`
+Notice that `ψ @ P` is the same as `ψ`.
 
-+++ {"user_expressions": []}
+
 Such distributions are called **stationary** or **invariant**.
@@ -765,7 +765,7 @@ distribution.
 
-We will come back to this when we introduce irreducibility in the next lecture
+We will come back to this when we introduce irreducibility in the next lecture.
 
-+++ {"user_expressions": []}
+
 ### Example
@@ -822,7 +822,7 @@ $$
 See, for example, {cite}`sargent2023economic` Chapter 4.
 
-+++ {"user_expressions": []}
+
 (hamilton)=
 #### Example: Hamilton's chain
@@ -836,7 +836,7 @@ P = np.array([[0.971, 0.029, 0.000],
 P @ P
 ```
 
-+++ {"user_expressions": []}
+
 Let's pick an initial distribution $\psi_0$ and trace out the sequence of distributions $\psi_0 P^t$ for $t = 0, 1, 2, \ldots$
@@ -900,13 +900,13 @@ First, we write a function to draw initial distributions $\psi_0$ of size `num_d
 def generate_initial_values(num_distributions):
     n = len(P)
     ψ_0s = np.empty((num_distributions, n))
-    
+
     for i in range(num_distributions):
         draws = np.random.randint(1, 10_000_000, size=n)
 
-        # Scale them so that they add up into 1
+        # Scale them so that they sum to 1
         ψ_0s[i,:] = np.array(draws/sum(draws))
-    
+
     return ψ_0s
 ```
@@ -929,14 +929,14 @@ def plot_distribution(P, ts_length, num_distributions):
     # Get the path for each starting value
     for ψ_0 in ψ_0s:
         ψ_t = iterate_ψ(ψ_0, P, ts_length)
-        
+
         # Obtain and plot distributions at each state
         for i in range(n):
             axes[i].plot(range(0, ts_length), ψ_t[:,i], alpha=0.3)
 
     # Add labels
     for i in range(n):
-        axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', 
+        axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
                         label = fr'$\psi^*({i})$')
         axes[i].set_xlabel('t')
         axes[i].set_ylabel(fr'$\psi_t({i})$')
@@ -948,7 +948,7 @@ def plot_distribution(P, ts_length, num_distributions):
 
-The following figure shows
+The following figure shows the result
 
 ```{code-cell} ipython3
-# Define the number of iterations 
+# Define the number of iterations
 # and initial distributions
 ts_length = 50
 num_distributions = 25
@@ -962,7 +962,7 @@
 plot_distribution(P, ts_length, num_distributions)
 ```
 
 The convergence to $\psi^*$ holds for different initial distributions.
 
-+++ {"user_expressions": []}
+
 #### Example: Failure of convergence
@@ -979,7 +979,7 @@ num_distributions = 30
 plot_distribution(P, ts_length, num_distributions)
 ```
 
-+++ {"user_expressions": []}
+
 (finite_mc_expec)=
 ## Computing expectations
@@ -1010,8 +1010,8 @@ where
 algebra, we'll think of as the column vector
 
 $$
-h = 
-\begin{bmatrix} 
+h =
+\begin{bmatrix}
 h(x_1) \\
 \vdots \\
 h(x_n)
@@ -1058,15 +1058,15 @@ $\sum_t \beta^t h(X_t)$.
 In view of the preceding discussion, this is
 
 $$
-\mathbb{E} 
+\mathbb{E}
     \left[
-        \sum_{j=0}^\infty \beta^j h(X_{t+j}) \mid X_t 
+        \sum_{j=0}^\infty \beta^j h(X_{t+j}) \mid X_t
         = x
     \right]
-    = x + \beta (Ph)(x) + \beta^2 (P^2 h)(x) + \cdots
+    = h(x) + \beta (Ph)(x) + \beta^2 (P^2 h)(x) + \cdots
 $$
 
-By the {ref}`Neumann series lemma `, this sum can be calculated using 
+By the {ref}`Neumann series lemma `, this sum can be calculated using
 
 $$
     I + \beta P + \beta^2 P^2 + \cdots = (I - \beta P)^{-1}
@@ -1082,11 +1082,11 @@ Imam and Temple {cite}`imampolitical` used a three-state transition matrix to de
 
 $$
 P :=
-\begin{bmatrix} 
+\begin{bmatrix}
 0.68 & 0.12 & 0.20 \\
 0.50 & 0.24 & 0.26 \\
 0.36 & 0.18 & 0.46
-\end{bmatrix} 
+\end{bmatrix}
 $$
 
-where rows, from top to down, correspond to growth, stagnation, and collapse.
+where rows, from top to bottom, correspond to growth, stagnation, and collapse.
diff --git a/lectures/schelling.md b/lectures/schelling.md
index c514f5c30..e3941fc3c 100644
--- a/lectures/schelling.md
+++ b/lectures/schelling.md
@@ -20,7 +20,7 @@ kernelspec:
 ```
 
-# Racial Segregation 
+# Racial Segregation
 
 ```{index} single: Schelling Segregation Model
 ```
@@ -191,30 +191,30 @@ class Agent:
         b = (self.location[1] - other.location[1])**2
         return sqrt(a + b)
 
-    def happy(self, 
+    def happy(self,
               agents,                # List of other agents
               num_neighbors=10,      # No. of agents viewed as neighbors
               require_same_type=5):  # How many neighbors must be same type
         """
             True if a sufficient number of nearest neighbors are of the same
-            type. 
+            type.
         """
         distances = []
-        
+
         # Distances is a list of pairs (d, agent), where d is distance from
         # agent to self
         for agent in agents:
             if self != agent:
                 distance = self.get_distance(agent)
                 distances.append((distance, agent))
-        
+
         # Sort from smallest to largest, according to distance
         distances.sort()
-        
+
         # Extract the neighboring agents
         neighbors = [agent for d, agent in distances[:num_neighbors]]
-        
+
         # Count how many neighbors have the same type as self
         num_same_type = sum(self.type == agent.type for agent in neighbors)
         return num_same_type >= require_same_type
@@ -248,9 +248,9 @@ def plot_distribution(agents, cycle_num):
     fig, ax = plt.subplots()
     plot_args = {'markersize': 8, 'alpha': 0.8}
     ax.set_facecolor('azure')
-    ax.plot(x_values_0, y_values_0, 
+    ax.plot(x_values_0, y_values_0,
         'o', markerfacecolor='orange', **plot_args)
-    ax.plot(x_values_1, y_values_1, 
+    ax.plot(x_values_1, y_values_1,
         'o', markerfacecolor='green', **plot_args)
     ax.set_title(f'Cycle {cycle_num-1}')
     plt.show()
@@ -274,12 +274,12 @@ The real code is below
 ```{code-cell} ipython3
 def run_simulation(num_of_type_0=600,
                    num_of_type_1=600,
-                   max_iter=100_000,       # Maximum number of iterations 
-                   set_seed=1234): 
+                   max_iter=100_000,       # Maximum number of iterations
+                   set_seed=1234):
 
     # Set the seed for reproducibility
-    seed(set_seed) 
-    
+    seed(set_seed)
+
     # Create a list of agents of type 0
     agents = [Agent(0) for i in range(num_of_type_0)]
     # Append a list of agents of type 1
@@ -287,11 +287,11 @@ def run_simulation(num_of_type_0=600,
 
     # Initialize a counter
     count = 1
-    
+
     # Plot the initial distribution
     plot_distribution(agents, count)
-    
-    # Loop until no agent wishes to move 
+
+    # Loop until no agent wishes to move
     while count < max_iter:
         print('Entering loop ', count)
         count += 1
@@ -303,7 +303,7 @@ def run_simulation(num_of_type_0=600,
                 no_one_moved = False
         if no_one_moved:
             break
-    
+
     # Plot final distribution
     plot_distribution(agents, count)
@@ -311,7 +311,7 @@ def run_simulation(num_of_type_0=600,
         print(f'Converged after {count} iterations.')
     else:
         print('Hit iteration bound and terminated.')
-    
+
 ```
 
 Let's have a look at the results.
@@ -346,7 +346,7 @@ The object oriented style that we used for coding above is neat but harder to
 optimize than procedural code (i.e., code based around functions rather than
 objects and methods).
 
-Try writing a new version of the model that stores 
+Try writing a new version of the model that stores
 
 * the locations of all agents as a 2D NumPy array of floats.
 * the types of all agents as a flat NumPy array of integers.
@@ -375,7 +375,6 @@ solution here
 ```{code-cell} ipython3
 from numpy.random import uniform, randint
-from numba import njit
 
 n = 1000                # number of agents (agents = 0, ..., n-1)
 k = 10                  # number of agents regarded as neighbors
@@ -386,13 +385,10 @@ def initialize_state():
     types = randint(0, high=2, size=n)   # label zero or one
     return locations, types
 
-@njit # Use Numba to accelerate computation
+
 def compute_distances_from_loc(loc, locations):
-    " Compute distance from location loc to all other points. "
-    distances = np.empty(n)
-    for j in range(n):
-        distances[j] = np.linalg.norm(loc - locations[j, :])
-    return distances
+    """ Compute distance from location loc to all other points. """
+    return np.linalg.norm(loc - locations, axis=1)
 
 def get_neighbors(loc, locations):
     " Get all neighbors of a given location. "
@@ -417,7 +413,7 @@ def count_happy(locations, types):
     for i in range(n):
         happy_sum += is_happy(i, locations, types)
     return happy_sum
-    
+
 def update_agent(i, locations, types):
     " Move agent if unhappy. "
     moved = False
@@ -432,11 +428,11 @@ def plot_distribution(locations, types, title, savepdf=False):
     colors = 'orange', 'green'
     for agent_type, color in zip((0, 1), colors):
         idx = (types == agent_type)
-        ax.plot(locations[idx, 0], 
-                locations[idx, 1], 
-                'o', 
+        ax.plot(locations[idx, 0],
+                locations[idx, 1],
+                'o',
                 markersize=8,
-                markerfacecolor=color, 
+                markerfacecolor=color,
                 alpha=0.8)
     ax.set_title(title)
     plt.show()
@@ -458,7 +454,7 @@ def sim_random_select(max_iter=100_000, flip_prob=0.01, test_freq=10_000):
         i = randint(0, n)
         moved = update_agent(i, locations, types)
 
-        if flip_prob > 0: 
+        if flip_prob > 0:
             # flip agent i's type with probability epsilon
             U = uniform()
             if U < flip_prob:
@@ -466,7 +462,7 @@
                 current_type = types[i]
                 types[i] = 0 if current_type == 1 else 1
 
         # Every so many updates, plot and test for convergence
-        if current_iter % test_freq == 0: 
+        if current_iter % test_freq == 0:
             cycle = current_iter / n
             plot_distribution(locations, types, f'iteration {current_iter}')
             if count_happy(locations, types) == n: