
encapsulated Markov-varying parameters in an object #639

Closed · sbenthall opened this issue Apr 21, 2020 · 4 comments

@sbenthall (Contributor) commented Apr 21, 2020

There seems to be consensus around the idea that it makes sense to represent continuous (#611) and discrete (#519) distributions with objects. This has been partially implemented in the current code.

Currently, when a parameter varies in a self-contained Markov process that's otherwise exogenous to the model, this is coded with a MrkvArray input and lists for the corresponding parameter values:

https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsPortfolioModel.py#L707-L712

This functionality could be wrapped up into an object.

This is what Dolo does:

https://github.com/EconForge/dolo/blob/7a12eb7117f80d95bd36955328329d6573b0c23c/dolo/numeric/processes.py#L229
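One way to picture the proposed encapsulation: a small class that holds the transition matrix and the per-state parameter values together, exposing the cumsum/searchsorted draw as a method. This is a minimal sketch, not HARK's actual API; the names `MarkovParameter`, `draw`, and `value` are illustrative assumptions.

```python
import numpy as np


class MarkovParameter:
    """Hypothetical container for a parameter that varies with an
    exogenous, self-contained Markov process."""

    def __init__(self, transition_matrix, values, seed=0):
        # Row-stochastic transition matrix and one parameter value per state
        self.transition_matrix = np.asarray(transition_matrix)
        self.values = np.asarray(values)
        self.rng = np.random.default_rng(seed)

    def draw(self, state):
        # Draw the next state from the row of the transition matrix,
        # using the same cumsum/searchsorted idiom as the current code
        cutoffs = np.cumsum(self.transition_matrix[state, :])
        return int(np.searchsorted(cutoffs, self.rng.uniform()))

    def value(self, state):
        # Parameter value associated with the given state
        return self.values[state]


# Example: a two-state process over a parameter (values are made up)
proc = MarkovParameter([[0.9, 0.1], [0.5, 0.5]], [0.02, 0.08], seed=42)
s = proc.draw(0)
v = proc.value(s)
```

This would replace the paired `MrkvArray` input and parameter lists with a single object, similar in spirit to Dolo's process classes.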

@sbenthall sbenthall added this to the 1.0.0 milestone Apr 21, 2020
@sbenthall sbenthall self-assigned this Jan 5, 2021
sbenthall added a commit to sbenthall/HARK that referenced this issue Jan 5, 2021
sbenthall added a commit to sbenthall/HARK that referenced this issue Jan 5, 2021
sbenthall added a commit that referenced this issue Jan 7, 2021
MarkovProcess object in distribution.py, towards #639
@sbenthall (Contributor, Author)

Integrate this object into the library...

@sbenthall (Contributor, Author)

Fully integrating this object into the library requires an expectation-taking function analogous to calcExpectations; see #625.

@sbenthall (Contributor, Author) commented Jan 20, 2021

Places to use this object:

https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsAggShockModel.py#L965-L980

cutoffs = np.cumsum(self.MrkvArray[self.MrkvNow, :])
MrkvDraw = Uniform(seed=self.RNG.randint(0, 2 ** 31 - 1)).draw(N=1)
self.MrkvNow = np.searchsorted(cutoffs, MrkvDraw)

maybe this?

draws = Uniform(seed=loops).draw(N=self.act_T_orig)
for s in range(draws.size):  # Add act_T_orig more periods
    MrkvNow_hist[t] = MrkvNow
    MrkvNow = np.searchsorted(cutoffs[MrkvNow, :], draws[s])
    t += 1
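The history-simulation loop above is the kind of logic a process object could absorb. A minimal self-contained sketch, assuming a plain NumPy transition matrix; the function name `simulate_history` is hypothetical, not HARK's API:

```python
import numpy as np


def simulate_history(transition_matrix, init_state, T, seed=0):
    """Simulate T periods of a discrete Markov chain.

    A sketch of what a MarkovProcess object's draw loop could
    encapsulate (hypothetical helper, not HARK's actual API).
    """
    P = np.asarray(transition_matrix)
    cutoffs = np.cumsum(P, axis=1)  # per-row cumulative probabilities
    draws = np.random.default_rng(seed).uniform(size=T)
    hist = np.empty(T, dtype=int)
    s = init_state
    for t in range(T):
        hist[t] = s
        # Next state: first cutoff exceeding the uniform draw
        s = int(np.searchsorted(cutoffs[s, :], draws[t]))
    return hist


hist = simulate_history([[0.9, 0.1], [0.2, 0.8]], 0, 100, seed=1)
```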

@sbenthall
Copy link
Contributor Author

Related:

def initializeSim(self):
    self.shocks["MrkvNow"] = np.zeros(self.AgentCount, dtype=int)
    IndShockConsumerType.initializeSim(self)
    if (
        self.global_markov
    ):  # Need to initialize markov state to be the same for all agents
        base_draw = Uniform(seed=self.RNG.randint(0, 2 ** 31 - 1)).draw(1)
        Cutoffs = np.cumsum(np.array(self.MrkvPrbsInit))
        self.shocks["MrkvNow"] = np.ones(self.AgentCount) * np.searchsorted(
            Cutoffs, base_draw
        ).astype(int)
    self.shocks["MrkvNow"] = self.shocks["MrkvNow"].astype(int)

if (
    not self.global_markov
):  # Markov state is not changed if it is set at the global level
    N = np.sum(which_agents)
    base_draws = Uniform(seed=self.RNG.randint(0, 2 ** 31 - 1)).draw(N)
    Cutoffs = np.cumsum(np.array(self.MrkvPrbsInit))
    self.shocks["MrkvNow"][which_agents] = np.searchsorted(
        Cutoffs, base_draws
    ).astype(int)

# Draw random numbers that will be used to determine the next Markov state
if self.global_markov:
    base_draws = np.ones(self.AgentCount) * Uniform(
        seed=self.RNG.randint(0, 2 ** 31 - 1)
    ).draw(1)
else:
    base_draws = Uniform(seed=self.RNG.randint(0, 2 ** 31 - 1)).draw(
        self.AgentCount
    )
dont_change = (
    self.t_age == 0
)  # Don't change Markov state for those who were just born (unless global_markov)
if self.t_sim == 0:  # Respect initial distribution of Markov states
    dont_change[:] = True

# Determine which agents are in which states right now
J = self.MrkvArray[0].shape[0]
MrkvPrev = self.shocks["MrkvNow"]
MrkvNow = np.zeros(self.AgentCount, dtype=int)
MrkvBoolArray = np.zeros((J, self.AgentCount))
for j in range(J):
    MrkvBoolArray[j, :] = MrkvPrev == j

# Draw new Markov states for each agent
for t in range(self.T_cycle):
    Cutoffs = np.cumsum(self.MrkvArray[t], axis=1)
    right_age = self.t_cycle == t
    for j in range(J):
        these = np.logical_and(right_age, MrkvBoolArray[j, :])
        MrkvNow[these] = np.searchsorted(
            Cutoffs[j, :], base_draws[these]
        ).astype(int)
if not self.global_markov:
    MrkvNow[dont_change] = MrkvPrev[dont_change]
self.shocks["MrkvNow"] = MrkvNow.astype(int)
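The per-state loop in the snippet above (group agents by current state, then searchsort each group against that state's row of cutoffs) is another piece an encapsulated process object could absorb. A vectorized sketch; `draw_states` is a hypothetical helper, not HARK's API:

```python
import numpy as np


def draw_states(transition_matrix, current_states, draws):
    """Draw next Markov states for many agents at once.

    Sketch of the grouped searchsorted update used above
    (hypothetical helper, not HARK's actual API).
    """
    cutoffs = np.cumsum(np.asarray(transition_matrix, dtype=float), axis=1)
    current_states = np.asarray(current_states, dtype=int)
    new_states = np.empty_like(current_states)
    for j in range(cutoffs.shape[0]):
        # Agents currently in state j all use row j's cutoffs
        these = current_states == j
        new_states[these] = np.searchsorted(cutoffs[j, :], draws[these])
    return new_states


states = draw_states(
    [[0.9, 0.1], [0.2, 0.8]],
    [0, 1, 0, 1],
    np.array([0.95, 0.1, 0.5, 0.99]),
)
```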

sbenthall added a commit that referenced this issue Jan 21, 2021
use MarkovProcess in consumption saving models, fixes #639