markov process and dynamic programming
Show older comments
Im relatively new in Matlab, and im having some problems when using finite horizon dynamic programming while using 2 state variables,one of which follows a Markov process. This is as far as ive gone:
r=0.04;
B=100/106;
T=60;
delta=2;
na=150;
ny=2;
b=-1; %
amin=b;
amax=5;
rho=0.95;
sigma=0.5; %desv std del proceso
sizes=1;
Pi=Tauchen(rho,sigma,ny,sizes); % I worked this on some other file
pi=Pi(1:ny,:);
yg=exp(Pi(ny+1,:));
ag=linspace(amin,amax,na);
lag=linspace(log(amin+1),log(amax+1),na)';
ag=exp(lag)-1;
%Up to this part, its ok
V=zeros(na,ny,T); %
VT1=zeros(na,ny,T);
Vms=zeros(na,1,T);
G=zeros(na,ny,T);
for t=1:T
for m=1:na
for s=1:ny
for j=1:na
if yg(s)+(1+r)*ag(m)-ag(j)<0;
Vms(j,1,t)=-1e10;
else
Vms(j,1,t)=((yg(s)+(1+r)*ag(m)-ag(j))^(1-delta)-1)/(1-delta)+ B*pi(s,:)*VT1(j,:,t)';
end
end
[h_vf h_pf]=max(Vms);
V(m,s,t)=h_vf;
G(m,s,t)=ag(h_pf);
end
end
VT1=V;
end
it seems matrix sizes dont match, but im not sure if this is the way to solve a finite horizon problem. with a infinite horizon case, i used a "while"(instead of looping the time variable) until it converged and it worked just fine. Id really appreciate it if someone shed some light on this!
Answers (0)
Categories
Find more on Markov Chain Models in Help Center and File Exchange
Products
Community Treasure Hunt
Find the treasures in MATLAB Central and discover how the community can help you!
Start Hunting!