Functional derivative of anonymous function of parameters with known errors

Hi all, I have a fairly complicated function of time and several parameters:
f(t;p_1,...,p_k),
each of which has a known error {dp_1,...,dp_k} (which give 95% confidence intervals). I would like to plot this function along with confidence bands derived from these known errors.
Currently, I have my function written like this:
p_1 = 1;
p_2 = 2;
...
p_k = k;
dp_1 = .1;
...
f = @(t) somefunction(t,p_1,...,p_k);
Ideally, I would be able to generate two functions that I could then plot as bands around f, of the following form (D denotes the partial derivative)
f+ = Df/Dp_1 * dp_1 + ... + Df/Dp_k * dp_k;
f- = Df/Dp_1 * (-dp_1) + ... + Df/Dp_k * (-dp_k);
to approximate the functional derivative. Does anybody have suggestions on the best way to calculate these functions?
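For concreteness, here is a rough sketch of the kind of thing I'm picturing, with a toy two-parameter model standing in for my real somefunction and central finite differences approximating the partials (I take the absolute value of each term so the band stays on the correct side -- I'm not sure this is the best approach):
% Rough sketch only: a toy stand-in for the real model, with two parameters.
somefunction = @(t,p_1,p_2) p_1*exp(-p_2*t);   % placeholder model
p_1 = 1;  dp_1 = 0.1;                          % nominal values and 95% half-widths
p_2 = 2;  dp_2 = 0.2;
h = 1e-6;                                      % finite-difference step
f = @(t) somefunction(t,p_1,p_2);              % nominal curve
% central-difference approximations to the partials w.r.t. each parameter
Dfdp_1 = @(t) (somefunction(t,p_1+h,p_2) - somefunction(t,p_1-h,p_2))/(2*h);
Dfdp_2 = @(t) (somefunction(t,p_1,p_2+h) - somefunction(t,p_1,p_2-h))/(2*h);
band = @(t) abs(Dfdp_1(t))*dp_1 + abs(Dfdp_2(t))*dp_2;   % first-order half-width
fplus  = @(t) f(t) + band(t);
fminus = @(t) f(t) - band(t);
t = linspace(0,5,200);
plot(t,f(t),'-', t,fplus(t),'--', t,fminus(t),'--')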
*******************************************
Alternatively, I know that it's easy to calculate the total derivative of f with respect to t (i.e., the non-parameter variable) -- does anybody know of an easy way to establish relatively accurate upper and lower confidence bands on f(t) directly from f'(t), given confidence intervals for its parameters? It seems to me that you can't get around needing some kind of knowledge about the partial derivatives, but I'm adding this addendum to the question in case I'm wrong about that.
  2 Comments
John D'Errico on 28 Sep 2017
1. Evaluate the function at a sequence of times.
2. Plot said values using plot.
WTP? How the errors on your function parameters enter in does not seem to be relevant, since you asked to plot the function.
Ty Easley on 28 Sep 2017
Sorry, I think you may have seen an incomplete version of this question -- I accidentally submitted it once before I finished writing it.


Accepted Answer

John D'Errico on 28 Sep 2017
Edited: Walter Roberson on 29 Sep 2017
This is the domain of a branch of statistics known as statistical tolerancing: you have a function of some parameters with assumed distributions, and you want to compute the distribution of that function. That the "function" is an ODE is irrelevant. At any point in time, you can theoretically compute the distribution of the function at that point.
However, simple use of the first derivative is often woefully inadequate.
For example, given that x is normally distributed, with mean 0 and variance 1, and a very simple function f(x)=x^2, would the first derivative of f be sufficient to put useful bands around the result? Somehow, I doubt it.
There are many common schemes to solve the general tolerancing problem.
1. Monte Carlo simulation - thus generate random samples for the parameters, then compute the "function" for each. Confidence intervals will be based on percentiles at each time step.
2. First- or second-order Taylor series approximations. This is what your question was originally aimed at. It may take more work than you need to do, compared to the alternatives.
3. Modified Taguchi methods. These reduce the problem to an implicit numerical integration. But they make the problem trivial to solve. See this reference for a paper I wrote with Nick Zaino.
Do I need to provide some examples? Sigh. Let me see...
Monte Carlo is trivial:
%% Assume that u is normally distributed, with mean 1, standard deviation 0.1.
% Solve the problem
%
% y' = u*y
%
% over the domain [0,1], subject to the initial condition y(0), which
% is also assumed to be normally distributed, with mean of 0.5 &
% standard deviation of 0.12.
%
% So u and y(0) both have SOME value, but we don't know what that value is.
% Only that they are normally distributed with the given mean and variance.
% In fact, distributions other than normal can be allowed, but the solution
% must deal properly with the distribution.
%% Monte Carlo...
nsim = 1000;
y0 = randn(1,nsim)*0.12 + 0.5;
u = randn(1,nsim)*0.1 + 1;
nt = 20;
ysim = zeros(nt,nsim);
yp = @(t,y,U) U*y;
tspan = linspace(0,1,nt)';
for i = 1:nsim
  [tpred,ysim(:,i)] = ode45(@(t,y) yp(t,y,u(i)),tspan,y0(i));
end
% compute 5% and 95% percentiles of ysim
plo = 0.05;
phi = 0.95;
ysim = sort(ysim,2);
ylo = ysim(:,floor(plo*nsim));
yhi = ysim(:,floor(phi*nsim));
plot(tspan,[ylo,yhi],'-')
grid on
So the green line is the 95th percentile and the blue line is the 5th percentile.
Remember that the value at t=0 along a given curve is NOT a sample independent of the value at t=0.1, etc.
Other methods will produce similar predictions. The difference is that the modified Taguchi methods require solving the differential equation system only a relatively few times, even if a higher order method is employed.
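If you would rather see a shaded band around a central curve than two separate lines, something along these lines should work, reusing the variables from the code above and taking the pointwise median as the center line:
% Shade between the 5% and 95% curves, then overlay the pointwise median.
ymed = ysim(:,round(0.5*nsim));   % approximate median curve
fill([tspan;flipud(tspan)],[ylo;flipud(yhi)],[0.8 0.85 1],'EdgeColor','none')
hold on
plot(tspan,ymed,'b-','LineWidth',1.5)
hold off
grid on
xlabel('t'), ylabel('y(t)')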
  2 Comments
Ty Easley on 29 Sep 2017
Edited: Ty Easley on 29 Sep 2017
Thank you for this answer, it's super helpful! I hadn't encountered the term "statistical tolerancing" before, but this makes quite a bit of sense and is something I'll make use of in the future.
I was taught (as an undergraduate physics major, so maybe it's not so surprising) to propagate errors by using first-order expansions and treating error terms like linear functionals, but honestly a Monte Carlo sounds like less work and better results (and I'm unfamiliar with Taguchi methods, but it sounds like that might be true for those too?) -- thanks for your time and the answer!
Ty Easley on 29 Sep 2017
Edited: Ty Easley on 29 Sep 2017
Also, do you have a good rule of thumb for how the complexity of these computations tends to grow with the number of parameters I have distributions for? If I find that N simulations give me good function bounds over variations in one parameter, is it relatively safe to guess that N*k simulations will give me good bounds over k parameters, assuming the function is roughly as sensitive to variation in each? Or should I expect something more like N^k?

