Exp Fit Error: Error using fit>iFit (line 340) NaN computed by model function, fitting cannot continue. Try using or tightening upper and lower bounds on coefficients.
Hi, the attached data gives the error in the title. I am trying to fit it with (1-a)*exp(-x*b)+a*exp(-x*(c+b)). It fitted in cftool, but only after many changes of the starting points (and even then c came out negative, while it must be positive). If anyone can tell me what to do to resolve this I would greatly appreciate it.
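[For context, this error usually goes away once fit is given explicit start points (and, if needed, bounds) rather than letting it pick random ones. A minimal sketch, assuming the attached data is already loaded into column vectors x and y:]
ft = fittype('(1-a)*exp(-x*b)+a*exp(-x*(c+b))');
opts = fitoptions(ft);                  % NonlinearLeastSquares options for this fittype
opts.StartPoint = [0.5 0.004 0.001];    % coefficients in alphabetical order: a, b, c
opts.Lower      = [0 0 0];              % optional: keep a, b, c non-negative
mdl = fit(x, y, ft, opts)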
Answers (2)
John D'Errico on 26 Jan 2018 (edited 26 Jan 2018)
Here is your model:
y = (1-a)*exp(-x*b)+a*exp(-x*(c+b))
If you require that c be a positive number, then writing d = b + c puts the model in the form
(1-a)*exp(-x*b)+a*exp(-x*d)
so you are fitting a mixture (a linear combination) of two exponentials to your data, with the requirement that d be greater than b.
So what? That is, what if d turned out to be less than b? Who CARES? You could simply swap b and d after the fit.
My point is that there is no reason to have any inequality constraint at all. The inequality constraint and the requirement that c be positive are meaningless. Just fit the mixture of two exponentials; when the fit is done, if the rates come out in the wrong order, swap them, as in the sketch below, because this model is completely symmetric.
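[To make that concrete, here is one way it could be done; a sketch, assuming the data is in column vectors x and y: fit the symmetric reparameterized model, then reorder the rates afterwards.]
ft2 = fittype('(1-a)*exp(-x*b) + a*exp(-x*d)');       % d plays the role of b + c
mdl2 = fit(x, y, ft2, 'StartPoint', [0.5 0.002 0.02]);
cv = coeffvalues(mdl2);                               % returned in alphabetical order: a, b, d
a = cv(1); b = cv(2); d = cv(3);
if d < b                                              % rates came out in the "wrong" order
    [b, d] = deal(d, b);                              % swap them...
    a = 1 - a;                                        % ...and flip the mixture weight
end
c = d - b;                                            % recover c, now guaranteed >= 0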
Next, look at your data. Sorry, but this is close to being crap for data, in terms of fitting the model you have decided to fit. It could be worse though. ;-)
plot(x,y,'o')
grid on
The noise is fairly high. There is too little data. But even with that, you should get a result that is viable.
g = fittype('(1-a)*exp(-x*b)+a*exp(-x*(c+b))')
g =
General model:
g(a,b,c,x) = (1-a)*exp(-x*b)+a*exp(-x*(c+b))
mdl = fit(x,y,g,'startpoint',[.5 .004 .001])
mdl =
General model:
mdl(x) = (1-a)*exp(-x*b)+a*exp(-x*(c+b))
Coefficients (with 95% confidence bounds):
a = 0.2964 (-0.3669, 0.9597)
b = 0.002533 (-0.000174, 0.005241)
c = 0.01399 (-0.02677, 0.05476)
plot(mdl)
hold on
plot(x,y,'o')
Had c been negative, it would merely mean that I chose a poor set of starting values. But even then, there is no reason why you could not have transformed the model to a completely equivalent one that had c positive!
You got NaNs because you allowed the optimizer to choose a random set of starting values. Computers cannot intelligently look at your data and your model, and know that b and c should be quite small, on the order of 0.002 or so, and that a should be a number somewhere around 0.5.
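[If you would rather not pick numbers by hand, one option, purely an illustrative heuristic and not part of this answer, is to derive rough starting guesses from the data itself, reusing the fittype g defined above:]
% Assumes x and y are column vectors and y decays from about 1 toward 0.
[xs, k] = sort(x);
ys = y(k);
bGuess = -log(max(ys(end), eps)) / xs(end);    % slow rate from the last (largest-x) point
cGuess = 10*bGuess;                            % assume the fast rate is roughly 10x the slow one
aGuess = 0.5;                                  % neutral 50/50 mixture to start
mdl = fit(x, y, g, 'StartPoint', [aGuess bGuess cGuess]);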
Ok, let's look more carefully at your data. I claimed it was crap. Hey, don't shoot the messenger. I'm just telling you the truth, but I'm also sure that I'll get someone saying the data looks perfectly good. Let's see why I made that claim.
You have three parameters to estimate. Look at the three data points near x==0. At x==0, your model will generate 1 no matter what the parameters are, since (1-a)*exp(0) + a*exp(0) = (1-a) + a = 1. So the point at x==0 is completely useless, and the others near zero in x provide essentially no additional information content.
You have ONE point at x==900, and another at x==500. They provide independent information.
Finally, you have another cluster of points in the range [50,200]. A bit of noise in that, but a tight cluster of points gives you little more information than about two points.
So, effectively, you have roughly the equivalent of 4 data points, trying to fit 3 parameters. Having crap for data usually equates to WIDE confidence intervals on the fit. What did you get?
Coefficients (with 95% confidence bounds):
a = 0.2964 (-0.3669, 0.9597)
b = 0.002533 (-0.000174, 0.005241)
c = 0.01399 (-0.02677, 0.05476)
With wide confidence intervals like that, it tells me I might as well be throwing darts at a board to get as good estimates. And my dart throwing ability is legendary only for being as bad as it is.
The power of your data is VERY low when trying to estimate this model. You will get estimates of the parameters, but do I have much confidence in the predicted values? Nope. Ergo, it is indeed crap...
Alex Sha on 27 Feb 2024
There are two solutions:
1:
Sum Squared Error (SSE): 0.0371625579290083
Root of Mean Square Error (RMSE): 0.0609611006536204
Correlation Coef. (R): 0.979459288572487
R-Square: 0.959340497970921
Parameter Best Estimate
--------- -------------
a 0.296903174691143
b 0.00253167397105703
c 0.0139603055962534
2:
Sum Squared Error (SSE): 0.0371625579290084
Root of Mean Square Error (RMSE): 0.0609611006536204
Correlation Coef. (R): 0.979459288584932
R-Square: 0.959340497995302
Parameter Best Estimate
--------- -------------
a 0.703096838046455
b 0.0164919803565096
c -0.0139603063368952
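[For what it is worth, these two solutions are exactly the swap symmetry described in the other answer: a2 = 1 - a1, b2 = b1 + c1, c2 = -c1. A quick check that both parameter sets trace the same curve, with the values copied from the tables above:]
f  = @(a,b,c,x) (1-a).*exp(-x.*b) + a.*exp(-x.*(c+b));
xq = linspace(0, 900, 200);                  % x-range of the data discussed above
y1 = f(0.296903174691143, 0.00253167397105703,  0.0139603055962534, xq);
y2 = f(0.703096838046455, 0.0164919803565096,  -0.0139603063368952, xq);
max(abs(y1 - y2))                            % essentially zero, up to round-off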