How to use implicit model functions in Curve Fitting Toolbox
I am trying to fit some data to a custom implicit equation (a 2nd-degree polynomial in x and y; see below) and obtain the coefficients (a, b, and c) for that data.
But cftool does not have an implicit option in its custom equation tab.
Any help on the topic is appreciated. I understand that people are coding it in various ways, and I would really appreciate it if a GUI-based answer were also possible.
Accepted Answer
An analytical solution is possible here:
% The model a*x^2 + b*y^2 + c*y = 0 means the coefficient vector spans
% the (approximate) null space of A = [x.^2, y.^2, y].
[x,y] = deal(x(:),y(:));
A = [x.^2, y.^2, y];
s = std(A,1);               % scale columns to comparable size
[~,~,V] = svd(A./s,0);      % null vector = last right singular vector
abc = V(:,end)./s(:);       % undo the column scaling
abc = abc/abc(end);         % normalize so that c = 1
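If you want to sanity-check this idea outside MATLAB, a rough NumPy translation (on noiseless synthetic data, with a = b = 0.2 and c = 1 as in the comments below) would look something like:

```python
import numpy as np

# Noiseless synthetic data on a*x^2 + b*y^2 + c*y = 0,
# with (a, b, c) = (0.2, 0.2, 1), the values used later in this thread.
a_true, b_true = 0.2, 0.2
x = np.linspace(-1.0, 1.0, 200)
# branch of the curve passing through the origin
y = (-1.0 + np.sqrt(1.0 - 4.0 * a_true * b_true * x**2)) / (2.0 * b_true)

# The coefficient vector spans the null space of A = [x^2, y^2, y].
A = np.column_stack([x**2, y**2, y])
s = A.std(axis=0)                 # scale columns to comparable size
_, _, Vt = np.linalg.svd(A)       # smallest singular value comes last
_, _, Vt = np.linalg.svd(A / s)
abc = Vt[-1] / s                  # undo the column scaling
abc = abc / abc[-1]               # normalize so that c = 1
print(np.round(abc, 6))           # recovers approximately [0.2, 0.2, 1]
```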
6 Comments
Bruno Luong
on 12 Nov 2020
This method does not seem to be robust when only part of the data is observed:
a = 0.2;
b = 0.2;
c = 1;
s = 0.01;                   % noise std
% simulate some (x,y) data on a*x^2 + b*y^2 + c*y = 0
x = 2*rand(1,1000) - 1;
d = 1 - 4*b*a*x.^2;
y = (-1 + sqrt(d))./(2*b);
% add noise
x = x + s*randn(size(x));
y = y + s*randn(size(y));
% Matt's svd method (this variant centers A instead of scaling its columns)
[x,y]=deal(x(:),y(:));
A=[x.^2,y.^2,y];
[~,~,V]=svd(A-mean(A,1),0);
abc=V(:,end);
abc=abc/abc(end);
% Plot
a=abc(1)
b=abc(2)
c=abc(3);
xx = linspace(-1,1);
yy = linspace(-1,1).';
z = a*xx.^2+b*yy.^2+yy;
close all
figure
contour(xx,yy,z,[0 0],'r');
hold on
plot(x,y,'.')
axis equal
legend('fit','data')
kdb acml
on 13 Nov 2020
This method does not seem to be robust when only part of the data is observed
I think I've managed to improve it:
a=0.2;
b=0.2;
c=1;
s=0.01; % noise std
% simulate some (x,y) data
x=2*rand(1,1000)-1;
d=1-4*b.*a*x.^2;
y=(-1+sqrt(d))./(2*b);
% add noise
x = x + s*randn(size(x));
y = y + s*randn(size(y));
% Matt's svd method, with column scaling
[x,y] = deal(x(:),y(:));
A = [x.^2, y.^2, y];
s = std(A,1);               % note: reuses s, which held the noise std above
[~,~,V] = svd(A./s,0);
abc = V(:,end)./s(:);
abc = abc/abc(end);
% Plot
a=abc(1)
b=abc(2)
c=abc(3);
xx = linspace(-1,1);
yy = linspace(-1,1).';
z = a*xx.^2+b*yy.^2+yy;
close all
figure
plot(x,y,'k.')
hold on
contour(xx,yy,z,[0 0],'Color','c','LineWidth',2);
axis equal
legend('data','fit')
ylim([-0.3,0.3])
Bruno Luong
on 13 Nov 2020
Edited: Bruno Luong
on 13 Nov 2020
Better indeed. I would argue this is even "more" correct (increase the noise s to 0.03 and you'll see the effect; full code tlsqr.m attached).
[x,y] = deal(x(:),y(:));
A = [x.^2, y.^2, y];
C = cov(A);
S = sqrtm(C);               % matrix square root of the covariance
S = diag(diag(S));          % keep only its diagonal
% your method is equivalent to using
% S = diag(std(A))          % or
% S = diag(sqrt(diag(C)));
[~,~,V] = svd(A/S,0);
abc = S\V(:,end);
abc = abc/abc(end);
The method still bothers me a great deal, since the scaling affects both the signal spread and the noise spread. Also, the noise in A is no longer Gaussian, since the transformation is nonlinear, etc.
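Outside MATLAB, this covariance-based scaling could be sketched in NumPy roughly as follows (the matrix square root is computed via an eigendecomposition, since cov(A) is symmetric positive definite; the synthetic data are the same as in the earlier demo):

```python
import numpy as np

# Synthetic data as in the earlier demo: a = b = 0.2, c = 1, noise std 0.01
rng = np.random.default_rng(0)
a_true, b_true, noise = 0.2, 0.2, 0.01
x = 2.0 * rng.random(1000) - 1.0
y = (-1.0 + np.sqrt(1.0 - 4.0 * a_true * b_true * x**2)) / (2.0 * b_true)
x = x + noise * rng.standard_normal(x.shape)
y = y + noise * rng.standard_normal(y.shape)

A = np.column_stack([x**2, y**2, y])
C = np.cov(A, rowvar=False)
w, U = np.linalg.eigh(C)               # C is symmetric positive definite
sqrtC = (U * np.sqrt(w)) @ U.T         # matrix square root, like sqrtm(C)
S = np.diag(np.diag(sqrtC))            # keep only the diagonal of sqrtm(C)
_, _, Vt = np.linalg.svd(A @ np.linalg.inv(S))
abc = np.linalg.solve(S, Vt[-1])       # undo the scaling
abc = abc / abc[-1]                    # normalize so that c = 1
print(np.round(abc, 3))
```

With this seed and noise level the recovered coefficients land close to the true (0.2, 0.2, 1), which is the behavior the comment above describes.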

Bruno Luong
on 12 Nov 2020
Edited: Bruno Luong
on 12 Nov 2020
Why not just solve it with linear algebra? This straightforward method seems to do the job:
xy = load('new.txt');       % the OP's data file
x = xy(:,1);
y = xy(:,2);
% fix c = 1 and solve a*x^2 + b*y^2 = -y in the least-squares sense
P = -([x.^2, y.^2] \ y);
a = P(1);
b = P(2);
c = 1;
% Plot
xi = linspace(min(x),max(x));
yi = linspace(min(y),max(y)).';
z = reshape(a*xi.^2+b*yi.^2+c*yi,[length(yi) length(xi)]);
close all
figure
contour(xi,yi,z,[0 0],'r');
hold on
plot(x,y,'.')
legend('fit','data')
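Since new.txt isn't available here, a rough NumPy equivalent of the least-squares step on synthetic stand-in data (model a*x^2 + b*y^2 + y = 0 with c fixed to 1) might look like:

```python
import numpy as np

# Synthetic stand-in for the OP's data: points on a*x^2 + b*y^2 + y = 0
a_true, b_true = 0.2, 0.2
x = np.linspace(-1.0, 1.0, 200)
y = (-1.0 + np.sqrt(1.0 - 4.0 * a_true * b_true * x**2)) / (2.0 * b_true)

# With c fixed to 1, solve [x^2, y^2] @ [a, b] = -y in the least-squares sense
P, *_ = np.linalg.lstsq(np.column_stack([x**2, y**2]), -y, rcond=None)
a, b = P
print(round(a, 6), round(b, 6))   # recovers a = b = 0.2 on exact data
```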

5 Comments
Bruno Luong
on 12 Nov 2020
Honestly, this is just for illustration. I have no idea of the noise characteristics of the experimental data provided by the OP, and furthermore the algebraic method always has a bias (I don't fit anything directly related to the x/y metric here). I'm not even sure whether x and y have the same units (a necessary condition for total least squares). So until all those issues are sorted out, there is no need to bother.
Bruno Luong
on 13 Nov 2020
Edited: Bruno Luong
on 13 Nov 2020
Vast topic. All of the regression literature deals in some way with the noise issue. For a start, you can look at the noise covariance matrix (normalized Gaussian noise) and regularization techniques: the art of finding the right balance between bias (systematic error) and statistical error (which depends on the inverse of the Hessian at the minimum), i.e., the way you formulate the objective function to achieve the regression goal.
In practice the noise is sometimes not Gaussian, and it's a big mess even at the theoretical level.
If you really want to fight against bias, you first need to understand and characterize your measurement noise. If you start by saying "I do not know what noise characteristics means," then you have a long, long way to go.
Total least squares tries to deal with the problem where the error affects both the measurement query points and the values; Matt's use of the SVD is inspired by it, but his nice idea (sorry to say) is flawed for various reasons.
EDIT: Matt's TLS method is much better after data "normalization".