How to solve an implicit equation with two variables?

Dear fellows,
I have a function like this pf=f(p1,p2). The problem is "pf" is a vecotor so in fact there are two output for this function, pf(1) and pf(2). And there is not explicit expression for function f. It is a big function and involves a lot of logic compare, etc inside it. I want to solve for p1 and p2 such that pf(1)=1000 and pf(2)=1000. Do you know how to realize this?
Cheers, Xueqi

6 Comments

Hi xueqi,
If you have the Optimization Toolbox, the lsqnonlin() function lets you solve this as a nonlinear least-squares problem. Basically, you would call it like this:
%I'm assuming p1 and p2 to be scalars
p0 = [0; 0]; %initial guesses for p1, p2
pf = [1000; 1000]; %target values for pf
f2 = @(p) f(p(1),p(2))-pf; %function to minimize
psolve = lsqnonlin(f2, p0); %this should solve for p1 and p2
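A self-contained version of the above, with a made-up f standing in for the poster's black-box function so the call can actually be run:

```matlab
% Toy stand-in for the black-box function f(p1,p2).
f  = @(p1,p2) [p1.^2 + p2; p1 + p2.^2];
pf = [1000; 1000];                 % target values
f2 = @(p) f(p(1), p(2)) - pf;      % residual driven to zero
p0 = [1; 1];                       % initial guess for [p1; p2]
psolve = lsqnonlin(f2, p0);
disp(f(psolve(1), psolve(2)))      % should be close to [1000; 1000]
```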
xueqi on 31 Jul 2013
Hi, thanks. The solution always gets stuck at the start point. If I plot pf(1) and pf(2) as functions of p1 with p2 fixed at some value, the figure looks like this. Do you know a similar routine that can handle non-smooth vector functions?
It appears the output is not monotonic? Is it certain there is only one solution?
You could also try fsolve() on @(p) sum(f2(p)), where f2 is as Matt Kindig shows.
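Note that fsolve() also accepts the vector residual directly and solves the system component-wise, so the sum() is not strictly needed. A sketch with a made-up f standing in for the black-box function:

```matlab
f  = @(p1,p2) [p1.^2 + p2; p1 + p2.^2];   % toy stand-in for f
f2 = @(p) f(p(1), p(2)) - [1000; 1000];   % vector residual
psolve = fsolve(f2, [1; 1]);              % solves f2(p) = 0 component-wise
```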
xueqi on 31 Jul 2013
No, it is not monotonic, but I will set the ranges of p1 and p2 to keep the output in the monotonic region. Why fsolve the sum of f2? It seems that minimizing the sum of the squares of f2 is what I want. But anyway, I tried it and it also got stuck at the initial point:
fsolve stopped because the relative norm of the current step, 5.704218e-08, is less than options.TolX = 1.000000e-06. The sum of squared function values, r = 3.738036e-04, is less than sqrt(options.TolFun) = 1.000000e-03.

Optimization Metric              Options
relative norm(step) = 5.70e-08   TolX = 1e-06 (default)
r = 3.74e-04                     sqrt(TolFun) = 1.0e-03 (default)
You are right, sum() could produce accidental zeros. The difficulty with sum(f2(p).^2) is that it will not have zero crossings, just a single zero (round-off permitting); if that is acceptable, then fminbnd() would likely be a better choice than fsolve().
As I see it, I think the problem is that the function is far from smooth; it has numerous local minima. Probably that is why it gets stuck at the initial point... So I think we should look for some way around this problem. How does fminbnd work better in this respect?
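A common way around getting stuck in a local minimum of a non-smooth function is to restart the local solver from many initial points and keep the best result (the Global Optimization Toolbox's MultiStart automates this). A minimal sketch, with a made-up f2 standing in for the real residual and an assumed search box for [p1; p2]:

```matlab
% Made-up residual standing in for the poster's black-box function.
f2 = @(p) [p(1).^2 + p(2); p(1) + p(2).^2] - [1000; 1000];

lb = [0; 0];  ub = [100; 100];          % assumed bounds on p1, p2
best = struct('p', [], 'res', Inf);
for k = 1:20
    p0 = lb + rand(2,1).*(ub - lb);     % random start inside the box
    [p, resnorm] = lsqnonlin(f2, p0, lb, ub);
    if resnorm < best.res               % keep the smallest residual so far
        best.p = p;  best.res = resnorm;
    end
end
disp(best.p)                            % best solution found across restarts
```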


Asked on 31 Jul 2013
