Inconsistent answers from different computers using eig() function

I've been having issues lately with an inconsistency in answers from MATLAB across two different computers.
I have two sparse, complex double-precision matrices (1800x1800 each) that are diagonally dominant. The command I'm running is
[vec,val]=eig(double(M1),double(-M2),'qz');
where M1 and M2 are the two matrices previously mentioned. A single computer gets the same answer every time, but the answers differ between computers.
I tried a simple test using the following code:
A = rand(5) + 1i*rand(5);
B = rand(5) + 1i*rand(5);
[vec, val] = eig(double(A), double(-B), 'qz');
and both computers actually get the same answer. The obvious difference here is that the entries of A and B are all of approximately the same order of magnitude, and the matrices are dense, unlike M1 and M2.
I was wondering if there is anything specific I should be looking for in M1 and M2 to determine what is causing the different answers. Are there any ways I can make these answers match?
Further info:
  • Both computers are running 64-bit Linux MATLAB R2012a (glnxa64) (7.14.0.739)
  • Both computers are running Ubuntu 10.04
  • Computer 1 is running: 2 Intel® Xeon® CPU X5482 @ 3.20GHz (8 cores total)
  • Computer 2 is running: 2 Intel® Xeon® CPU X5670 @ 2.93GHz (12 cores total, 24 threads)
Here is a dropbox link to the M1 and M2 matrix in question https://www.dropbox.com/s/zwq6zwfkl3y3ha5/Matrices.mat

2 Comments

How different are the answers? Have you tried running single-threaded?
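For reference, one way to try the single-threaded run might look like this (a sketch; the eig call is copied from the question, and maxNumCompThreads returns the previous thread count so it can be restored):

```matlab
% Force a single computational thread for the comparison run.
nOld = maxNumCompThreads(1);
[vec, val] = eig(double(M1), double(-M2), 'qz');
maxNumCompThreads(nOld);   % restore the previous thread setting
```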
I just looked at your matrices. They're stored as dense matrices, and M2 is singular (which means that you have infinite eigenvalues). Also, you're not taking advantage of sparsity at all ... normally eigs is what you'd use.
Are you sure you're computing what you intend to compute?
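A sketch of the sparse route, for what it's worth (k is an illustrative choice, and note that in this form eigs requires the second matrix to be Hermitian positive definite, which a singular M2 is not, so the matrices would need reworking first):

```matlab
% Keep the matrices in sparse storage and request only a few
% eigenvalues of the generalized problem instead of all 1800.
S1 = sparse(M1);
S2 = sparse(-M2);            % eigs needs this to be Hermitian pos. def.
k  = 6;                      % number of eigenvalues wanted (illustrative)
[vec, val] = eigs(S1, S2, k, 'sm');   % 'sm' = smallest magnitude
```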


 Accepted Answer

I tried running single-threaded, and although the numbers do change slightly, they still do not match, and the error is of the same order.
Comparing the eigenvalues (sorted of course), the largest (relative) errors I am seeing are on the order of 10e-12. Converting my eigenvalues to single precision makes most of them match (a few of them round differently on the last digit).
However, the relative error for the eigenvectors is much larger; many components differ by a relative error of order 100.
I'd like to keep the double precision, but if there is a way to make the eigenvectors match up in single precision then this will suffice.
I have not yet normalized the matrices, but I'll try this and get back to you.
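The eigenvalue comparison described above can be sketched as follows, assuming val1 and val2 are the diagonal eig outputs collected from the two machines (those variable names are mine, not from the original runs):

```matlab
% Sort the eigenvalues from each machine and compare them pairwise.
e1 = sort(diag(val1));            % MATLAB sorts complex values by
e2 = sort(diag(val2));            % magnitude, then by phase angle
relerr = abs(e1 - e2) ./ abs(e1); % element-wise relative error
maxRelErr = max(relerr)           % largest relative difference
```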

4 Comments

So I attempted to normalize the matrices, but alas the determinants are zero. I tried the balance function (with and without 'noperm'), but this seems to have made the problem worse. As mentioned in the balance documentation, if a matrix is badly scaled, balancing can make things worse because very small entries become significant.
So I've made another test case, and this truly baffles me. I wrote a quick script to generate two random, sparse, poorly scaled matrices:
clear all; clc;
% Two random complex matrices with entries spanning roughly
% 20 orders of magnitude (badly scaled on purpose)
A = rand(10,10).*10.^(-10+20.*rand(10,10)) + 1i*rand(10,10).*10.^(-10+20.*rand(10,10));
B = rand(10,10).*10.^(-10+20.*rand(10,10)) + 1i*rand(10,10).*10.^(-10+20.*rand(10,10));
% Zero out roughly half the entries to mimic sparsity
elim = round(rand(10,10));
A = A.*elim;
elim = round(rand(10,10));
B = B.*elim;
save AB.mat A B
Loading the A and B matrix onto both computers and running
[vec, val]=eig(A,-B,'qz')
produces IDENTICAL answers for all eigenvalues and eigenvectors, both single-threaded and multi-threaded.
Side note: according to the cond() function, the generated A and B matrices are usually much worse conditioned than my original M1 matrix, and cond(M2) returns Inf. Other than this being really bad, I'm not exactly sure what an infinite condition number means, nor do I know how to fix it. Could this be what is causing the problem?
A and B aren't sparse, or large ...
Are you surprised that the accuracy of the computed eigenvectors suffers when the input matrices have an infinite condition number?
You cannot "fix" a high condition number, just as you cannot "fix" the fact that a/b has no valid result when b is 0. A similar problem occurs for the difference quotient:
d = (f(x+h)-f(x)) / h
When h gets very small, the round-off errors in the calculation of f() are strongly amplified by the division by a small number. Therefore it is impossible to obtain an arbitrarily accurate approximation of the derivative by this method. But you can usually find a magnitude of h such that half the digits are correct - in other words: when the calculations are performed in double precision, the result is correct in single precision.
So I currently assume that this is not a bug but an effect of the limited precision.
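This can be seen with a small experiment (sin is just an example function here; its exact derivative cos lets us measure the error directly):

```matlab
% Forward-difference approximation of d/dx sin(x) at x = 1.
% Truncation error shrinks with h, but round-off error grows like 1/h,
% so the total error is smallest at an intermediate h (around 1e-8).
x = 1;
exact = cos(x);
for h = 10.^(-2:-2:-14)
    d = (sin(x + h) - sin(x)) / h;
    fprintf('h = %6.0e   relative error = %8.1e\n', h, abs(d - exact)/abs(exact));
end
```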
Ok that makes sense. I guess I had not realized how badly conditioned my input matrices were before I posted the question. I didn't create these matrices, so I'll have to go back and see if there is a different way to structure them. At least now I understand where the errors are coming from and why we are losing accuracy, so I'm going to consider this question answered. Thanks for all of your help!


More Answers (1)

When the two machines use a different number of threads, the results are expected to differ by rounding effects (which I would not call rounding "errors" here). So please show us the magnitude of the differences.
If the (relative) differences are large, this could be caused by the condition of the eigen-value problem.
Notice that for eig(A, B) the inputs are not normalized, so normalizing A and B externally might improve the stability; see doc eig.
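A sketch of what such external normalization might look like (the Frobenius norm is one arbitrary choice of scale):

```matlab
% Scale both matrices to unit norm before the QZ step, then undo the
% scaling: (M1/a)*x = mu*(-M2/b)*x  implies  lambda = mu*a/b.
a = norm(M1, 'fro');
b = norm(M2, 'fro');
[vec, val] = eig(M1/a, -M2/b, 'qz');
val = val * (a/b);            % restore the original eigenvalue scale
```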


Asked: 31 Jul 2013
