Is there any way to accelerate solving a series of large sparse positive definite linear equations "Ax=b" with the same "A" and different "b"?

I am trying to solve a series of linear equations Ax=b, where A is a large sparse positive definite matrix of size n-by-n and b is an n-by-1 vector. Among these equations, the matrix "A" is the same while the vectors "b" are different. They both come from the finite element method (e.g. the same geometry with different loadings in structural mechanics). So far I have applied the PCG (preconditioned conjugate gradients) iterative method with an incomplete Cholesky preconditioner to solve each of them separately. I would like to know whether I can exploit the fact that "A" is shared to accelerate the solves.
Some details to note:
1. At first, I thought I had better compute the inverse of "A" directly; then I could calculate all the solutions of these equations easily. But I found it is impractical to compute the inverse of a large sparse positive definite matrix: I learned that it is very expensive and less precise (and the inverse is generally dense).
2. The only way I can use now is to run the PCG iterative method with an incomplete Cholesky preconditioner in parallel. It takes a lot of RAM.
3. I could make a preconditioner with a smaller "droptol" for matrix "A". All the PCG runs would converge faster with it, as below, but this step is very slow.
tol = 1e-6;  % drop tolerance: smaller gives a denser, higher-quality factor
L1 = ichol(A, struct('type','ict','droptol',tol));  % incomplete Cholesky, A is approximated by L1*L1'
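For reference, each system is then solved roughly like this (a sketch; tol_pcg and maxit are illustrative values, not from the original post):
tol_pcg = 1e-6;                           % PCG stopping tolerance (illustrative)
maxit = 500;                              % iteration cap (illustrative)
x = pcg(A, b, tol_pcg, maxit, L1, L1');   % PCG preconditioned with M = L1*L1'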
Any suggestion would be appreciated. Thank you.

Accepted Answer

Bruno Luong on 3 Aug 2020
Edited: Bruno Luong on 3 Aug 2020
Two things come to my mind:
Why not build all the b vectors together and then do a single solve:
b = [b1 b2 ... bp]
x = A \ b
This will use a direct method with a single decomposition of the matrix, so it should help.
You might also apply a reordering technique before the decomposition/preconditioning.
Preconditioning is like inverting the matrix by a direct method, so you need to find a compromise between cost and quality to get the best overall runtime.
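A minimal sketch of the factor-once/solve-many idea with an explicit reordering (dissect is just one choice of fill-reducing ordering; amd or symamd would also work, and b1, b2, b3 stand for the individual right-hand sides):
p = dissect(A);              % fill-reducing nested-dissection permutation
R = chol(A(p,p));            % single sparse Cholesky factorization, A(p,p) = R'*R
B = [b1 b2 b3];              % collect the right-hand sides as columns
X = zeros(size(B));
X(p,:) = R \ (R' \ B(p,:));  % two triangular solves per right-hand side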
  2 Comments
wei zhang on 4 Aug 2020
It is much faster when I build the b vectors together!
In a case where size(A) is 117338-by-117338 and b is 117338-by-1000, it takes only 28 s on my slow PC, while the parallel PCG approach takes 800+ s. This far exceeds my expectations! I will post timing profiles for other sizes of A later.
May I ask which one is more accurate, the direct method you suggested or PCG? I don't know much about their accuracy. I only know PCG's accuracy is controlled by the input tolerance. How about the direct method? Should I worry about it?
Bruno Luong on 4 Aug 2020
The direct method is usually more accurate. In theory PCG converges fully only when the number of iterations equals the dimension of the matrix; in practice it stops well before that.


More Answers (2)

Christine Tobler on 3 Aug 2020
If you are able to solve for one vector using A \ b, you could pass in a matrix containing all your right-hand sides instead: A \ [b1 b2 ... bn]. Even if this is slower than PCG for an individual right-hand-side vector, it may well be faster for a large number of them: in A \ b, a large precomputation (the Cholesky factorization of A) is needed, and it can be reused for all vectors b1, ..., bn. You could also take a look at the decomposition object, which stores that precomputation in an object that can then solve the system for individual right-hand sides.
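A minimal sketch of the decomposition-object approach (the 'chol' option asserts that A is symmetric positive definite; b1 and b2 stand for your individual right-hand sides):
dA = decomposition(A, 'chol');  % factor once; the Cholesky factorization is stored in dA
x1 = dA \ b1;                   % each backslash reuses the stored factorization
x2 = dA \ b2;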
If it's not practical to call A \ b, calling many instances of PCG in parallel sounds like the best approach. For FEM-based matrices, it is sometimes also possible to construct a preconditioner by building the same matrix for a coarser mesh and solving with that coarser-mesh matrix, combined with interpolation between the finer and coarser meshes - but this of course depends a lot on the mesh being used.
  3 Comments
wei zhang on 4 Aug 2020
Thank you for your advice. I still have some doubts. Is this coarse/fine-grid approach called the algebraic multigrid method (AMG)? Could all the equations (with the same "A") share the same AMG preconditioner? Then running AMG+PCG in parallel seems like a good idea?
Christine Tobler on 5 Aug 2020
Yes, all equations with the same "A" can share the same preconditioner. The algebraic multigrid method (AMG) is a variant of multigrid methods that mimics their behavior when only a sparse matrix is given, rather than the whole underlying FEM problem being known. So if you know the underlying problem, it may be better to apply a standard (geometric) multigrid method directly.
Multigrid methods can be seen as preconditioners to be used inside PCG, but they are more usually applied directly to solve a FEM problem.
Another idea (possibly higher effort): you could take the pcg.m function in MATLAB and rewrite it so that wherever it currently works on one vector, it applies to many vectors in one go. That might be more efficient, as it would perform batches of operations together - but the stopping condition of pcg would have to be modified to stop only when the equation has been solved for all right-hand sides, as in the sketch below.
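A minimal sketch of that batched rewrite (pcg_multi is a hypothetical name; it runs one independent PCG recurrence per column, in lockstep, and stops only when every column has converged):
function X = pcg_multi(A, B, M1, M2, tol, maxit)
% Batched PCG for A*X = B with multiple right-hand-side columns B.
% M1, M2 are the preconditioner factors, e.g. L1 and L1' from ichol.
X = zeros(size(B));
R = B;                               % residuals (X starts at zero)
Z = M2 \ (M1 \ R);                   % preconditioned residuals
P = Z;
rz = sum(R.*Z, 1);                   % one inner product per column
nb = vecnorm(B);                     % per-column norms of B
for k = 1:maxit
    Q = A*P;
    alpha = rz ./ sum(P.*Q, 1);      % per-column step lengths
    X = X + P.*alpha;                % implicit expansion over columns
    R = R - Q.*alpha;
    if all(vecnorm(R) <= tol*nb)     % stop only when ALL columns converged
        break
    end
    Z = M2 \ (M1 \ R);
    rznew = sum(R.*Z, 1);
    beta = rznew ./ rz;
    P = Z + P.*beta;
    rz = rznew;
end
end
It could be called with the incomplete Cholesky factor from the question, e.g. X = pcg_multi(A, B, L1, L1', 1e-6, 500).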



Vladimir Sovkov on 3 Aug 2020
It depends...
Besides inv(A), you can try A\eye(n) or pinv(A); all of them are equivalent for a well-conditioned A but differ for a badly conditioned A because of the different algorithms involved (though pinv does not support the sparse matrix format).
You can try other decompositions in place of the Cholesky one. In my experience, the SVD (for sparse matrices use "svds") is often efficient, and with it you can easily make your own version of pinv.
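A minimal sketch of such an svds-based pseudoinverse solve (the truncation rank k is an illustrative value and must be chosen large enough to capture the action of A for your problem):
k = 500;                 % illustrative truncation rank
[U, S, V] = svds(A, k);  % k largest singular triplets of A
x = V * (S \ (U' * b));  % x approximates pinv(A)*b on the computed subspace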
The MATLAB documentation suggests that lsqminnorm is efficient for sparse matrices.
