What is the most computationally efficient factorization of a matrix A?

I have an exam in 24 hours for my Linear Algebra and Geometry course, and there is a section with MATLAB-related questions. In a practice test, I was asked to "solve the linear system Ax=b by the solution of two triangular systems, using the most computationally efficient factorization of a matrix A". What does this mean? What is "the solution of two triangular systems"? How would I choose the most computationally efficient factorization of a matrix?
The question I am talking about is the following, word for word:
Let Ax = b, a linear system of order 18, where A is a symmetric, tridiagonal matrix with all elements equal to 6 on the main diagonal and to 3 on the superior and inferior codiagonals. The elements of b are linearly spaced numbers in [8,11]. Compute the eigenvalues of A and, based on their properties, solve the linear system Ax = b by the solution of two triangular systems, using the most computationally efficient factorization of A. The 1-norm of the vector obtained by summing the vector that is the solution of the inferior triangular system and the one that is the solution of the superior triangular system is, approximately:
A) 8.4419e+01
B) 3.4638e+00
C) 7.5558e-01
D) 9.5155e-01
E) 6.5559e+01
Now, we went over all this with a friend and even asked ChatGPT to give us an idea. In our curriculum, the main factorizations we learned about are PA=LU, Cholesky, SVD, and QR. We thought that, since all the eigenvalues are positive and the matrix is therefore positive definite, Cholesky should be used, but we aren't really sure. Why would the others be less efficient? With the code ChatGPT gave us, tweaked a little here and there, we got the right answer using Cholesky, but we really aren't sure we understand the wording of the question and the systems at play. Thank you in advance for an answer to such a long question. Here is the code in question:
clear, clc
n = 18;
% Symmetric tridiagonal matrix: 6 on the main diagonal, 3 on the sub- and superdiagonals
A = eye(n)*6 + diag(ones(n-1,1), 1)*3 + diag(ones(n-1,1), -1)*3;
b = linspace(8,11,n);   % 18 linearly spaced values in [8,11] (row vector)
eigens = eig(A)         % all eigenvalues are positive, so A is positive definite
eigens = 18x1
0.0818 0.3251 0.7232 1.2652 1.9363 2.7183 3.5898 4.5271 5.5045 6.4955
% A is symmetric positive definite, so it has a Cholesky factorization A = R'*R
R = chol(A);
disp("A = R'*R, so solve R'*Y = b (lower triangular), then R*x = Y (upper triangular)")
A = R'*R, so solve R'*Y = b (lower triangular), then R*x = Y (upper triangular)
Y = R'\b';            % forward substitution (lower triangular system)
x = R\Y;              % back substitution (upper triangular system)
finalVector = x + Y;  % sum of the two triangular-system solutions
norm(finalVector, 1)
ans = 65.5595

Answers (3)

Christine Tobler on 1 Jul 2024
Your solution is correct.
Of the factorizations you know (PA=LU, Cholesky, SVD and QR), only LU and Cholesky involve solving two triangular systems, and LU additionally requires a permutation of the vector b, making the next part of the question more complicated to apply. So that already strongly hints towards Cholesky being the right choice.
Cholesky also happens to be the cheapest of these four factorizations to compute. This can be seen by counting the operations needed for each factorization (roughly n^3/3 flops for Cholesky versus 2n^3/3 for LU, and more still for QR and the SVD); if it's a question in your exam, I would assume these operation counts are mentioned somewhere in your lecture notes.
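The difference also shows up in practice. A minimal sketch that times the four factorizations on a larger symmetric positive definite test matrix (wall-clock timings rather than flop counts, and the matrix size is an arbitrary choice):
n = 2000;
B = randn(n);
S = B*B' + n*eye(n);                 % symmetric positive definite by construction

% Median wall-clock time of each factorization (illustrative; varies by machine)
tChol = timeit(@() chol(S));
tLU   = timeit(@() lu(S));
tQR   = timeit(@() qr(S));
tSVD  = timeit(@() svd(S));

fprintf('chol %.3fs, lu %.3fs, qr %.3fs, svd %.3fs\n', tChol, tLU, tQR, tSVD);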
Small aside, not relevant to the multiple-choice question:
Computing the eigenvalues is more expensive than the Cholesky or the PA=LU decomposition. Instead, it's sufficient to just check that the input is symmetric (a cheap check using issymmetric) and then call chol on it. If the matrix doesn't have all positive eigenvalues, the chol command will error out:
R = chol(zeros(4))
Error using chol
Matrix must be positive definite.
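A minimal sketch of that check, assuming the matrix is stored in a variable A: chol's optional second output reports failure instead of erroring, which avoids wrapping the call in try/catch.
% Cheap symmetry check, then let chol decide positive definiteness.
% The second output p is 0 exactly when the factorization succeeds.
if issymmetric(A)
    [R, p] = chol(A);
    if p == 0
        disp('A is symmetric positive definite, A == R''*R up to roundoff')
    else
        disp('A is symmetric but not positive definite')
    end
else
    disp('A is not symmetric')
end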

Umar on 30 Jun 2024
Hi Eyup,
It seems like you're struggling with understanding the concept of solving the linear system Ax=b by the solution of two triangular systems and choosing the most computationally efficient factorization of a matrix A. Let me break it down for you.
When you're asked to solve the linear system Ax=b by the solution of two triangular systems, it means that you need to decompose matrix A into two triangular matrices (lower and upper) and then solve two separate linear systems using these triangular matrices. This method is often used to simplify complex matrix equations and make them easier to solve.
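A minimal sketch of that pattern, using an LU factorization on a small made-up matrix (the matrix and factorization here are just for illustration; the code in the question does the same thing with the two Cholesky factors):
% Factor A once, then solve two triangular systems instead of A\b directly.
A = rand(4) + 4*eye(4);       % small, almost surely nonsingular test matrix
b = (1:4)';

[L, U, P] = lu(A);            % P*A = L*U, L lower triangular, U upper triangular
y = L \ (P*b);                % forward substitution: L*y = P*b
x = U \ y;                    % back substitution:    U*x = y

norm(A*x - b)                 % residual, should be near machine precision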
Now, as for choosing the most computationally efficient factorization of a matrix A, it depends on the properties of the matrix and the specific requirements of your problem. In your case, since all eigenvalues are positive and A is symmetric, using Cholesky factorization would indeed be a good choice. Cholesky factorization is specifically designed for symmetric positive definite matrices, making it more efficient in such cases.
As for why other factorizations like PA=LU, SVD, or QR might be less efficient in this scenario, it comes down to which classes of matrices they target and to their computational cost: QR and especially the SVD require considerably more operations than Cholesky, so for a symmetric positive definite system they do extra work without any benefit.
The code you've provided seems to implement Cholesky factorization correctly and gives you the right answer.

Torsten on 30 Jun 2024
Edited: Torsten on 30 Jun 2024
Prove that the matrix is positive definite and count the number of FLOPs of Cholesky compared to the other factorizations.
Section "Computation" under
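One way to do the first part without computing eig at all: for a symmetric tridiagonal Toeplitz matrix with a on the diagonal and b on the codiagonals, the eigenvalues are known in closed form, lambda_k = a + 2*b*cos(k*pi/(n+1)). With a = 6 and b = 3 every eigenvalue lies strictly between 0 and 12, so the matrix is positive definite. A small sketch checking that formula against eig (the formula is standard, not taken from this answer):
% Closed-form eigenvalues of the symmetric tridiagonal Toeplitz matrix
% with a = 6 on the diagonal and b = 3 on the codiagonals:
%   lambda_k = a + 2*b*cos(k*pi/(n+1)),  k = 1..n   (all strictly positive here)
n = 18; a = 6; b = 3;
k = (1:n)';
lambdaFormula = a + 2*b*cos(k*pi/(n+1));

A = diag(a*ones(n,1)) + diag(b*ones(n-1,1),1) + diag(b*ones(n-1,1),-1);
max(abs(sort(lambdaFormula) - sort(eig(A))))   % agreement up to roundoff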
