Section 7.5 QR decomposition by Householder reflectors
Another algorithm for determining the \(QR\) decomposition of a matrix, one which requires fewer operations and is less susceptible to rounding error, is the method of Householder reflectors. The method depends upon the following theorem.
Theorem: if \(\vec{x},\vec{w}\in\mathbb{R}^n\) satisfy \(\|\vec{x}\|=\|\vec{w}\|\text{,}\) then there is a symmetric orthogonal matrix \(H\) (a Householder reflector) such that \(H\vec{x}=\vec{w}\text{.}\)

Given two such vectors \(\vec{x},\vec{w}\in\mathbb{R}^n\text{,}\) define \(\vec{v}=\vec{w}-\vec{x}\text{,}\) and consider the matrix
\[P = \frac{\vec{v}\vec{v}^T}{\vec{v}^T\vec{v}}\text{.}\]
While it may be confusing to think that this is a matrix, recall: since \(\vec{v}\) is an \(n\times 1\) column matrix, \(\vec{v}\vec{v}^T\) is an \(n\times n\) symmetric matrix. In fact,
\[P^T = P\]
and
\[P^2 = \frac{\vec{v}(\vec{v}^T\vec{v})\vec{v}^T}{(\vec{v}^T\vec{v})^2} = P\text{.}\]
A matrix satisfying \(P^2=P\) is called a projection matrix. The purpose of using \(P\) is that \(\vec{x}-P\vec{x}\) is the projection of \(\vec{x}\) onto \(\vec{w}+\vec{x}\text{,}\) and therefore \(\vec{x}-2P\vec{x} = \vec{w}\text{!}\) In other words, the Householder reflector \(H = I - 2P\) is the symmetric orthogonal matrix promised by the theorem: \(H\vec{x} = \vec{w}\text{.}\)
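To make the construction concrete, here is a minimal NumPy sketch (the function name `householder_reflector` is my own, not part of any library) that builds \(P\) and \(H=I-2P\) from \(\vec{x}\) and \(\vec{w}\) and checks that \(H\vec{x}=\vec{w}\text{:}\)

```python
import numpy as np

def householder_reflector(x, w):
    """Return H = I - 2 v v^T / (v^T v) with v = w - x, so that H @ x == w.

    Assumes ||x|| == ||w|| (otherwise no reflector maps x to w).
    """
    v = w - x
    P = np.outer(v, v) / np.dot(v, v)   # projection matrix built from v
    return np.eye(len(x)) - 2 * P

x = np.array([3.0, 4.0])
w = np.array([-5.0, 0.0])               # same length as x
H = householder_reflector(x, w)
print(H @ x)                            # ≈ [-5.  0.]
print(H @ H)                            # ≈ I, since H is symmetric orthogonal
```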
For brevity, I'll refer to the \(QR\) decomposition by Householder reflectors as HHQR. The process of HHQR for a matrix \(A\) iterates through the columns of \(A\) just like Gram-Schmidt, but with far less numerical instability. We'll explain the process without an accompanying worked example, as the computations become extremely unwieldy in exact arithmetic.
Suppose we begin with an \(m\times n\) matrix \(A = [a_{i,j}]\text{,}\) where \(m>n\text{.}\) (The discussion of the method contained here originally appears in Timothy Sauer's excellent Numerical Analysis, 2nd edition.) We will choose as our first vector \(\vec{x}_1 = (a_{1,1},a_{2,1},\ldots, a_{m,1})^T\text{,}\) the first column of \(A\text{.}\) Since we desire to reflect it onto a coordinate-axis vector of the same magnitude, we can choose \(\vec{w}_1=(\pm\|\vec{x}_1\|,0,0,\ldots,0)^T\text{.}\) Since subtracting nearly identical floating point numbers is problematic and leads to rounding error, we will choose the sign of the first coordinate of \(\vec{w}_1\) to be the opposite of the sign of the first coordinate of \(\vec{x}_1\text{.}\) This allows us to determine the first Householder reflector \(H_1\) such that \(H_1\vec{x}_1 = \vec{w}_1\text{.}\) Now, putting \(R_1 = H_1A\) gives us a matrix whose first column is \(\vec{w}_1\text{,}\) by construction; after this first iteration, the first column of \(R_1\) is the start of an upper triangular matrix. Specifically, if we write \(R_1=[r_{i,j}]\text{,}\) we have \(r_{i,1} = 0\) whenever \(i>1\text{.}\)
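A short NumPy sketch of this first step, under the assumptions above (`first_householder_step` is a hypothetical helper name, not standard library code):

```python
import numpy as np

def first_householder_step(A):
    """Apply one Householder reflector to zero out A's first column below the diagonal."""
    x = A[:, 0].astype(float)
    w = np.zeros_like(x)
    # Choose the sign opposite to x[0] to avoid cancellation error.
    w[0] = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else np.linalg.norm(x)
    v = w - x
    H = np.eye(len(x)) - 2 * np.outer(v, v) / np.dot(v, v)
    return H, H @ A   # R1 = H1 @ A has zeros below the (1,1) entry

A = np.array([[3.0, 1.0], [4.0, 3.0]])
H1, R1 = first_householder_step(A)
print(R1)   # first column ≈ [-5, 0]
```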
In order to continue this process, we will produce from \(R_1\) a new matrix \(R_2=[r_{i,j}]\) which has \(r_{i,j}=0\) whenever \(i>j\) and \(j\leq 2\text{.}\) We do so by ignoring the first row and column of \(R_1\text{;}\) doing so gives us \(R'_1\text{,}\) an \((m-1)\times(n-1)\) matrix. If we let \(\vec{x}_2\) be the first column of \(R'_1\) and \(\vec{w}_2=(\pm\|\vec{x}_2\|,0,0,\ldots,0)^T\text{,}\) we can find the Householder reflector \(\hat{H}_2\) such that \(\hat{H}_2\vec{x}_2=\vec{w}_2\text{.}\) Say then that \(\hat{H}_2 = [\hat{h}_{i,j}]\text{.}\) We will create our full reflection matrix \(H_2=[h_{i,j}]\) by inserting \(\hat{H}_2\) into the bottom right corner of an identity matrix:
\[h_{1,1} = 1,\qquad h_{1,j} = h_{i,1} = 0 \text{ for } i,j>1,\qquad h_{i,j} = \hat{h}_{i-1,j-1} \text{ for } i,j>1\text{,}\]
that is, in block form,
\[H_2 = \begin{pmatrix} 1 & 0 \\ 0 & \hat{H}_2 \end{pmatrix}\text{.}\]
Then \(R_2=H_2R_1=H_2H_1A\text{.}\)
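The embedding step can be sketched as follows (a NumPy fragment; `embed_reflector` is my own illustrative name):

```python
import numpy as np

def embed_reflector(H_hat, m):
    """Embed an (m-k) x (m-k) reflector into the bottom-right corner of an m x m identity."""
    k = m - H_hat.shape[0]
    H = np.eye(m)
    H[k:, k:] = H_hat
    return H

# Example: a 2x2 reflector embedded into a 3x3 identity gives a block-diagonal H2.
H_hat = np.array([[-0.6, -0.8], [-0.8, 0.6]])
print(embed_reflector(H_hat, 3))
```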
We continue this process in the way established: at each step, we reflect the first column of a submatrix onto a coordinate-axis vector of the same length, choosing the sign of the leading coordinate to avoid cancellation, and after processing \(n\) columns in this way we have \(R=H_nH_{n-1}\cdots H_2H_1A\text{,}\) which is an upper triangular matrix. Moreover, the matrices \(H_j\) are all symmetric orthogonal matrices, so \(H_j^{-1}=H_j\text{.}\) Hence we obtain
\[A = H_1H_2\cdots H_nR = QR\text{,}\qquad\text{where } Q = H_1H_2\cdots H_n\]
is an orthogonal matrix.
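Putting the pieces together, here is a compact sketch of the whole HHQR loop under the same assumptions as the fragments above (`hhqr` is a hypothetical name; production code would call `numpy.linalg.qr` or apply the reflectors implicitly rather than forming each \(H_j\) explicitly):

```python
import numpy as np

def hhqr(A):
    """QR decomposition by Householder reflectors: returns Q, R with A = Q @ R."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()
    Q = np.eye(m)
    for j in range(n):
        x = R[j:, j]
        w = np.zeros_like(x)
        # Sign opposite to x[0] avoids subtracting nearly equal numbers.
        w[0] = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else np.linalg.norm(x)
        v = w - x
        if np.dot(v, v) == 0:        # column already reduced; skip
            continue
        H_hat = np.eye(m - j) - 2 * np.outer(v, v) / np.dot(v, v)
        H = np.eye(m)
        H[j:, j:] = H_hat            # embed the reflector, as above
        R = H @ R                    # R_j = H_j @ R_{j-1}
        Q = Q @ H                    # Q = H_1 H_2 ... H_n (each H is its own inverse)
    return Q, R

A = np.array([[3.0, 1.0], [4.0, 3.0], [0.0, 2.0]])
Q, R = hhqr(A)
print(np.allclose(Q @ R, A))         # True
print(np.allclose(R, np.triu(R)))    # True: R is upper triangular
```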