matlab - My example shows SVD is less numerically stable than QR decomposition


I asked this question on Math StackExchange, but it didn't seem to get enough attention there, so I'm asking it here: https://math.stackexchange.com/questions/1729946/why-do-we-say-svd-can-handle-singular-matrx-when-doing-least-square-comparison?noredirect=1#comment3530971_1729946

I learned from tutorials that SVD should be more stable than QR decomposition when solving least-squares problems, and that it is able to handle singular matrices. But the following example that I wrote in MATLAB seems to support the opposite conclusion. I don't have a deep understanding of SVD, so if you could look at the questions in my old post on Math StackExchange and explain the discrepancy to me, I would appreciate it a lot.

I use a matrix that has a large condition number (~1e+13). The result shows that SVD gives a much larger error (0.8) than QR (~1e-27).

% linear regression between y and x
data = [
    47.667483331    -122.1070832;
    47.667483331001 -122.1070832
];
x = data(:,1);
y = data(:,2);
x_1 = [ones(length(x),1), x];

%% SVD method
[u,d,v] = svd(x_1,'econ');
beta_svd = v * diag(1./diag(d)) * u' * y;

%% QR method (here one can use the "\" operator, which gives the same result as I
% tested. I wrote out the backward substitution to educate myself)
[q,r] = qr(x_1);
% now the backward substitution
[nr, nc] = size(r);
beta_qr = zeros(nc,1);
y_1 = q' * y;
for i = nc:-1:1
    s = y_1(i);
    for j = nc:-1:i+1
        s = s - r(i,j) * beta_qr(j);
    end
    beta_qr(i) = s / r(i,i);
end

% compare the sum of squared residuals of the two fits
svd_error = 0;
qr_error = 0;
for i = 1:length(x)
    svd_error = svd_error + (y(i) - beta_svd(1) - beta_svd(2) * x(i))^2;
    qr_error  = qr_error  + (y(i) - beta_qr(1)  - beta_qr(2)  * x(i))^2;
end
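As a side note, one way to see where the instability in the SVD solve comes from is to inspect the singular values themselves. The following sketch (my addition, reusing x_1 and d from the code above) shows that one singular value is tiny relative to the other, so 1./diag(d) divides by a near-zero number:

sv = diag(d)              % the smallest singular value is near machine precision
cond_est = sv(1)/sv(end)  % enormous ratio (the large condition number mentioned
                          % above); inverting the tiny singular value amplifies
                          % rounding error in the raw SVD solve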

Your SVD-based approach is essentially the same as the pinv function in MATLAB (see Pseudo-inverse and SVD). What you are missing, though (for numerical reasons), is a tolerance value such that singular values smaller than this tolerance are treated as zero.

If you look at edit pinv.m, you can see something like the following (I won't post the exact code here because the file is copyrighted by MathWorks):

[u,s,v] = svd(a,'econ');
s = diag(s);
tol = max(size(a)) * eps(norm(s,inf));
% .. use the above tolerance to truncate small singular values
invs = diag(1./s);
out = v*invs*u';
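For completeness, the truncation step hinted at by the comment above might look like the following sketch (my paraphrase of the idea, not MathWorks' actual code; it reuses u, s, v, and tol from the snippet):

k = sum(s > tol);          % numerical rank: singular values above the tolerance
invs = zeros(size(s));
invs(1:k) = 1./s(1:k);     % invert only the "large" singular values, zero the rest
out = v*diag(invs)*u';     % truncated pseudo-inverse of a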

In fact, pinv has a second syntax where you can explicitly specify the tolerance value, pinv(a,tol), in case the default one is not suitable...
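Applied to the data from the question (x_1 and y as defined there), the default tolerance already takes care of the tiny singular value; here is a quick sketch (beta_pinv and resid are my names):

beta_pinv = pinv(x_1)*y;           % pinv zeroes out singular values below its tolerance
resid = norm(y - x_1*beta_pinv)    % small residual, unlike the un-truncated SVD solve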


So when solving a least-squares problem of the form minimize norm(a*x-b), you should understand that the pinv and mldivide solutions have different properties:

  • x = pinv(a)*b is characterized by the fact that norm(x) is smaller than the norm of any other solution.
  • x = a\b has the fewest possible nonzero components (i.e. it is sparse).

Using your example (note that rcond(a) is very small, near machine epsilon):

data = [
    47.667483331    -122.1070832;
    47.667483331001 -122.1070832
];
a = [ones(size(data,1),1), data(:,1)];
b = data(:,2);

Let's compare the two solutions:

x1 = a\b;
x2 = pinv(a)*b;

First you can see how mldivide returns a solution x1 with one zero component (this is a valid solution, because you can solve both equations by multiplying by zero, as in b + a*0 = b):

>> sol = [x1 x2]
sol =
 -122.1071    -0.0537
         0    -2.5605

Next you can see how pinv returns a solution x2 with a smaller norm:

>> nrm = [norm(x1) norm(x2)]
nrm =
  122.1071     2.5611

Here the error of both solutions is acceptably small:

>> err = [norm(a*x1-b) norm(a*x2-b)]
err =
   1.0e-11 *
         0     0.1819

Note that using mldivide, linsolve, or qr will give pretty much the same results:

>> x3 = linsolve(a,b)
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 2.159326e-16.
x3 =
 -122.1071
         0

>> [q,r] = qr(a); x4 = r\(q'*b)
x4 =
 -122.1071
         0
