In article <1992May11.070411.15451@dutrun2.tudelft.nl> rcpshdb@dutrun2.tudelft.nl (Han de Bruijn) writes:
>In article <1992May8.181301.9243@news.cs.indiana.edu> pete shirley writes:
>> [ Linear eqs. ] What is the fast way to solve this? For 2 by 2
>> I use Cramer's rule, but several people have told me for N > 2 that
>> is stupid. What is fastest?
>
>For small matrices up to and including 4 x 4, Cramer's rule is fastest.
>--
Cramer's rule is not a good idea, even for 2x2 systems. I am
attaching a posting I made some time ago which shows why. For garden
variety problems, Gaussian elimination with partial pivoting is the
algorithm of choice. If you expect ill-conditioned problems, you can
use a condition estimator or one of a number of rank-revealing
decompositions. A bibliography is appended. (Incidentally, my
bibliographic data base along with a program to search on keywords is
available by anonymous ftp from thales.cs.umd.edu in the directory
pub/references.)
Pete Stewart
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
An earlier posting, in which I noted that Cramer's rule is unstable,
drew some queries. Since the question is a hardy perennial, I am
appending a short Matlab script for people to play with. It generates
an ill-conditioned system whose exact solution is x = [1,2]', rounds
it a bit, and solves it twice--once by Cramer's rule and again by
Gaussian elimination. The printout consists of the condition number,
and the solution and its scaled residual for each algorithm.
Here is sample output (slightly edited).
conda = 5.7268e+14
xc rc xg rg
1.0164e+00 -1.5707e-03 1.0302e+00 -3.7064e-17
2.0000e+00 -4.9346e-03 1.9697e+00 0
Note that the solution by Cramer's rule (xc) is slightly more accurate
than the solution by Gaussian elimination (xg). Nonetheless, the
residuals (rc and rg) show that the solution by Cramer's rule
satisfies the equations to only three figures, whereas the solution by
Gaussian elimination satisfies them right up to the rounding unit. The
latter is a consequence of the backward stability of Gaussian
elimination, a property that Cramer's rule does not share.
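The backward-stability claim can be made precise. The scaled residual printed by the script is measured against the standard textbook bound for a backward stable solver (this bound is my gloss, not part of the original posting):

```latex
\frac{\| b - A\hat{x} \|}{\|A\|\,\|\hat{x}\|} \;\le\; c_n\, u,
```

where $\hat{x}$ is the computed solution, $u$ is the rounding unit, and $c_n$ is a modest constant depending on the order $n$. Gaussian elimination with partial pivoting satisfies a bound of this form in practice (barring pathological element growth). Cramer's rule admits no such bound, which is why its residual in the sample output is many orders of magnitude larger.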
One caveat. The above output is not entirely typical. Because a two
by two system gives little opportunity for rounding error to dig in,
the residuals for Gaussian elimination are often zero. Moreover,
Gaussian elimination sometimes produces the exact solution, as does
Cramer's rule--though less frequently.
Pete Stewart
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This is a Matlab M-file. To execute it, start up Matlab in this
% directory and type
%
% >> Cramer
%
% at the Matlab prompt.
%
% Cramer.m generates an ill-conditioned two by two system whose exact
% solution is x = [1,2]'. Then it rounds the matrix a bit, and solves
% the system twice--once by Cramer's rule and again by Gaussian
% elimination. It prints out the condition number, and the solution and
% its scaled residual for each algorithm.
format short e;
rand('normal');
fac = 1.e13; % Condition control
a = rand(2); % Get a random a with the second row equal
a(2,:) = pi*a(1,:); % to pi times the first, and fix it to
a = fix(fac*a); % log10(fac) digits.
conda = cond(a) % Print the condition number of a.
x = [1,2]'; % Form a right hand side with solution
b = a*x; % whose components are 1 and 2.
a = a/fac; % Introduce some initial rounding error
b = b/fac; % by scaling back.
% Solution and residual by Cramer's rule
d = a(1,1)*a(2,2) - a(2,1)*a(1,2);
xc = [b(1)*a(2,2)-b(2)*a(1,2), a(1,1)*b(2)-a(2,1)*b(1)]'/d;
rc = (b - a*xc)/(norm(a)*norm(xc));
% Solution and residual by Gaussian elimination
xg = a\b;
rg = (b - a*xg)/(norm(a)*norm(xg));
[xc, rc, xg, rg] % Print out the results
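For readers without Matlab, here is a rough NumPy translation of the same experiment (my own sketch, not part of the original posting; the random matrix, and hence the exact numbers, will differ from the sample output, but the qualitative behavior is the same):

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed for reproducibility
fac = 1e13                       # condition control

# Random 2x2 matrix whose second row is pi times the first,
# then truncated to roughly log10(fac) digits.
a = rng.standard_normal((2, 2))
a[1, :] = np.pi * a[0, :]
a = np.fix(fac * a)

conda = np.linalg.cond(a)        # condition number of a

x = np.array([1.0, 2.0])         # exact solution
b = a @ x
a = a / fac                      # introduce some initial rounding error
b = b / fac                      # by scaling back

# Solution and scaled residual by Cramer's rule
d = a[0, 0] * a[1, 1] - a[1, 0] * a[0, 1]
xc = np.array([b[0] * a[1, 1] - b[1] * a[0, 1],
               a[0, 0] * b[1] - a[1, 0] * b[0]]) / d
rc = (b - a @ xc) / (np.linalg.norm(a) * np.linalg.norm(xc))

# Solution and scaled residual by Gaussian elimination with
# partial pivoting (np.linalg.solve calls LAPACK)
xg = np.linalg.solve(a, b)
rg = (b - a @ xg) / (np.linalg.norm(a) * np.linalg.norm(xg))

print(f"conda = {conda:.4e}")
print(np.column_stack([xc, rc, xg, rg]))
```

As in the Matlab version, the Gaussian-elimination residual rg lands near the rounding unit while the Cramer residual rc does not.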
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibitem{bisc:90}
C.~H. Bischof.
\newblock Incremental condition estimation.
\newblock {\it SIAM Journal on Matrix Analysis and Applications}, 11:312--322,
1990.
\bibitem{biha:89}
C.~H. Bischof and P.~C. Hansen.
\newblock {\it Structure-Preserving and Rank-Revealing {QR}-Factorizations}.
\newblock Preprint~MCS-P100-0989, Mathematics and Computer Science Division,
Argonne National Laboratory, 1989.
\bibitem{bilp:89}
C.~H. Bischof, J.~G. Lewis, and D.~J. Pierce.
\newblock {\it Incremental Condition Estimation for General Matrices}.
\newblock Preprint~MCS-P106-0989, Mathematics and Computer Science Division,
Argonne National Laboratory, 1989.
\bibitem{bipl:90}
C.~H. Bischof, D.~J. Pierce, and J.~G. Lewis.
\newblock Incremental condition estimation for sparse matrices.
\newblock {\it SIAM Journal on Matrix Analysis and Applications}, 11:644--659,
1990.
\newblock Cited in {\AA ke Bj\"orck's} bibliography on least squares. Available
by anonymous ftp from math.liu.se in pub/references.
\bibitem{bishr:89}
C.~H. Bischof and G.~M. Shroff.
\newblock {\it On Updating Signal Subspaces}.
\newblock Preprint~MCS-P101-0989, Mathematics and Computer Science Division,
Argonne National Laboratory, 1989.
\bibitem{chfp:77}
S.~P. Chan, R. Feldman, and B.~N. Parlett.
\newblock Algorithm 517: a program for computing the condition numbers of
matrix eigenvalues without computing eigenvectors.
\newblock {\it ACM Transactions on Mathematical Software}, 3:186--203, 1977.
\bibitem{chan:87}
T.~F. Chan.
\newblock Rank revealing {QR}~factorizations.
\newblock {\it Linear Algebra and Its Applications}, 88/89:67--82, 1987.
\bibitem{chha:90}
T.~F. Chan and P.~C. Hansen.
\newblock Computing truncated singular value decomposition least squares
solutions by rank revealing {QR}-factorizations.
\newblock {\it SIAM Journal on Scientific and Statistical Computing},
11:519--530, 1990.
\bibitem{chha:91}
T.~F. Chan and P.~C. Hansen.
\newblock Low-rank revealing {QR}~factorizations.
\newblock 1990.
\newblock Manuscript, Department of Mathematics, University of California, Los
Angeles.
\bibitem{chha:92}
T.~F. Chan and P.~C. Hansen.
\newblock Some applications of the rank revealing {QR}~factorization.
\newblock {\it SIAM Journal on Scientific and Statistical Computing},
13:727--741, 1992.
\bibitem{clcv:82}
A.~K. Cline, A.~R. Conn, and C.~F. {Van Loan}.
\newblock Generalizing the {LINPACK} condition estimator.
\newblock In J.~P. Hennart, editor, {\it Numerical Analysis}, Springer Verlag,
1982.
\newblock Cited in {\AA ke Bj\"orck's} bibliography on least squares. Available
by anonymous ftp from math.liu.se in pub/references.
\bibitem{cmsw:79}
A.~K. Cline, C.~B. Moler, G.~W. Stewart, and J.~H. Wilkinson.
\newblock An estimate for the condition number of a matrix.
\newblock {\it SIAM Journal on Numerical Analysis}, 16:368--375, 1979.
\bibitem{goks:76}
G.~H. Golub, V. Klema, and G.~W. Stewart.
\newblock {\it Rank Degeneracy and Least Squares Problems}.
\newblock Technical Report~TR-456, Department of Computer Science, University
of Maryland, 1976.
\bibitem{gore:70}
G.~H. Golub and C. Reinsch.
\newblock Singular value decomposition and least squares solution.
\newblock {\it Numerische Mathematik}, 14:403--420, 1970.
\newblock Also in \cite[pp.134--151]{wire:71}.
\bibitem{grst:76}
W.~B. Gragg and G.~W. Stewart.
\newblock A stable variant of the secant method for solving nonlinear
equations.
\newblock {\it SIAM Journal on Numerical Analysis}, 13:880--903, 1976.
\bibitem{hage:84}
W.~W. Hager.
\newblock Condition estimators.
\newblock {\it SIAM Journal on Scientific and Statistical Computing},
5:311--316, 1984.
\newblock Cited in \cite{high:87}.
\bibitem{high:87}
N.~J. Higham.
\newblock A survey of condition number estimation for triangular matrices.
\newblock {\it SIAM Review}, 29:575--596, 1987.
\bibitem{hopa:92}
Y.~P. Hong and C.-T. Pan.
\newblock Rank-revealing {QR}~factorizations and the singular value
decomposition.
\newblock {\it Mathematics of Computation}, 58:213--232, 1992.
\bibitem{kaha:72}
W. Kahan.
\newblock {\it Conserving Confluence Curbs Ill-Conditioning}.
\newblock Technical Report~6, Computer Science Department, University of
California, Berkeley, 1972.
\bibitem{nats:83}
M. Natori and A. Tsukamoto.
\newblock A fast method for estimating the condition number of a matrix.
\newblock {\it Journal of Information Processing}, 6:138--140, 1983.
\bibitem{pipl:92}
D.~J. Pierce and R.~J. Plemmons.
\newblock Fast adaptive condition estimation.
\newblock {\it SIAM Journal on Matrix Analysis and Applications}, 13:274--291,
1992.
\bibitem{shbi:90}
G.~M. Shroff and C.~H. Bischof.
\newblock {\it Adaptive Condition Estimation for Rank-One Updates of
QR~Factorizations}.
\newblock Preprint~MCS-P166-0790, Mathematics and Computer Science Division,
Argonne National Laboratory, 1990.
\bibitem{stew:80}
G.~W. Stewart.
\newblock The efficient generation of random orthogonal matrices with an
application to condition estimators.
\newblock {\it SIAM Journal on Numerical Analysis}, 17:403--404, 1980.
\bibitem{stew:84}
G.~W. Stewart.
\newblock Rank degeneracy.
\newblock {\it SIAM Journal on Scientific and Statistical Computing},
5:403--413, 1984.
\bibitem{stew:91a}
G.~W. Stewart.
\newblock {\it Updating a Rank-Revealing {ULV}~Decomposition}.
\newblock Technical Report~CS-TR 2627, Department of Computer Science,
University of Maryland, 1991.
\bibitem{stew:90b}
G.~W. Stewart.
\newblock {\it An Updating Algorithm for Subspace Tracking}.
\newblock Technical Report~CS-TR 2494, Department of Computer Science,
University of Maryland, 1990.
\newblock To appear in IEEE Transactions on Signal Processing.
\bibitem{vdwe:79}
A. van~der~Sluis and G.~W. Veltkamp.
\newblock Restoring rank and consistency by orthogonal projection.
\newblock {\it Linear Algebra and Its Applications}, 28:257--278, 1979.
\bibitem{wire:71}
J.~H. Wilkinson and C. Reinsch.
\newblock {\it Handbook for Automatic Computation, Vol.~II: Linear Algebra}.
\newblock Springer, New York, 1971.
\bibitem{xuka:90}
G. Xu and T. Kailath.
\newblock Fast signal-subspace decomposition\,---\,part {I}: {Ideal} covariance
matrices.
\newblock 1990.
\newblock Manuscript submitted to ASSP. Information Systems Laboratory,
Stanford University.
\bibitem{xuka:90a}
G. Xu and T. Kailath.
\newblock Fast signal-subspace decomposition\,---\,part {II}: {Sample}
covariance matrices.
\newblock 1990.
\newblock Manuscript submitted to ASSP. Information Systems Laboratory,
Stanford University.