James E. Gentle
Matrix Algebra
Theory, Computations, and Applications
in Statistics
James E. Gentle
Department of Computational
and Data Sciences
George Mason University
4400 University Drive
Fairfax, VA 22030-4444
jgentle@gmu.edu
Editorial Board

George Casella, Department of Statistics, University of Florida, Gainesville, FL 32611-8545, USA
Stephen Fienberg, Department of Statistics, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA
Ingram Olkin, Department of Statistics, Stanford University, Stanford, CA 94305, USA
ISBN: 978-0-387-70872-0
e-ISBN: 978-0-387-70873-7
Library of Congress Control Number: 2007930269
© 2007 Springer Science+Business Media, LLC
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street,
New York, NY, 10013, USA), except for brief excerpts in connection with reviews or scholarly
analysis. Use in connection with any form of information storage and retrieval, electronic
adaptation, computer software, or by similar or dissimilar methodology now known or hereafter
developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or
not they are subject to proprietary rights.
Printed on acid-free paper.
987654321
springer.com
To María
Preface
I began this book as an update of Numerical Linear Algebra for Applications in Statistics, published by Springer in 1998. There was a modest amount of new material to add, but I also wanted to supply more of the reasoning behind the facts about vectors and matrices. I had used material from that text in some courses, and I had spent a considerable amount of class time proving assertions made but not proved in that book. As I embarked on this project, the character of the book began to change markedly. In the previous book, I apologized for spending 30 pages on the theory and basic facts of linear algebra before getting on to the main interest: numerical linear algebra. In the present book, discussion of those basic facts takes up over half of the book.
The orientation and perspective of this book remains numerical linear algebra for applications in statistics. Computational considerations inform the narrative. There is an emphasis on the areas of matrix analysis that are important for statisticians, and the kinds of matrices encountered in statistical applications receive special attention.
This book is divided into three parts plus a set of appendices. The three
parts correspond generally to the three areas of the book’s subtitle — theory,
computations, and applications — although the parts are in a different order,
and there is no firm separation of the topics.
Part I, consisting of Chapters 1 through 7, covers most of the material
in linear algebra needed by statisticians. (The word “matrix” in the title of
the present book may suggest a somewhat more limited domain than “linear
algebra”; but I use the former term only because it seems to be more commonly
used by statisticians and is used more or less synonymously with the latter
term.)
The first four chapters cover the basics of vectors and matrices, concentrating on topics that are particularly relevant for statistical applications. In Chapter 4, it is assumed that the reader is generally familiar with the basics of partial differentiation of scalar functions. Chapters 5 through 7 begin to take on more of an applications flavor, as well as beginning to give more consideration to computational methods. Although the details of the computations
are not covered in those chapters, the topics addressed are oriented more toward computational algorithms. Chapter 5 covers methods for decomposing matrices into useful factors.
Chapter 6 addresses applications of matrices in setting up and solving linear systems, including overdetermined systems. We should not confuse statistical inference with fitting equations to data, although the latter task is a component of the former activity. In Chapter 6, we address the more mechanical aspects of the problem of fitting equations to data. Applications in statistical data analysis are discussed in Chapter 9. In those applications, we need to make statements (that is, assumptions) about relevant probability distributions.
Chapter 7 discusses methods for extracting eigenvalues and eigenvectors. There are many important details of algorithms for eigenanalysis, but they are beyond the scope of this book. As with other chapters in Part I, Chapter 7 makes some reference to statistical applications, but it focuses on the mathematical and mechanical aspects of the problem.
Although the first part is on “theory”, the presentation is informal; neither
definitions nor facts are highlighted by such words as “Definition”, “Theorem”,
“Lemma”, and so forth. It is assumed that the reader follows the natural
development. Most of the facts have simple proofs, and most proofs are given
naturally in the text. No “Proof” and “Q.E.D.” or “□” appear to indicate
beginning and end; again, it is assumed that the reader is engaged in the
development. For example, on page 270:
If A is nonsingular and symmetric, then A^{-1} is also symmetric because

    (A^{-1})^T = (A^T)^{-1} = A^{-1}.
The first part of that sentence could have been stated as a theorem and
given a number, and the last part of the sentence could have been introduced
as the proof, with reference to some previous theorem that the inverse and
transposition operations can be interchanged. (This had already been shown
before page 270 — in an unnumbered theorem of course!)
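The fact in that example is also easy to confirm numerically. Here is a minimal sketch using NumPy; the particular matrix A is an arbitrary illustrative choice, not one taken from the text:

```python
import numpy as np

# A small nonsingular symmetric matrix (arbitrary illustrative values).
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

A_inv = np.linalg.inv(A)

# Inverse and transpose can be interchanged: (A^{-1})^T == (A^T)^{-1}.
assert np.allclose(A_inv.T, np.linalg.inv(A.T))

# Hence the inverse of a symmetric matrix is itself symmetric.
assert np.allclose(A_inv, A_inv.T)
```

The comparisons use np.allclose rather than exact equality because the computed inverse is subject to floating-point rounding, a theme taken up in the computational chapters.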
None of the proofs are original (at least, I don’t think they are), but in most
cases I do not know the original source, or even the source where I first saw
them. I would guess that many go back to C. F. Gauss. Most, whether they
are as old as Gauss or not, have appeared somewhere in the work of C. R. Rao.
Some lengthier proofs are only given in outline, but references are given for
the details. Very useful sources of details of the proofs are Harville (1997),
especially for facts relating to applications in linear models, and Horn and
Johnson (1991) for more general topics, especially those relating to stochastic
matrices. The older books by Gantmacher (1959) provide extensive coverage
and often rather novel proofs. These two volumes have been brought back into
print by the American Mathematical Society.
I also sometimes make simple assumptions without stating them explicitly. For example, I may write “for all i” when i is used as an index to a vector. I hope it is clear that “for all i” means only “for i that correspond to indices