Where do computational mathematics and computational statistics converge?

This overview presents a discussion of several topics of importance shared by the fields of computational mathematics and computational statistics. A brief history of the advances in mathematics and statistics leading to the modern computational era is provided, along with a discussion of the topics at the intersection of the two subjects. The foundational aspects shared by both computational mathematics and computational statistics are explored in elementary discussions suitable for nonexperts and aspiring students of the computational sciences. Finally, the roles played by computational mathematics and computational statistics in a few selected application areas are discussed. WIREs Comput Stat 2014, 6:341–351. doi: 10.1002/wics.1313

This article is categorized under:
Applications of Computational Statistics > Computational Mathematics
Algorithms and Computational Methods > Numerical Methods
Types of matrices. (a) A dense matrix contains mostly nonzero entries. (b) A sparse matrix contains mostly zero entries. (c) A banded matrix contains nonzero entries only within a fixed number of upper and lower ‘bands’ about the diagonal. Banded matrices arising from discretizations of differential and partial differential equations often contain the same number throughout each band and may be symmetric, as this one is. (d) A lower triangular matrix contains nonzero entries only on or below the diagonal. (e) An upper triangular matrix contains nonzero entries only on or above the diagonal. (f) An upper Hessenberg matrix contains nonzero entries only on or above the first sub-diagonal. Similarly, there are lower Hessenberg matrices.
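As a concrete illustration of these structures, the short sketch below builds a small example of each matrix type; NumPy is used purely as an assumption for illustration, since the article prescribes no particular software. The zero/nonzero patterns mirror panels (a) through (f).

import numpy as np

rng = np.random.default_rng(0)
n = 6

# (a) Dense matrix: mostly nonzero entries.
dense = rng.random((n, n))

# (b) Sparse matrix: mostly zero entries; here roughly 80% of the entries are zeroed out.
sparse = np.where(rng.random((n, n)) < 0.2, dense, 0.0)

# (c) Banded (tridiagonal) matrix: nonzeros only on the diagonal and one band
#     above and below it, of the kind that arises from discretized differential equations.
banded = (np.diag(np.full(n, 2.0))
          + np.diag(np.full(n - 1, -1.0), k=1)
          + np.diag(np.full(n - 1, -1.0), k=-1))

# (d) Lower triangular: nonzeros only on or below the diagonal.
lower = np.tril(dense)

# (e) Upper triangular: nonzeros only on or above the diagonal.
upper = np.triu(dense)

# (f) Upper Hessenberg: nonzeros only on or above the first sub-diagonal.
hessenberg = np.triu(dense, k=-1)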
The Jacobian matrix J(X) for a function F(X), where X = {x1, x2, …, xn} and F = {f1(X), f2(X), …, fn(X)}.
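For reference, the standard definition of the matrix shown in the figure is the n × n array of first partial derivatives:

J(X) =
\begin{pmatrix}
\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n}
\end{pmatrix}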
A graphical demonstration of Newton's method. Newton's method finds the zeros of a function by iteratively seeking the point where the tangent line at the current iterate intercepts the x-axis. In this example, one can find the left root of the parabola defined by f(x) = (x − 2)² − 10 by taking an initial guess x0, here x0 = 0, and finding the corresponding tangent line, drawn here as f1(x). The point where the tangent line f1(x) crosses the x-axis gives the next approximation point x1. The tangent line at x1 is then found, and the next approximation point x2 lies at the intersection of this second tangent line f2(x) and the x-axis. The method is carried out in this manner, computing a new tangent line at each new approximation point and moving closer to the true solution at each iteration, until a desired tolerance is achieved.
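The iteration the caption describes reduces to the update x_{k+1} = x_k − f(x_k)/f'(x_k). The sketch below is a minimal Python implementation written for illustration, not taken from the article; it reproduces the example in the figure and converges to the left root 2 − √10 ≈ −1.1623.

import math

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    # Newton's method: repeatedly move to the x-intercept of the tangent line.
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# The example from the figure: f(x) = (x - 2)^2 - 10, starting at x0 = 0.
f = lambda x: (x - 2.0) ** 2 - 10.0
fprime = lambda x: 2.0 * (x - 2.0)

root = newton(f, fprime, x0=0.0)
print(root, 2.0 - math.sqrt(10.0))   # both print roughly -1.16228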
