Parallel and Distributed Computation: Numerical Methods
Dimitri P. Bertsekas and John N. Tsitsiklis
This book was originally published by Prentice-Hall in 1989, and republished by Athena Scientific in 1997 in paperback form and in 2015 in hardcover. The paperback version was discontinued in mid-2015. The hardcover version can be purchased from Athena Scientific. The book can be freely downloaded in scanned form (735 pages, about 55 MB), and the problem solution manual may also be freely downloaded. A survey containing follow-up research, and a related subsequent paper, are also available. For further discussion of asynchronous algorithms in specialized contexts based on material from this book, see the books Convex Optimization Algorithms and Abstract Dynamic Programming.
 The book is a comprehensive and theoretically sound treatment of parallel and distributed numerical methods. It focuses on algorithms that are naturally suited for massive parallelization, and it explores the fundamental convergence, rate of convergence, communication, and synchronization issues associated with such algorithms.
Reviews:
“This major contribution to the literature belongs on the bookshelf of every scientist with an interest in computational science, directly beside Knuth’s three volumes and Numerical Recipes…”
Anna Nagurney, University of Massachusetts, in the International Journal of Supercomputer Applications
“This major work of exceptional scholarship summarizes more than three decades of research into general-purpose algorithms for solving systems of equations and optimization problems.”
W. F. Smyth, in Computing Reviews
“This book marks an important landmark in the theory of distributed systems and I highly recommend it to students and practicing engineers in the fields of operations research and computer science, as well as to mathematicians interested in numerical methods.”
Lorne G. Mason, in IEEE Communications Magazine
Table of Contents:
Introduction

 Parallel and distributed architectures
 The need for parallel and distributed computation
 Parallel computing systems and their classification
 Models, complexity measures, and some simple algorithms
 Models
 Complexity measures
 Examples: Vector and matrix computations
 Parallelization of iterative methods
 Communication aspects of parallel and distributed systems
 Communication links
 Data link control
 Routing
 Network topologies
 Concurrency and communication tradeoffs
 Examples of matrix-vector calculations
 Synchronization issues in parallel and distributed algorithms
 Synchronous algorithms
 Asynchronous algorithms and the reduction of the synchronization penalty
PART I: SYNCHRONOUS ALGORITHMS
 Algorithms for Systems of Linear Equations and Matrix Inversion
 Parallel algorithms for linear systems with special structure
 Triangular matrices and back substitution
 Tridiagonal systems and odd-even reduction
 Parallel direct methods for general linear equations
 Gaussian elimination
 Triangularization using Givens rotations
 A fast direct matrix inversion algorithm
 Classical iterative methods for systems of linear equations
 Parallel implementation of classical iterative methods
 An example: Poisson’s equation
 Multigrid methods
 Convergence analysis of classical iterative methods
 Background on maximum norms and nonnegative matrices
 Convergence analysis using maximum norms
 Convergence analysis using a quadratic cost function
 The Poisson equation revisited
 The conjugate gradient method
 Description of the algorithm
 Speed of convergence
 Preconditioned conjugate gradient method
 Parallel implementation
 Computation of the invariant distribution of a Markov chain
 Fast iterative matrix inversion
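To give a flavor of the classical iterative methods treated in this chapter, here is a minimal sketch (not the book's own pseudocode; the function name and example system are illustrative) of the Jacobi iteration, whose component updates are independent and therefore naturally parallelizable:

```python
import numpy as np

def jacobi(A, b, x0, iters=200):
    """Jacobi iteration x^{k+1} = D^{-1}(b - (A - D) x^k).

    Every component update uses only the previous iterate, so all n
    updates can in principle be carried out simultaneously on n
    processors.
    """
    D = np.diag(A)              # diagonal of A
    R = A - np.diag(D)          # off-diagonal part of A
    x = x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / D     # all components updated at once
    return x

# A diagonally dominant system, for which the iteration converges.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b, np.zeros(2))
```

Convergence of such schemes, analyzed via maximum norms or a quadratic cost function, is the subject of the chapter's later sections.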
Iterative Methods for Nonlinear Problems

 Contraction mappings
 General results
 Contractions over Cartesian product sets
 Some useful contraction mappings
 Unconstrained optimization
 The main algorithms
 Convergence analysis using the descent approach
 The case of a convex cost function
 Nonlinear algorithms
 Constrained convex optimization
 Optimality conditions and the projection theorem
 The gradient projection algorithm
 Scaled gradient projection algorithms
 The case of a product constraint set: parallel implementations
 Nonlinear algorithms
 Parallelization and decomposition of optimization problems
 Quadratic programming
 Separable strictly convex programming
 The proximal minimization algorithm
 Augmented Lagrangian methods
 Algorithms for variational inequalities
 Examples of variational inequality problems
 Preliminaries
 The projection algorithm
 Linearized algorithms
 The Cartesian product case: Parallel implementations
 Nonlinear algorithms
 Decomposition methods for variational inequalities
Shortest Paths and Dynamic Programming

 The shortest path problem
 The Bellman-Ford algorithm
 Other parallel shortest path methods
 Markov chains with transition costs
 Markovian decision problems
 Discounted problems
 Undiscounted problems – Stochastic shortest paths
 Parallel implementation of the Dynamic Programming iteration
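The synchronous Bellman-Ford iteration discussed in this chapter can be sketched as follows (a minimal illustration, with hypothetical data structures; in each round all node estimates are updated in parallel from the previous round's values):

```python
import math

def bellman_ford(succ, costs, dest, n):
    """Synchronous Bellman-Ford: x_i := min_j (a_ij + x_j).

    succ[i] lists the successors of node i; costs[(i, j)] is the
    length of arc (i, j). All n estimates are updated simultaneously
    in each round, and n - 1 rounds suffice.
    """
    x = {i: math.inf for i in range(n)}
    x[dest] = 0.0
    for _ in range(n - 1):
        x = {i: 0.0 if i == dest else
                min((costs[(i, j)] + x[j] for j in succ[i]),
                    default=math.inf)
             for i in range(n)}
    return x

# Tiny example: shortest distances from each node to node 3.
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
costs = {(0, 1): 1.0, (0, 2): 4.0, (1, 3): 2.0, (2, 3): 1.0}
dist = bellman_ford(succ, costs, 3, 4)
```

The same fixed-point mapping reappears in Part II, where it is shown to converge even under totally asynchronous execution.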
Network Flow Problems

 The linear network flow problem and its dual
 The relaxation method
 Application to the shortest path problem
 Multiple node relaxation method
 The epsilon-relaxation method
 The auction algorithm for the assignment problem
 Parallel versions of the epsilon-relaxation and the auction algorithms
 Complexity analysis of the epsilon-relaxation method and its scaled version
 The scaled version of the algorithm
 Application to the assignment problem
 Network flow problems with strictly convex cost
 The relaxation method
 Convergence analysis
 The problem without arc flow bounds
 An example: Constrained matrix problems
 Parallel implementations of the relaxation method
 Nonlinear multicommodity flow problems – Routing applications
PART II: ASYNCHRONOUS ALGORITHMS
Totally Asynchronous Iterative Methods

 The totally asynchronous algorithmic model
 A general convergence theorem
 Applications to problems involving maximum norm contraction mappings
 Solution of linear systems of equations
 Unconstrained optimization
 Constrained optimization and variational inequalities
 Dynamic programming
 Convergence rate comparison of synchronous and asynchronous algorithms
 Applications to monotone mappings and the shortest path problem
 Linear network flow problems
 Nonlinear network flow problems
 Asynchronous relaxation for ordinary differential equations and two-point boundary value problems
 The asynchronous relaxation algorithm
 Two-point boundary value problems
 The discrete time case
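The totally asynchronous model of this chapter allows each processor to update its own component at arbitrary times using possibly outdated values of the others. A minimal single-machine simulation of this idea, applied to the Jacobi iteration for a diagonally dominant system (so the underlying mapping is a maximum-norm contraction), might look as follows; the function name, random scheduling, and delay mechanism are illustrative assumptions, not the book's formalism:

```python
import random
import numpy as np

def async_jacobi(A, b, x0, updates=2000, max_delay=5, seed=0):
    """Simulated totally asynchronous Jacobi iteration.

    At each step one randomly chosen component is updated using a
    possibly stale copy of the iterate; for a maximum-norm
    contraction the iteration still converges despite the delays.
    """
    rng = random.Random(seed)
    history = [x0.astype(float).copy()]          # past iterates
    x = x0.astype(float).copy()
    for _ in range(updates):
        i = rng.randrange(len(b))                # processor i wakes up
        delay = rng.randrange(min(max_delay, len(history)))
        stale = history[-1 - delay]              # outdated information
        x[i] = (b[i] - A[i] @ stale + A[i, i] * stale[i]) / A[i, i]
        history.append(x.copy())
    return x

# Diagonally dominant system: the iteration mapping is a
# maximum-norm contraction, so asynchronous convergence holds.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = async_jacobi(A, b, np.zeros(2))
```

The chapter's general convergence theorem makes this precise: convergence follows from a nested sequence of sets that the iterate cannot leave, with no synchronization assumptions beyond eventual communication.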
Partially Asynchronous Iterative Methods

 The partially asynchronous algorithmic model
 Algorithms for fixed points of nonexpansive mappings
 A convergence result
 Weakly diagonally dominant systems of linear equations
 Strictly convex network flow problems
 Algorithms for agreement and for Markov chain problems
 The agreement algorithm
 An asynchronous algorithm for the invariant distribution of a Markov chain
 Load balancing in a computer network
 Gradient-like optimization algorithms
 The algorithm and its convergence
 The role of the various parameters
 Block-iterative algorithms
 Gradient projection algorithms
 Distributed asynchronous routing in data networks
 Problem definition
 The algorithm and its convergence
 Discussion
 A model in which several processors may update the same variables
 Stochastic gradient algorithms
 Description of the algorithm and assumptions
 A convergence result
 Discussion and extensions
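Among the partially asynchronous methods above, the agreement algorithm is perhaps the simplest to state: each processor repeatedly replaces its value with a weighted average of its neighbors' values. A minimal synchronous sketch (the weight matrix and graph are illustrative; the book's analysis covers the asynchronous, time-varying case):

```python
import numpy as np

def agreement(x0, weights, rounds=200):
    """Agreement iteration x := W x with row-stochastic W.

    With positive diagonal weights and a connected communication
    graph, all processors' values converge to a common consensus
    value (a weighted average of the initial values).
    """
    x = np.asarray(x0, dtype=float)
    W = np.asarray(weights, dtype=float)
    for _ in range(rounds):
        x = W @ x                # each processor averages its neighbors
    return x

# Three processors on a line graph, each averaging with neighbors.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
x = agreement([0.0, 3.0, 6.0], W)
```

The same averaging mechanism underlies the chapter's load balancing and Markov chain applications.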
Organizing an Asynchronous Network of Processors for Distributed Computation
 Detecting termination of a distributed algorithm
 Snapshots
 Resource scheduling
 Synchronization using rollback: asynchronous simulation
 Maintaining communication with a center
APPENDIXES
 Appendix A: Linear algebra and analysis
 Appendix B: Graph theory
 Appendix C: Optimization and duality theory
 Appendix D: Probability theory and Markov chains