Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- For the Instructor
- Part I Preliminaries
- 1 Computer Vision, Some Definitions, and Some History
- 2 Writing Programs to Process Images
- 3 Review of Mathematical Principles
- 4 Images: Representation and Creation
- Part II Preprocessing
- Part III Image Understanding
- Part IV The 2D Image in a 3D World
- A Support Vector Machines
- B How to Differentiate a Function Containing a Kernel Operator
- C The Image File System (IFS) Software
- Author Index
- Subject Index
- References
3 - Review of Mathematical Principles
from Part I - Preliminaries
Published online by Cambridge University Press: 25 October 2017
Summary
Practical problems require good math.
– R. Chellappa
Introduction
This chapter is a review of several topics that are prerequisite to using this book as a text. The student should have an undergraduate calculus background equivalent to about three semesters, with some exposure to ordinary and partial differential equations. The student should have completed coursework covering concepts from probability and statistics, including prior probabilities, conditional probability, Bayes' rule, and expectations. Finally, and very important, the student should have strong undergraduate-level training in linear algebra.
This chapter reviews and refreshes many of the concepts from those courses, but only as a review, not as a presentation of entirely new material.
• (Section 3.2) We briefly review important concepts in linear algebra, including various vector and matrix operations, the derivative operators, eigendecomposition, and its relationship to singular value decomposition.
• (Section 3.3) Since almost all computer vision topics can be formulated as minimization problems, we briefly introduce function minimization and discuss gradient descent and simulated annealing, two minimization techniques that can find local and global minima, respectively.
• (Section 3.4) In computer vision, we are often interested in the probability of a certain measurement occurring. In this section, we briefly review concepts such as probability density functions and probability distribution functions.
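Of the two minimization techniques listed above, gradient descent is the simpler. The sketch below illustrates the idea on a one-dimensional quadratic; the function, step size, and iteration count are illustrative choices, not values from the text:

```python
# A minimal sketch of gradient descent on f(x) = (x - 3)^2,
# whose gradient is f'(x) = 2(x - 3). Starting from x0, we
# repeatedly step against the gradient until we (approximately)
# reach the minimizer. This converges to a local minimum; for
# a convex function like this one, that is also the global minimum.

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Iterate x <- x - step * grad(x), starting from x0."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(x_min)  # converges toward the minimizer x = 3
```

Simulated annealing, by contrast, occasionally accepts uphill moves so it can escape local minima, which is why it can reach a global minimum where gradient descent may not.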
A Brief Review of Linear Algebra
In this section, we very briefly review vector and matrix operations. Generally, we denote vectors in boldface lowercase, scalars in lowercase italic Roman, and matrices in uppercase Roman.
Vectors
Vectors are always considered to be column vectors. If we need to write one horizontally for the purpose of saving space in a document, we use transpose notation. For example, we denote a vector that consists of three scalar elements as:
$$\mathbf{x} = [x_1, x_2, x_3]^T.$$
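The column-vector convention and transpose notation can be seen concretely with NumPy arrays (NumPy is an illustrative choice, not part of the text):

```python
import numpy as np

# A three-element column vector: shape (3, 1), one column.
x = np.array([[1.0], [2.0], [3.0]])
print(x.shape)    # (3, 1) -- a column vector
print(x.T.shape)  # (1, 3) -- its transpose is a row vector
```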
The Inner Product
The inner product of two vectors is a scalar, $c = \mathbf{x}^T\mathbf{y}$. Its value is the sum of products of the corresponding elements of the two vectors:
$$c = \mathbf{x}^T\mathbf{y} = \sum_i x_i y_i.$$
You will also sometimes see the notation <x,y> used for inner product. We do not like this because it looks like an expected value of a random variable. One sometimes also sees the “dot product” notation x · y for inner product.
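The sum-of-products definition of the inner product can be checked directly; the vectors below are illustrative examples, not taken from the text:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# The inner product c = x^T y, computed by the library...
c = np.dot(x, y)  # equivalently, x @ y

# ...and by summing products of corresponding elements.
manual = sum(xi * yi for xi, yi in zip(x, y))

print(c)  # 32.0 = 1*4 + 2*5 + 3*6
```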
Fundamentals of Computer Vision, pp. 16–38. Publisher: Cambridge University Press. Print publication year: 2017.