Vector Spaces and Subspaces
Definition. A vector space is a non-empty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars, subject to the ten axioms listed below.
For all vectors u, v, w in V and all scalars c, d:
- If u,v are in V, then u+v is also in V.
- u+v=v+u.
- u+(v+w)=(u+v)+w.
- There is a zero vector 0 in V such that u+0=u.
- For each u in V, there is a vector −u in V such that u+(−u)=0.
- If u is in V and c is a scalar, cu is also in V.
- c(u+v)=cu+cv.
- (c+d)u=cu+du.
- c(du)=(cd)u.
- 1u=u.
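As a concrete sanity check, here is a minimal sketch that tests the eight algebraic axioms numerically for a few arbitrarily chosen vectors in R³ (the specific vectors, scalars, and use of NumPy are illustrative assumptions, not part of the definition):

```python
import numpy as np

# Arbitrary sample vectors in R^3 and sample scalars (illustrative choices).
u = np.array([1.0, 2.0, 3.0])
v = np.array([-4.0, 0.5, 2.0])
w = np.array([0.0, 7.0, -1.0])
c, d = 2.5, -3.0
zero = np.zeros(3)

assert np.allclose(u + v, v + u)                # u + v = v + u
assert np.allclose(u + (v + w), (u + v) + w)    # u + (v + w) = (u + v) + w
assert np.allclose(u + zero, u)                 # u + 0 = u
assert np.allclose(u + (-u), zero)              # u + (-u) = 0
assert np.allclose(c * (u + v), c * u + c * v)  # c(u + v) = cu + cv
assert np.allclose((c + d) * u, c * u + d * u)  # (c + d)u = cu + du
assert np.allclose(c * (d * u), (c * d) * u)    # c(du) = (cd)u
assert np.allclose(1 * u, u)                    # 1u = u
```

The two closure axioms hold automatically in this setting: adding or scaling length-3 arrays always produces another length-3 array.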
Note: The scalars we will be working with are almost always real numbers, but they could also be complex numbers.
Also note that by this definition, the real numbers themselves form a vector space, as do the complex numbers.
We mostly deal with finite-dimensional vector spaces – they look like Rⁿ, where n is a positive integer. This means that any basis of such a space is finite.
There are also infinite-dimensional vector spaces – these can’t be represented in Rⁿ form. For example, the set of all continuous functions on the interval [0,1], denoted C[0,1], is an infinite-dimensional vector space. It satisfies the ten axioms (spot-checked in the sketch after this list):
- Addition is defined pointwise: (f+g)(x) = f(x) + g(x).
- Scalar multiplication is defined pointwise: (cf)(x) = cf(x).
- The set is closed under addition: the sum of two functions continuous on [0,1] is itself continuous on [0,1].
- The constant function f(x) = 0 acts as the zero vector.
- etc…
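Here is a minimal sketch of this function-space structure: elements of C[0,1] are represented as Python callables (with f and g chosen arbitrarily), and a few axioms are spot-checked at sample points. This is a numerical illustration, not a proof that the functions agree on all of [0,1]:

```python
import math

def add(f, g):
    return lambda x: f(x) + g(x)   # pointwise addition: (f+g)(x) = f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)      # pointwise scaling: (cf)(x) = c·f(x)

zero = lambda x: 0.0               # the constant 0 function is the zero vector

f = math.sin                       # arbitrary continuous functions on [0,1]
g = lambda x: x ** 2

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert math.isclose(add(f, g)(x), add(g, f)(x))   # f + g = g + f
    assert math.isclose(add(f, zero)(x), f(x))        # f + 0 = f
    assert math.isclose(scale(2.0, add(f, g))(x),
                        add(scale(2.0, f), scale(2.0, g))(x))  # c(f+g) = cf + cg
```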
The set of all functions on R which are infinitely differentiable, denoted as C∞(R), is also an infinite-dimensional vector space.
A third example of an infinite-dimensional vector space is the set of all absolutely summable sequences – those sequences (xₙ) for which ∑_{n=1}^∞ |xₙ| converges. (This space is often denoted ℓ¹.)
- Sequences can be added together element-wise to produce new sequences.
- If (xₙ) and (yₙ) are absolutely summable, so is (xₙ + yₙ), by the triangle inequality (checked numerically in the sketch after this list):
- ∑_{n=1}^∞ |xₙ + yₙ| ≤ ∑_{n=1}^∞ (|xₙ| + |yₙ|) = ∑_{n=1}^∞ |xₙ| + ∑_{n=1}^∞ |yₙ|
- The left-hand side is bounded by the sum of two convergent series, so it converges as well.
- etc…
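As a quick numerical illustration of the bound above, the sketch below compares truncated sums for two arbitrarily chosen absolutely summable sequences, xₙ = 1/n² and yₙ = (−1/2)ⁿ:

```python
N = 1000  # truncation point for the infinite sums

x = [1 / n**2 for n in range(1, N + 1)]    # x_n = 1/n^2 is absolutely summable
y = [(-0.5)**n for n in range(1, N + 1)]   # y_n = (-1/2)^n is absolutely summable

lhs = sum(abs(a + b) for a, b in zip(x, y))            # ∑ |x_n + y_n|
rhs = sum(abs(a) for a in x) + sum(abs(b) for b in y)  # ∑ |x_n| + ∑ |y_n|

assert lhs <= rhs
print(f"{lhs:.6f} <= {rhs:.6f}")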
The reason it’s important to talk about linear transformations, even though so far we’ve been able to do all of our computations with matrices, is that there is no good analogue of matrices for infinite-dimensional vector spaces.
For example, integration is a linear transformation: T: C[0,1] → R, T(f) = ∫₀¹ f(x) dx.
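To see this linearity concretely, here is a small sketch that approximates the integral with a midpoint Riemann sum and checks T(af + bg) = aT(f) + bT(g); the functions, scalars, and quadrature rule are all illustrative choices:

```python
import math

def T(f, n=100_000):
    """Approximate T(f) = ∫₀¹ f(x) dx with a midpoint Riemann sum."""
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

f = math.sin                 # arbitrary continuous functions on [0,1]
g = lambda x: x ** 2
a, b = 3.0, -2.0             # arbitrary scalars

lhs = T(lambda x: a * f(x) + b * g(x))   # T(af + bg)
rhs = a * T(f) + b * T(g)                # a·T(f) + b·T(g)
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```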
The derivative is also a linear transformation: T: C∞(R) → C∞(R), T(f) = df/dx.
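The same check works for the derivative, here approximated by a central difference quotient (again with arbitrarily chosen functions and scalars):

```python
import math

def D(f, h=1e-6):
    """Approximate T(f) = df/dx with a central difference quotient."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

f, g = math.sin, math.exp    # arbitrary smooth functions
a, b = 2.0, 5.0              # arbitrary scalars

for x in [-1.0, 0.0, 0.5, 2.0]:
    lhs = D(lambda t: a * f(t) + b * g(t))(x)   # T(af + bg) at x
    rhs = a * D(f)(x) + b * D(g)(x)             # a·T(f) + b·T(g) at x
    assert math.isclose(lhs, rhs, rel_tol=1e-6)
```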
Many important objects in math turn out to be vectors, and many important operations in math turn out to be linear transformations!