[T]he point of introducing L^p spaces in the first place is … to exploit … Banach space. For instance, if one has ‖f − g‖ = 0, one would like to conclude that f = g. But because of the equivalence class in the way, one can only conclude that f is equal to g almost everywhere.
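The identification being described can be written out explicitly (a standard restatement, not part of the quoted text): the L^p norm cannot see sets of measure zero, so its vanishing only forces almost-everywhere equality.

```latex
\[
  \|f - g\|_{L^p} \;=\; \left( \int |f - g|^p \, d\mu \right)^{1/p} = 0
  \quad\Longleftrightarrow\quad
  f = g \ \text{almost everywhere}.
\]
% Hence elements of L^p are really equivalence classes of functions
% that agree outside a set of measure zero.
```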
The Lebesgue philosophy is analogous to the “noise-tolerant” philosophy in modern signal processing. If one is receiving a signal (e.g. a television signal) from a noisy source (e.g. a television station in the presence of electrical interference), then any individual component of that signal (e.g. a pixel of the television image) may be corrupted. But as long as the total number of corrupted data points is negligible, one can still get a good enough idea of the image to do things like distinguish foreground from background, compute the area of an object or the mean intensity, etc.
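The “negligible corruption” point can be sketched numerically. This is a hypothetical toy example (the image size, intensity, and corruption count are made up): corrupting a tiny fraction of pixels barely moves an integral-like quantity such as the mean intensity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "clean" 100x100 image where every pixel has intensity 0.5.
image = np.full((100, 100), 0.5)

# Corrupt a negligible fraction of pixels (10 of 10,000) with arbitrary noise.
corrupted = image.copy()
n_bad = 10
idx = rng.choice(image.size, size=n_bad, replace=False)
corrupted.flat[idx] = rng.uniform(0.0, 1.0, size=n_bad)

# The mean intensity barely moves: each corrupted pixel shifts the mean
# by at most 0.5 / 10,000, so the total drift is bounded by 0.0005.
drift = abs(corrupted.mean() - image.mean())
print(drift)
```

The analogy to Lebesgue integration: the integral, like the mean above, is insensitive to changing the function on a set whose measure (here, pixel count) is negligible.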
But abstract mathematics opens up more possibilities.
- Like TV signals. Like 2-D images, or 2-D × time video clips.
- Like crime patterns, dinosaur paw prints, neuronal spike-trains, forged signatures, songs (1-D × time), trajectories, landscapes.
- Like, any complete normed vector space. (= distance exists (the norm) + addition exists (the vector structure) + everything’s included (completeness: limits don’t escape the space) = it’s a Banach space)
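The parenthetical gloss above can be stated precisely (the standard definition, spelled out for reference):

```latex
% A Banach space is a normed vector space $(V, \|\cdot\|)$ that is
% complete: every Cauchy sequence converges to a limit inside $V$.
\[
  \forall \varepsilon > 0 \;\exists N \;\forall m, n \ge N:\;
  \|x_m - x_n\| < \varepsilon
  \quad\Longrightarrow\quad
  \exists x \in V:\; \lim_{n \to \infty} \|x_n - x\| = 0.
\]
```

This is why the quoted passage insists on quotienting by almost-everywhere equality: without it, ‖·‖ on L^p would only be a seminorm, and the space would not qualify.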