L1 norm
The L1 norm, also known as the Manhattan or taxicab norm, is a measure of vector magnitude that reflects the sum of the absolute values of the vector's components. In real-life terms, it’s akin to calculating the distance one would travel between two points in a city laid out in a grid pattern, moving parallel to the axes.
Let’s look at vector subtraction, which involves finding the difference between corresponding elements of two vectors. For vectors \(\mathbf{x}=(5,2,4)^{T}\) and \(\mathbf{y}=(3,3,2)^{T}\), their difference is \(\mathbf{x}-\mathbf{y}=(2,-1,2)^{T}\). Applying the L1 norm, we sum the absolute values, giving \(\|\mathbf{x}-\mathbf{y}\|_{1} = |2|+|-1|+|2| = 5\), which represents the total 'distance' between the two vectors according to the L1 norm.
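This calculation is easy to verify in code. A minimal Python sketch (using plain lists rather than any particular linear algebra library):

```python
x = [5, 2, 4]
y = [3, 3, 2]

# Element-wise difference: x - y
diff = [a - b for a, b in zip(x, y)]  # [2, -1, 2]

# L1 norm: sum of absolute values of the components
l1 = sum(abs(d) for d in diff)
print(l1)  # 5
```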
L2 norm
The L2 norm is also known as the Euclidean norm and is perhaps the most familiar. This norm measures the straight-line distance between two points in Euclidean space. To calculate it, you square each component of the vector, sum those squares, and take the square root of that sum.
For our example vectors \(\mathbf{x}\) and \(\mathbf{y}\), the L2 norm of their difference is \(\|\mathbf{x}-\mathbf{y}\|_{2} = \sqrt{2^2+(-1)^2+2^2} = \sqrt{9} = 3\). By comparing the L2 norm with other norms, we can understand how the 'straight-line' distance relates to other notions of distance.
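The same difference vector can be fed through the L2 formula directly. A short sketch using only the standard library:

```python
import math

x = [5, 2, 4]
y = [3, 3, 2]
diff = [a - b for a, b in zip(x, y)]  # [2, -1, 2]

# L2 norm: square each component, sum, then take the square root
l2 = math.sqrt(sum(d * d for d in diff))
print(l2)  # 3.0
```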
L infinity norm
The L infinity norm, also known as the maximum norm or Chebyshev norm, gives us the maximum absolute value among the components of a vector. It’s as if you’re looking for the largest deviation in any single direction.
In our ongoing example, when we subtract vectors \(\mathbf{x}\) and \(\mathbf{y}\), we determine the L infinity norm by finding the maximum absolute value in the difference vector \(\mathbf{x}-\mathbf{y}\). Here, that is \(\|\mathbf{x}-\mathbf{y}\|_{\infty} = \max\{|2|, |-1|, |2|\} = 2\). This tells us about the maximum extent to which the vectors differ in any dimension.
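In code, the L infinity norm reduces to a `max` over absolute values. A minimal sketch continuing the same example:

```python
x = [5, 2, 4]
y = [3, 3, 2]
diff = [a - b for a, b in zip(x, y)]  # [2, -1, 2]

# L-infinity norm: the largest absolute component of the difference
linf = max(abs(d) for d in diff)
print(linf)  # 2
```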
Vector subtraction
The process of vector subtraction is central to operations in linear algebra and can be visualized as physical displacement between two points. When we subtract one vector from another, we are essentially measuring the change or distance from one vector to the other in each dimension.
In practice, vector subtraction is straightforward: subtract each element of \(\mathbf{y}\) from the corresponding element of \(\mathbf{x}\), as in our example \(\mathbf{x}-\mathbf{y}=(5-3,\,2-3,\,4-2)^{T}=(2,-1,2)^{T}\). The resulting vector captures the per-component differences between the original vectors and is also the starting point for norm calculations.
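Wrapping this in a small helper function makes the operation reusable across the norm calculations above. A sketch, with `vector_subtract` being an illustrative name rather than a standard library function:

```python
def vector_subtract(x, y):
    """Element-wise difference x - y; assumes equal-length sequences."""
    return [a - b for a, b in zip(x, y)]

diff = vector_subtract([5, 2, 4], [3, 3, 2])
print(diff)  # [2, -1, 2]
```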
Linear algebra applications
Understanding the roles of norms and operations like vector subtraction opens the door to a host of linear algebra applications. From computer graphics, where vectors represent coordinates and transformations, to machine learning algorithms, which often use vector norms for optimization—the utility of these mathematical tools is broad and profound.
Additionally, concepts like the L1 and L2 norms are not just theoretical; they have practical implications in areas like data normalization, error measurement, and regularization in statistics and physics. Through developing a solid grounding in these linear algebra foundations, students are better prepared for advanced and applied mathematics within various contemporary fields.