Hadamard vector product

Given a vector $\vec{a} = a_x\vec{e}_x + a_y\vec{e}_y + a_z\vec{e}_z$ and another vector $\vec{b}$, we can define a component-wise (Hadamard) product $\vec{c} = \vec{a} \star \vec{b}$ by

$$\vec{c} = \vec{a} \star \vec{b} = a_xb_x\vec{e}_x + a_yb_y\vec{e}_y + a_zb_z\vec{e}_z.$$

Indeed,

$$\vec{e}_i \star \vec{e}_j = \begin{cases} \vec{e}_i, & \text{for } i = j, \\ \vec{0}, & \text{for } i \neq j. \end{cases}$$
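To make the definition concrete, here is a minimal sketch in Python, assuming NumPy is available; the vectors are arbitrary illustrative choices.

```python
import numpy as np

# Two illustrative vectors in R^3
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -5.0, 6.0])

# The Hadamard (component-wise) product c = a ⋆ b
c = a * b
print(c)  # [  4. -10.  18.]

# The basis-vector rule: e_i ⋆ e_j equals e_i when i == j, and the zero vector otherwise
e = np.eye(3)       # rows are the standard basis vectors e_x, e_y, e_z
print(e[0] * e[0])  # [1. 0. 0.]  (= e_x)
print(e[0] * e[1])  # [0. 0. 0.]  (zero vector)
```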

Euclidean vectors equipped with such a binary operation form a commutative monoid (not quite an Abelian group, since vectors with a zero component have no inverse), and the operation should be less controversial than the scalar/dot/inner product or the vector/cross product. To investigate why it is nevertheless not popular, we look at its geometric properties:

  1. The “length” of the product, taken as the sum of its components, equals the inner product: $(\vec{a}\star\vec{b})\cdot(\vec{e}_x + \vec{e}_y + \vec{e}_z) = \vec{a}\cdot\vec{b}$ (see the numerical check after this list).
  2. The identity element ($\vec{e}_x + \vec{e}_y + \vec{e}_z$) breaks the symmetry between negative and positive directions …
  3. … Hence, the product does not retain its orientation under rotation.
  4. Its direction … ?
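Here is a quick numerical check of properties 1 and 3; again a sketch assuming NumPy, with illustrative random vectors and a simple rotation.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=3)
b = rng.normal(size=3)

# Property 1: summing the components of a ⋆ b recovers the inner product a · b
print(np.isclose(np.sum(a * b), np.dot(a, b)))  # True

# Property 3: the product is not preserved under rotation.  Rotating both
# factors by 90 degrees about the z-axis does not rotate the product.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
print(np.allclose((R @ a) * (R @ b), R @ (a * b)))  # False (in general)
```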

Let's see if we can get inspiration from some examples:
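For instance, take two perpendicular vectors:

$$\vec{a} = \vec{e}_x + \vec{e}_y, \qquad \vec{b} = \vec{e}_x - \vec{e}_y, \qquad \vec{a}\cdot\vec{b} = 0, \qquad \vec{a}\star\vec{b} = \vec{e}_x - \vec{e}_y = \vec{b}.$$

The product of two perpendicular vectors is thus not the zero vector, but one of the factors itself.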

The direction of the product is not very intuitive.

Jargon in linear algebra

During my BSc studies, I found that the cryptic names of the various mathematical concepts made grasping them (as a Dutch person) harder than it should have been. As I am now teaching linear algebra, I wish to alleviate this problem by explaining to the students how these names relate to their associated concepts. Here is a list of what I found.

  1. A womb can be viewed as a container for its contents, similar(?) to how a rectangular array of numbers can be contained in an algebraic object. As such, the Latin and (Old) French word for womb is used to denote a matrix¹.

  2. When doing row reduction/Gaussian elimination, the row sweeping operation revolves around a non-zero entry that is selected as a reference to guide the row subtractions. The center of an axle, a central entity around which things revolve, is likewise called a pivot; hence the pivot locations of a matrix.

  3. Some of the advantages of positioning military units on a battlefield in a line or in a frontal formation may be combined by placing them in a staggered, so-called echelon form. A similar staircase-like arrangement is formed by the pivot locations of a matrix after Gaussian elimination has taken place. The word echelon in turn stems from a French word for ladder (“échelle”).

  4. The word homogeneous refers to things being “the same”. The right-most column of the augmented matrix for the homogeneous equations is always the same (a column of zeros) and remains as such, even after elementary row operations have been applied.

  5. The central entity of an object may be called a kernel. Since every vector in a vector space is accompanied by a vector in the opposite direction, the “center of mass” of the range of a transformation ($T: \mathbb{R}^n \rightarrow \mathbb{R}^m$) may be said to be the zero vector (i.e. $\mathbf{0} \in \mathbb{R}^m$). As such, we say that the kernel ($\text{Ker}(T) \subseteq \mathbb{R}^n$) is mapped there.

  6. One may carry something from one position to another. The displacement is then the carrier. Such a displacement is often denoted by a vector, “vector” being the Latin word for carrier.

  7. An invertible matrix $A$ can be paired with its inverse $A^{-1}$. Note that $A$ is also the inverse of $A^{-1}$. A matrix that is not invertible cannot be paired and therefore remains alone. We therefore say that it is a singular matrix.

  8. When a criminal is caught via the hints he/she left behind at the scene, these hints can be said to have made a trail that led to the perpetrator. Such a trail can also “follow” a matrix $A$; of particular interest is the “similarity” transformation via an invertible matrix $B$, defined as $C = B^{-1}AB$. A prime example of such a matrix trail is the trace of a matrix, which does not change even though generally $C \neq A$ (see the sketch after this list).

  9. The length ($\|\vec{a}\|$) of a vector ($\vec{a}$) can be expressed using the inner product ($\|\vec{a}\|^2 = \vec{a}\cdot\vec{a}$). H. Grassmann was the first to study this product of a vector with itself, before it was applied as a product of two different vectors. He therefore considered this product to be inward-looking/introvert, and named it as such (albeit in German).

  10. Under a linear transformation, there may exist characteristic vectors that are only scaled by a certain characteristic value. These values may be found as the roots of the characteristic polynomial. Since they are so characteristic, they belong to the transformation. A loose German (and Dutch) translation of something that “belongs” to you is “eigen”, giving rise to the eigenvectors and eigenvalues².

  11. The singular value decomposition (SVD) of a matrix ($A$) places the singular values ($\sigma_i$) in a diagonal matrix. These values are such that $A^TA - \sigma_i^2 I$ is a singular matrix (cf. eigenvalues).
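The claims in items 8, 10 and 11 are easy to check numerically; below is a small sketch, assuming NumPy, using an arbitrary random matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))   # almost surely invertible
I = np.eye(3)

# Item 8: the trace is unchanged by the similarity transformation C = B^{-1} A B
C = np.linalg.inv(B) @ A @ B
print(np.isclose(np.trace(C), np.trace(A)))              # True

# Item 10: each eigenvalue lambda is a root of the characteristic polynomial,
# i.e. det(A - lambda I) = 0 (up to round-off)
for lam in np.linalg.eigvals(A):
    print(np.isclose(np.linalg.det(A - lam * I), 0.0))   # True

# Item 11: for each singular value sigma, A^T A - sigma^2 I is a singular matrix
for sigma in np.linalg.svd(A, compute_uv=False):
    print(np.isclose(np.linalg.det(A.T @ A - sigma**2 * I), 0.0))  # True
```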


  1. In the movie “The Matrix”, people are trapped in a computer simulation called “the Matrix”. In reality, these people are kept in a womb-like basin.

  2. The Dutch word “eigen” seems to have the same meaning as its German counterpart.
