doc-next-gen

Mathlib.Data.Matrix.Basic

Module docstring

{"# Matrices

This file contains basic results on matrices including bundled versions of matrix operators.

Implementation notes

For convenience, Matrix m n α is defined as m → n → α, as this allows elements of the matrix to be accessed with A i j. However, it is not advisable to construct matrices using terms of the form fun i j ↦ _ or even (fun i j ↦ _ : Matrix m n α), as these are not recognized by Lean as having the right type. Instead, Matrix.of should be used.

TODO

Under various conditions, multiplication of infinite matrices makes sense. These have not yet been implemented.

### Bundled versions of Matrix.map
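A minimal sketch, not part of the source, illustrating the convention from the implementation notes: the matrix is built through `Matrix.of`, and entries are still read off by plain application. It assumes `import Mathlib` and `Fin`-indexed natural-number entries.

import Mathlib

open Matrix

-- Preferred style: wrap the defining function in `Matrix.of` so the
-- elaborator sees a `Matrix`, not a bare function `Fin 2 → Fin 2 → ℕ`.
example : Matrix (Fin 2) (Fin 2) ℕ :=
  of fun i j => i.val + j.val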

Matrix.decidableEq instance
[DecidableEq α] [Fintype m] [Fintype n] : DecidableEq (Matrix m n α)
Full source
instance decidableEq [DecidableEq α] [Fintype m] [Fintype n] : DecidableEq (Matrix m n α) :=
  Fintype.decidablePiFintype
Decidable Equality for Matrices over Finite Types
Informal description
For any type $\alpha$ with decidable equality and finite types $m$ and $n$, the equality of matrices in $\text{Matrix}\, m\, n\, \alpha$ is decidable.
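An illustrative check, not from the source, that the instance is found by instance synthesis once the entry type has decidable equality and both index types are finite (assumes `import Mathlib`).

import Mathlib

example : DecidableEq (Matrix (Fin 2) (Fin 3) ℤ) := inferInstance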
Matrix.instFintypeOfDecidableEq instance
{n m} [Fintype m] [DecidableEq m] [Fintype n] [DecidableEq n] (α) [Fintype α] : Fintype (Matrix m n α)
Full source
instance {n m} [Fintype m] [DecidableEq m] [Fintype n] [DecidableEq n] (α) [Fintype α] :
    Fintype (Matrix m n α) := inferInstanceAs (Fintype (m → n → α))
Finiteness of Matrices over Finite Types
Informal description
For finite types $m$ and $n$ with decidable equality and a finite type $\alpha$, the type of matrices $\text{Matrix}\, m\, n\, \alpha$ is finite.
Matrix.instFinite instance
{n m} [Finite m] [Finite n] (α) [Finite α] : Finite (Matrix m n α)
Full source
instance {n m} [Finite m] [Finite n] (α) [Finite α] :
    Finite (Matrix m n α) := inferInstanceAs (Finite (m → n → α))
Finiteness of Matrix Types over Finite Indices and Entries
Informal description
For any finite types $m$ and $n$ and a finite type $\alpha$, the type of matrices $\mathrm{Matrix}\, m\, n\, \alpha$ is finite.
Matrix.ofLinearEquiv definition
[Semiring R] [AddCommMonoid α] [Module R α] : (m → n → α) ≃ₗ[R] Matrix m n α
Full source
/-- This is `Matrix.of` bundled as a linear equivalence. -/
def ofLinearEquiv [Semiring R] [AddCommMonoid α] [Module R α] : (m → n → α) ≃ₗ[R] Matrix m n α where
  __ := ofAddEquiv
  map_smul' _ _ := rfl
Linear equivalence between functions and matrices
Informal description
The linear equivalence `Matrix.ofLinearEquiv` between the space of functions `m → n → α` and the space of matrices `Matrix m n α`, which preserves both the additive structure and scalar multiplication. Specifically, for any scalar `r` in the semiring `R` and any function `f : m → n → α`, the equivalence satisfies `ofLinearEquiv (r • f) = r • ofLinearEquiv f`.
Matrix.coe_ofLinearEquiv theorem
[Semiring R] [AddCommMonoid α] [Module R α] : ⇑(ofLinearEquiv _ : (m → n → α) ≃ₗ[R] Matrix m n α) = of
Full source
@[simp] lemma coe_ofLinearEquiv [Semiring R] [AddCommMonoid α] [Module R α] :
    ⇑(ofLinearEquiv _ : (m → n → α) ≃ₗ[R] Matrix m n α) = of := rfl
Underlying Function of Matrix Linear Equivalence Equals Matrix Construction Function
Informal description
Let $R$ be a semiring, $\alpha$ an additive commutative monoid with a module structure over $R$, and $m$, $n$ types. The underlying function of the linear equivalence `ofLinearEquiv` between the space of functions $m \to n \to \alpha$ and the space of matrices $\mathrm{Matrix}\, m\, n\, \alpha$ is the matrix construction function `Matrix.of`.
Matrix.coe_ofLinearEquiv_symm theorem
[Semiring R] [AddCommMonoid α] [Module R α] : ⇑((ofLinearEquiv _).symm : Matrix m n α ≃ₗ[R] (m → n → α)) = of.symm
Full source
@[simp] lemma coe_ofLinearEquiv_symm [Semiring R] [AddCommMonoid α] [Module R α] :
    ⇑((ofLinearEquiv _).symm : Matrix m n α ≃ₗ[R] (m → n → α)) = of.symm := rfl
Inverse of Linear Equivalence Between Function Space and Matrix Space
Informal description
Let $R$ be a semiring, $\alpha$ an additive commutative monoid with a module structure over $R$, and $m$, $n$ types. The inverse of the linear equivalence $\text{Matrix.ofLinearEquiv}$ between the space of functions $m \to n \to \alpha$ and the space of matrices $\text{Matrix}\, m\, n\, \alpha$ is equal to the inverse of the equivalence $\text{Matrix.of}$.
Matrix.sum_apply theorem
[AddCommMonoid α] (i : m) (j : n) (s : Finset β) (g : β → Matrix m n α) : (∑ c ∈ s, g c) i j = ∑ c ∈ s, g c i j
Full source
theorem sum_apply [AddCommMonoid α] (i : m) (j : n) (s : Finset β) (g : β → Matrix m n α) :
    (∑ c ∈ s, g c) i j = ∑ c ∈ s, g c i j :=
  (congr_fun (s.sum_apply i g) j).trans (s.sum_apply j _)
Entrywise Summation of Matrices
Informal description
Let $\alpha$ be an additive commutative monoid, $m$ and $n$ be types, and $s$ be a finite set of indices of type $\beta$. For any family of matrices $g : \beta \to \mathrm{Matrix}\, m\, n\, \alpha$, the $(i,j)$-th entry of the sum $\sum_{c \in s} g_c$ equals the sum $\sum_{c \in s} (g_c)_{i,j}$ of the $(i,j)$-th entries of the matrices $g_c$.
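A small usage sketch, not part of the source, instantiating the lemma for integer matrices indexed by `Fin 2` (assumes `import Mathlib`).

import Mathlib

example (s : Finset ℕ) (g : ℕ → Matrix (Fin 2) (Fin 2) ℤ) (i j : Fin 2) :
    (∑ c ∈ s, g c) i j = ∑ c ∈ s, g c i j :=
  Matrix.sum_apply i j s g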
Matrix.diagonalAddMonoidHom definition
[AddZeroClass α] : (n → α) →+ Matrix n n α
Full source
/-- `Matrix.diagonal` as an `AddMonoidHom`. -/
@[simps]
def diagonalAddMonoidHom [AddZeroClass α] : (n → α) →+ Matrix n n α where
  toFun := diagonal
  map_zero' := diagonal_zero
  map_add' x y := (diagonal_add x y).symm
Diagonal matrix construction as an additive monoid homomorphism
Informal description
The function `Matrix.diagonalAddMonoidHom` maps a vector $d : n \to \alpha$ to the diagonal matrix with entries $d_i$ on the diagonal and zeros elsewhere, where $\alpha$ is an additive monoid. This mapping is an additive monoid homomorphism, meaning it preserves addition and the zero vector: 1. The zero vector is mapped to the zero matrix 2. The sum of two vectors is mapped to the sum of their corresponding diagonal matrices
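A minimal sanity check, not from the source, that the bundled homomorphism evaluates to `Matrix.diagonal` itself (assumes `import Mathlib`; the equality is definitional).

import Mathlib

open Matrix

example (d : Fin 3 → ℤ) : diagonalAddMonoidHom (Fin 3) ℤ d = diagonal d := rfl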
Matrix.diagonalLinearMap definition
[Semiring R] [AddCommMonoid α] [Module R α] : (n → α) →ₗ[R] Matrix n n α
Full source
/-- `Matrix.diagonal` as a `LinearMap`. -/
@[simps]
def diagonalLinearMap [Semiring R] [AddCommMonoid α] [Module R α] : (n → α) →ₗ[R] Matrix n n α :=
  { diagonalAddMonoidHom n α with map_smul' := diagonal_smul }
Diagonal matrix construction as a linear map
Informal description
Given a semiring $R$ and an additive commutative monoid $\alpha$ equipped with an $R$-module structure, the function maps a vector $d : n \to \alpha$ to the diagonal matrix with entries $d_i$ on the diagonal and zeros elsewhere. This mapping is an $R$-linear map, meaning it preserves addition, scalar multiplication, and the zero vector: 1. The zero vector is mapped to the zero matrix. 2. The sum of two vectors is mapped to the sum of their corresponding diagonal matrices. 3. For any scalar $r \in R$ and vector $d$, the diagonal matrix of $r \cdot d$ is $r$ times the diagonal matrix of $d$.
Matrix.zero_le_one_elem theorem
[Preorder α] [ZeroLEOneClass α] (i j : n) : 0 ≤ (1 : Matrix n n α) i j
Full source
lemma zero_le_one_elem [Preorder α] [ZeroLEOneClass α] (i j : n) :
    0 ≤ (1 : Matrix n n α) i j := by
  by_cases hi : i = j
  · subst hi
    simp
  · simp [hi]
Nonnegativity of Identity Matrix Entries in Ordered Semirings
Informal description
For any type $\alpha$ with a preorder and a `ZeroLEOneClass` instance, and for any indices $i, j$ of a square matrix of size $n \times n$ over $\alpha$, the $(i,j)$-th entry of the identity matrix satisfies $0 \leq (1 : \text{Matrix } n n \alpha)_{i,j}$.
Matrix.zero_le_one_row theorem
[Preorder α] [ZeroLEOneClass α] (i : n) : 0 ≤ (1 : Matrix n n α) i
Full source
lemma zero_le_one_row [Preorder α] [ZeroLEOneClass α] (i : n) :
    0 ≤ (1 : Matrix n n α) i :=
  zero_le_one_elem i
Nonnegativity of Identity Matrix Rows in Ordered Semirings
Informal description
For any type $\alpha$ with a preorder and a `ZeroLEOneClass` instance, and for any row index $i$ of a square matrix of size $n \times n$ over $\alpha$, the $i$-th row of the identity matrix satisfies $0 \leq (1 : \text{Matrix } n n \alpha)_i$ (where the inequality is interpreted pointwise for each entry in the row).
Matrix.diagAddMonoidHom definition
[AddZeroClass α] : Matrix n n α →+ n → α
Full source
/-- `Matrix.diag` as an `AddMonoidHom`. -/
@[simps]
def diagAddMonoidHom [AddZeroClass α] : Matrix n n α →+ n → α where
  toFun := diag
  map_zero' := diag_zero
  map_add' := diag_add
Diagonal extraction as an additive monoid homomorphism
Informal description
The function that maps a square matrix $A$ of size $n \times n$ over a type $\alpha$ with an additive zero class structure to its diagonal, viewed as an additive monoid homomorphism. More precisely, this is the bundled version of the diagonal extraction function $\text{diag} : \text{Matrix}\ n\ n\ \alpha \to n \to \alpha$, which satisfies: 1. $\text{diag}(0) = 0$ (preservation of zero matrix) 2. $\text{diag}(A + B) = \text{diag}(A) + \text{diag}(B)$ for all matrices $A, B$ (preservation of addition)
Matrix.diagLinearMap definition
[Semiring R] [AddCommMonoid α] [Module R α] : Matrix n n α →ₗ[R] n → α
Full source
/-- `Matrix.diag` as a `LinearMap`. -/
@[simps]
def diagLinearMap [Semiring R] [AddCommMonoid α] [Module R α] : Matrix n n α →ₗ[R] n → α :=
  { diagAddMonoidHom n α with map_smul' := diag_smul }
Diagonal extraction as a linear map
Informal description
The linear map version of the diagonal extraction function, which maps a square matrix $A$ of size $n \times n$ over a module $\alpha$ to its diagonal entries. More precisely, given a semiring $R$ and an $R$-module $\alpha$, this is the $R$-linear map $\text{diag} : \text{Matrix}\ n\ n\ \alpha \to n \to \alpha$ that satisfies: 1. $\text{diag}(A + B) = \text{diag}(A) + \text{diag}(B)$ for all matrices $A, B$ (additivity) 2. $\text{diag}(r \cdot A) = r \cdot \text{diag}(A)$ for all $r \in R$ and matrices $A$ (homogeneity)
Matrix.diag_list_sum theorem
[AddMonoid α] (l : List (Matrix n n α)) : diag l.sum = (l.map diag).sum
Full source
@[simp]
theorem diag_list_sum [AddMonoid α] (l : List (Matrix n n α)) : diag l.sum = (l.map diag).sum :=
  map_list_sum (diagAddMonoidHom n α) l
Diagonal of Sum of Matrices Equals Sum of Diagonals for Lists
Informal description
For any additive monoid $\alpha$ and a list $l$ of square matrices of size $n \times n$ over $\alpha$, the diagonal of the sum of matrices in $l$ is equal to the sum of the diagonals of the matrices in $l$. That is, \[ \mathrm{diag}\left(\sum_{A \in l} A\right) = \sum_{A \in l} \mathrm{diag}(A). \]
Matrix.diag_multiset_sum theorem
[AddCommMonoid α] (s : Multiset (Matrix n n α)) : diag s.sum = (s.map diag).sum
Full source
@[simp]
theorem diag_multiset_sum [AddCommMonoid α] (s : Multiset (Matrix n n α)) :
    diag s.sum = (s.map diag).sum :=
  map_multiset_sum (diagAddMonoidHom n α) s
Diagonal Extraction Preserves Multiset Summation of Matrices
Informal description
Let $\alpha$ be an additive commutative monoid and let $s$ be a multiset of $n \times n$ matrices over $\alpha$. Then the diagonal of the sum of all matrices in $s$ is equal to the sum of the diagonals of each matrix in $s$. In symbols: \[ \text{diag}\left(\sum_{A \in s} A\right) = \sum_{A \in s} \text{diag}(A) \]
Matrix.diag_sum theorem
{ι} [AddCommMonoid α] (s : Finset ι) (f : ι → Matrix n n α) : diag (∑ i ∈ s, f i) = ∑ i ∈ s, diag (f i)
Full source
@[simp]
theorem diag_sum {ι} [AddCommMonoid α] (s : Finset ι) (f : ι → Matrix n n α) :
    diag (∑ i ∈ s, f i) = ∑ i ∈ s, diag (f i) :=
  map_sum (diagAddMonoidHom n α) f s
Diagonal of a Sum of Matrices Equals Sum of Diagonals
Informal description
Let $\alpha$ be an additive commutative monoid, $n$ a type, $\iota$ an index type, $s$ a finite subset of $\iota$, and $f : \iota \to \mathrm{Matrix}\,n\,n\,\alpha$ a family of square matrices. Then the diagonal of the sum of matrices $\sum_{i \in s} f(i)$ is equal to the sum of their diagonals, i.e., $$ \mathrm{diag}\left(\sum_{i \in s} f(i)\right) = \sum_{i \in s} \mathrm{diag}(f(i)). $$
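An illustrative instantiation, not part of the source, of `diag_sum` for integer matrices indexed by `Fin 2` (assumes `import Mathlib`).

import Mathlib

open Matrix

example (s : Finset ℕ) (f : ℕ → Matrix (Fin 2) (Fin 2) ℤ) :
    diag (∑ i ∈ s, f i) = ∑ i ∈ s, diag (f i) :=
  diag_sum s f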
Matrix.diagonalRingHom definition
[Fintype n] [DecidableEq n] : (n → α) →+* Matrix n n α
Full source
/-- `Matrix.diagonal` as a `RingHom`. -/
@[simps]
def diagonalRingHom [Fintype n] [DecidableEq n] : (n → α) →+* Matrix n n α :=
  { diagonalAddMonoidHom n α with
    toFun := diagonal
    map_one' := diagonal_one
    map_mul' := fun _ _ => (diagonal_mul_diagonal' _ _).symm }
Diagonal matrix construction as a semiring homomorphism
Informal description
Given a finite type `n` with decidable equality and a semiring `α`, the function `Matrix.diagonalRingHom` maps a vector `d : n → α` to the diagonal matrix with entries `d i` on the diagonal and zeros elsewhere. This mapping is a semiring homomorphism, meaning it preserves addition, multiplication, and the multiplicative identity: 1. The zero vector is mapped to the zero matrix 2. The vector with all entries equal to 1 is mapped to the identity matrix 3. The sum (resp. product) of two vectors is mapped to the sum (resp. product) of their corresponding diagonal matrices
Matrix.diagonal_pow theorem
[Fintype n] [DecidableEq n] (v : n → α) (k : ℕ) : diagonal v ^ k = diagonal (v ^ k)
Full source
theorem diagonal_pow [Fintype n] [DecidableEq n] (v : n → α) (k : ℕ) :
    diagonal v ^ k = diagonal (v ^ k) :=
  (map_pow (diagonalRingHom n α) v k).symm
Power of Diagonal Matrix Equals Diagonal of Powers: $(\mathrm{diag}(v))^k = \mathrm{diag}(v^k)$
Informal description
Let $n$ be a finite type with decidable equality and $\alpha$ a semiring. For any vector $v : n \to \alpha$ and natural number $k$, the $k$-th power of the diagonal matrix constructed from $v$ is equal to the diagonal matrix constructed from the entrywise $k$-th power of $v$, i.e., $$ (\mathrm{diag}(v))^k = \mathrm{diag}(v^k). $$
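A short usage sketch, not from the source, squaring a diagonal integer matrix (assumes `import Mathlib`).

import Mathlib

open Matrix

example (v : Fin 3 → ℤ) : diagonal v ^ 2 = diagonal (v ^ 2) :=
  diagonal_pow v 2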
Matrix.scalar definition
(n : Type u) [DecidableEq n] [Fintype n] : α →+* Matrix n n α
Full source
/-- The ring homomorphism `α →+* Matrix n n α`
sending `a` to the diagonal matrix with `a` on the diagonal.
-/
def scalar (n : Type u) [DecidableEq n] [Fintype n] : α →+* Matrix n n α :=
  (diagonalRingHom n α).comp <| Pi.constRingHom n α
Scalar matrix construction as a semiring homomorphism
Informal description
Given a finite type $n$ with decidable equality and a semiring $\alpha$, the function $\text{scalar}$ maps a scalar $a \in \alpha$ to the diagonal matrix with $a$ on every diagonal entry and zeros elsewhere. This mapping is a semiring homomorphism, meaning it preserves addition, multiplication, and the multiplicative identity: 1. The scalar $0$ is mapped to the zero matrix 2. The scalar $1$ is mapped to the identity matrix 3. The sum (resp. product) of two scalars is mapped to the sum (resp. product) of their corresponding scalar matrices
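An illustrative example, not part of the source, showing via `scalar_apply` that `scalar` agrees with `diagonal` of a constant function (assumes `import Mathlib`).

import Mathlib

open Matrix

example : scalar (Fin 2) (3 : ℤ) = diagonal fun _ => (3 : ℤ) :=
  scalar_apply 3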
Matrix.scalar_apply theorem
(a : α) : scalar n a = diagonal fun _ => a
Full source
@[simp]
theorem scalar_apply (a : α) : scalar n a = diagonal fun _ => a :=
  rfl
Scalar Matrix as Diagonal Matrix with Constant Entries
Informal description
For any scalar $a$ in a type $\alpha$, the scalar matrix $\text{scalar}_n(a)$ (a diagonal matrix with $a$ on all diagonal entries) is equal to the diagonal matrix constructed from the constant function $\lambda \_, a$.
Matrix.scalar_inj theorem
[Nonempty n] {r s : α} : scalar n r = scalar n s ↔ r = s
Full source
theorem scalar_inj [Nonempty n] {r s : α} : scalar n r = scalar n s ↔ r = s :=
  (diagonal_injective.comp Function.const_injective).eq_iff
Injectivity of Scalar Matrix Construction: $\text{scalar}_n(r) = \text{scalar}_n(s) \leftrightarrow r = s$
Informal description
For a nonempty finite type $n$ with decidable equality and any two scalars $r, s$ in a semiring $\alpha$, the scalar matrices $\text{scalar}_n(r)$ and $\text{scalar}_n(s)$ are equal if and only if $r = s$.
Matrix.scalar_commute_iff theorem
{r : α} {M : Matrix n n α} : Commute (scalar n r) M ↔ r • M = MulOpposite.op r • M
Full source
theorem scalar_commute_iff {r : α} {M : Matrix n n α} :
    Commute (scalar n r) M ↔ r • M = MulOpposite.op r • M := by
  simp_rw [Commute, SemiconjBy, scalar_apply, ← smul_eq_diagonal_mul, ← op_smul_eq_mul_diagonal]
Commutation Criterion for Scalar Matrices: $(\text{scalar}_n(r))M = M(\text{scalar}_n(r)) \leftrightarrow r \cdot M = \text{op}(r) \cdot M$
Informal description
Let $r$ be an element of a type $\alpha$ and $M$ be a square matrix over $\alpha$ with indices in a finite type $n$. The scalar matrix $\text{scalar}_n(r)$ commutes with $M$ if and only if the scalar multiplication $r \cdot M$ equals the scalar multiplication $\text{op}(r) \cdot M$ in the multiplicative opposite algebra.
Matrix.scalar_commute theorem
(r : α) (hr : ∀ r', Commute r r') (M : Matrix n n α) : Commute (scalar n r) M
Full source
theorem scalar_commute (r : α) (hr : ∀ r', Commute r r') (M : Matrix n n α) :
    Commute (scalar n r) M := scalar_commute_iff.2 <| ext fun _ _ => hr _
Commutation of Scalar Matrices with All Matrices When Scalar Commutes Universally
Informal description
Let $r$ be an element of a type $\alpha$ such that $r$ commutes with every element $r'$ in $\alpha$ (i.e., $r * r' = r' * r$ for all $r' \in \alpha$). Then for any square matrix $M$ over $\alpha$ with indices in a finite type $n$, the scalar matrix $\text{scalar}_n(r)$ commutes with $M$.
Matrix.instAlgebra instance
: Algebra R (Matrix n n α)
Full source
instance instAlgebra : Algebra R (Matrix n n α) where
  algebraMap := (Matrix.scalar n).comp (algebraMap R α)
  commutes' _ _ := scalar_commute _ (fun _ => Algebra.commutes _ _) _
  smul_def' r x := by ext; simp [Matrix.scalar, Algebra.smul_def r]
Algebra Structure on Square Matrices
Informal description
For any finite type $n$ with decidable equality, any commutative semiring $R$, and any semiring $\alpha$ that is an $R$-algebra, the space of square matrices $\text{Matrix}\, n\, n\, \alpha$ is an $R$-algebra. The algebra map sends an element $r \in R$ to the diagonal matrix with $\text{algebraMap}\, R\, \alpha\, r$ on the diagonal and zeros elsewhere.
Matrix.algebraMap_matrix_apply theorem
{r : R} {i j : n} : algebraMap R (Matrix n n α) r i j = if i = j then algebraMap R α r else 0
Full source
theorem algebraMap_matrix_apply {r : R} {i j : n} :
    algebraMap R (Matrix n n α) r i j = if i = j then algebraMap R α r else 0 := by
  dsimp [algebraMap, Algebra.algebraMap, Matrix.scalar]
  split_ifs with h <;> simp [h, Matrix.one_apply_ne]
Algebra Map Entry Formula for Matrices: $(\text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \alpha)\, r)_{i,j}$
Informal description
Let $R$ be a commutative semiring, $\alpha$ a semiring that is an $R$-algebra, and $n$ a finite type with decidable equality. For any element $r \in R$, the $(i,j)$-th entry of the algebra map $\text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \alpha)\, r$ is given by: \[ (\text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \alpha)\, r)_{i,j} = \begin{cases} \text{algebraMap}\, R\, \alpha\, r & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases} \]
Matrix.algebraMap_eq_diagonal theorem
(r : R) : algebraMap R (Matrix n n α) r = diagonal (algebraMap R (n → α) r)
Full source
theorem algebraMap_eq_diagonal (r : R) :
    algebraMap R (Matrix n n α) r = diagonal (algebraMap R (n → α) r) := rfl
Algebra Map to Diagonal Matrix Correspondence
Informal description
For any element $r$ in a commutative ring $R$, the algebra map from $R$ to the ring of $n \times n$ matrices over a semiring $\alpha$ sends $r$ to the diagonal matrix whose diagonal entries are all equal to the image of $r$ under the algebra map from $R$ to $\alpha$. In other words, the matrix $\text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \alpha)\, r$ is equal to the diagonal matrix $\text{diagonal}\, (\text{algebraMap}\, R\, (n \to \alpha)\, r)$.
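A minimal instantiation, not from the source, of the statement with `R = α = ℚ` and `n = Fin 2` (assumes `import Mathlib`).

import Mathlib

open Matrix

example (r : ℚ) :
    algebraMap ℚ (Matrix (Fin 2) (Fin 2) ℚ) r = diagonal (algebraMap ℚ (Fin 2 → ℚ) r) :=
  algebraMap_eq_diagonal r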
Matrix.algebraMap_eq_diagonalRingHom theorem
: algebraMap R (Matrix n n α) = (diagonalRingHom n α).comp (algebraMap R _)
Full source
theorem algebraMap_eq_diagonalRingHom :
    algebraMap R (Matrix n n α) = (diagonalRingHom n α).comp (algebraMap R _) := rfl
Algebra Map Factorization through Diagonal Matrices
Informal description
For any commutative ring $R$ and semiring $\alpha$, the algebra map from $R$ to the ring of $n \times n$ matrices over $\alpha$ is equal to the composition of the algebra map from $R$ to the space of diagonal vectors $n \to \alpha$ with the diagonal matrix construction homomorphism. That is, the following diagram commutes: \[ \text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \alpha) = \text{diagonalRingHom}\, n\, \alpha \circ \text{algebraMap}\, R\, (n \to \alpha) \]
Matrix.map_algebraMap theorem
(r : R) (f : α → β) (hf : f 0 = 0) (hf₂ : f (algebraMap R α r) = algebraMap R β r) : (algebraMap R (Matrix n n α) r).map f = algebraMap R (Matrix n n β) r
Full source
@[simp]
theorem map_algebraMap (r : R) (f : α → β) (hf : f 0 = 0)
    (hf₂ : f (algebraMap R α r) = algebraMap R β r) :
    (algebraMap R (Matrix n n α) r).map f = algebraMap R (Matrix n n β) r := by
  rw [algebraMap_eq_diagonal, algebraMap_eq_diagonal, diagonal_map hf]
  simp [hf₂]
Preservation of Scalar Matrices under Entry-wise Map
Informal description
Let $R$ be a commutative ring, and let $\alpha$ and $\beta$ be semirings with algebra structures over $R$. Given a function $f : \alpha \to \beta$ such that $f(0) = 0$ and $f$ preserves the algebra map (i.e., $f(\text{algebraMap}\, R\, \alpha\, r) = \text{algebraMap}\, R\, \beta\, r$ for any $r \in R$), then applying $f$ entry-wise to the scalar matrix $\text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \alpha)\, r$ yields the scalar matrix $\text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \beta)\, r$. In other words, $$(\text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \alpha)\, r).\text{map}\, f = \text{algebraMap}\, R\, (\text{Matrix}\, n\, n\, \beta)\, r.$$
Matrix.diagonalAlgHom definition
: (n → α) →ₐ[R] Matrix n n α
Full source
/-- `Matrix.diagonal` as an `AlgHom`. -/
@[simps]
def diagonalAlgHom : (n → α) →ₐ[R] Matrix n n α :=
  { diagonalRingHom n α with
    toFun := diagonal
    commutes' := fun r => (algebraMap_eq_diagonal r).symm }
Diagonal matrix construction as an algebra homomorphism
Informal description
Given a finite type `n` with decidable equality, a commutative semiring `R`, and an `R`-algebra `α`, the function `diagonalAlgHom` maps a vector `d : n → α` to the diagonal matrix with entries `d i` on the diagonal and zeros elsewhere. This mapping is an algebra homomorphism, meaning it preserves addition, scalar multiplication, multiplication, and the multiplicative identity: 1. The zero vector is mapped to the zero matrix 2. The vector with all entries equal to 1 is mapped to the identity matrix 3. The sum (resp. product) of two vectors is mapped to the sum (resp. product) of their corresponding diagonal matrices 4. The scalar multiplication by an element `r ∈ R` is mapped to the scalar multiplication of the corresponding diagonal matrix by `r`.
Matrix.entryAddHom definition
(i : m) (j : n) : AddHom (Matrix m n α) α
Full source
/-- Extracting entries from a matrix as an additive homomorphism. -/
@[simps]
def entryAddHom (i : m) (j : n) : AddHom (Matrix m n α) α where
  toFun M := M i j
  map_add' _ _ := rfl
Matrix entry extraction as an additive homomorphism
Informal description
For given row index $i$ and column index $j$, the function maps a matrix $M$ to its $(i,j)$-th entry $M_{i,j}$, and this mapping preserves addition. That is, for any two matrices $A$ and $B$, the $(i,j)$-th entry of $A + B$ is equal to the sum of the $(i,j)$-th entries of $A$ and $B$.
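A small sanity check, not part of the source: the bundled hom applied to a sum of matrices unfolds to the sum of the corresponding entries (assumes `import Mathlib`; the equality is definitional).

import Mathlib

open Matrix

example (A B : Matrix (Fin 2) (Fin 2) ℤ) (i j : Fin 2) :
    entryAddHom ℤ i j (A + B) = A i j + B i j := rfl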
Matrix.entryAddHom_eq_comp theorem
{i : m} {j : n} : entryAddHom α i j = ((Pi.evalAddHom (fun _ => α) j).comp (Pi.evalAddHom _ i)).comp (AddHomClass.toAddHom ofAddEquiv.symm)
Full source
lemma entryAddHom_eq_comp {i : m} {j : n} :
    entryAddHom α i j =
      ((Pi.evalAddHom (fun _ => α) j).comp (Pi.evalAddHom _ i)).comp
        (AddHomClass.toAddHom ofAddEquiv.symm) :=
  rfl
Decomposition of Matrix Entry Extraction as Composition of Evaluation Homomorphisms
Informal description
For any row index $i \in m$ and column index $j \in n$, the additive homomorphism that extracts the $(i,j)$-th entry of a matrix is equal to the composition of the following additive homomorphisms: 1. The additive homomorphism that evaluates a function at index $i$ (from the type of matrices to the type of row vectors), 2. The additive homomorphism that evaluates a row vector at index $j$ (from the type of row vectors to the entry type $\alpha$), and 3. The additive homomorphism corresponding to the inverse of the additive equivalence between functions and matrices. In symbols, this can be written as: \[ \text{entryAddHom}_{\alpha}\, i\, j = \left(\text{evalAddHom}_{(\lambda \_, \alpha)}\, j \circ \text{evalAddHom}_{(\lambda \_, \text{row})}\, i\right) \circ \text{toAddHom}(\text{ofAddEquiv}^{-1}) \] where $\text{row}$ represents the type of row vectors (functions $n \to \alpha$).
Matrix.entryAddMonoidHom definition
(i : m) (j : n) : Matrix m n α →+ α
Full source
/--
Extracting entries from a matrix as an additive monoid homomorphism. Note this cannot be upgraded to
a ring homomorphism, as it does not respect multiplication.
-/
@[simps]
def entryAddMonoidHom (i : m) (j : n) : Matrix m n α →+ α where
  toFun M := M i j
  map_add' _ _ := rfl
  map_zero' := rfl
Matrix entry extraction as an additive monoid homomorphism
Informal description
For given row index $i$ and column index $j$, the function maps a matrix $M$ to its $(i,j)$-th entry $M_{i,j}$, and this mapping preserves addition and the zero element. That is: 1. For any two matrices $A$ and $B$, the $(i,j)$-th entry of $A + B$ is equal to the sum of the $(i,j)$-th entries of $A$ and $B$. 2. The $(i,j)$-th entry of the zero matrix is equal to the zero element of the underlying type. This is an additive monoid homomorphism from the additive monoid of matrices to the underlying additive monoid of matrix entries.
Matrix.entryAddMonoidHom_eq_comp theorem
{i : m} {j : n} : entryAddMonoidHom α i j = ((Pi.evalAddMonoidHom (fun _ => α) j).comp (Pi.evalAddMonoidHom _ i)).comp (AddMonoidHomClass.toAddMonoidHom ofAddEquiv.symm)
Full source
lemma entryAddMonoidHom_eq_comp {i : m} {j : n} :
    entryAddMonoidHom α i j =
      ((Pi.evalAddMonoidHom (fun _ => α) j).comp (Pi.evalAddMonoidHom _ i)).comp
        (AddMonoidHomClass.toAddMonoidHom ofAddEquiv.symm) := by
  rfl
Decomposition of Matrix Entry Extraction as Composition of Evaluation Homomorphisms
Informal description
For any row index $i \in m$ and column index $j \in n$, the additive monoid homomorphism that extracts the $(i,j)$-th entry of a matrix $M \in \mathrm{Matrix}\, m\, n\, \alpha$ can be expressed as the composition of three additive monoid homomorphisms: 1. The inverse of the additive equivalence between functions and matrices, 2. The evaluation homomorphism at row $i$, 3. The evaluation homomorphism at column $j$. That is, the entry extraction homomorphism $\mathrm{entryAddMonoidHom}_{\alpha}\, i\, j$ satisfies: $$ \mathrm{entryAddMonoidHom}_{\alpha}\, i\, j = \left(\mathrm{evalAddMonoidHom}_{(\cdot \to \alpha)}\, j\right) \circ \left(\mathrm{evalAddMonoidHom}_{(m \to n \to \alpha)}\, i\right) \circ \mathrm{ofAddEquiv}^{-1} $$ where $\circ$ denotes composition of homomorphisms.
Matrix.evalAddMonoidHom_comp_diagAddMonoidHom theorem
(i : m) : (Pi.evalAddMonoidHom _ i).comp (diagAddMonoidHom m α) = entryAddMonoidHom α i i
Full source
@[simp] lemma evalAddMonoidHom_comp_diagAddMonoidHom (i : m) :
    (Pi.evalAddMonoidHom _ i).comp (diagAddMonoidHom m α) = entryAddMonoidHom α i i := by
  simp [AddMonoidHom.ext_iff]
Composition of Diagonal and Evaluation Homomorphisms Equals Entry Homomorphism
Informal description
For any index $i$ of a square matrix, the composition of the evaluation additive monoid homomorphism at $i$ with the diagonal extraction additive monoid homomorphism is equal to the additive monoid homomorphism that extracts the $(i,i)$-th entry of the matrix. More precisely, let $\text{diag} : \text{Matrix}\ m\ m\ \alpha \to m \to \alpha$ be the diagonal extraction function, and let $\text{eval}_i : (m \to \alpha) \to \alpha$ be the evaluation at index $i$. Then: $$ \text{eval}_i \circ \text{diag} = \text{entry}_{i,i} $$ where $\text{entry}_{i,i}$ is the function that extracts the $(i,i)$-th entry of a matrix.
Matrix.entryAddMonoidHom_toAddHom theorem
{i : m} {j : n} : (entryAddMonoidHom α i j : AddHom _ _) = entryAddHom α i j
Full source
@[simp] lemma entryAddMonoidHom_toAddHom {i : m} {j : n} :
  (entryAddMonoidHom α i j : AddHom _ _) = entryAddHom α i j := rfl
Equality of Matrix Entry Extraction as Additive Homomorphisms
Informal description
For any row index $i$ and column index $j$, the additive monoid homomorphism that extracts the $(i,j)$-th entry of a matrix is equal (as an additive homomorphism) to the additive homomorphism that extracts the $(i,j)$-th entry.
Matrix.entryLinearMap definition
(i : m) (j : n) : Matrix m n α →ₗ[R] α
Full source
/--
Extracting entries from a matrix as a linear map. Note this cannot be upgraded to an algebra
homomorphism, as it does not respect multiplication.
-/
@[simps]
def entryLinearMap (i : m) (j : n) :
    Matrix m n α →ₗ[R] α where
  toFun M := M i j
  map_add' _ _ := rfl
  map_smul' _ _ := rfl
Matrix entry extraction linear map
Informal description
For any row index $i$ and column index $j$, the linear map that extracts the $(i,j)$-th entry from a matrix $M$ over a semiring $R$. Specifically, it maps a matrix $M$ to its entry $M_{ij}$ and satisfies the linearity properties: 1. $\text{entryLinearMap}_{i,j}(M + N) = \text{entryLinearMap}_{i,j}(M) + \text{entryLinearMap}_{i,j}(N)$ for any matrices $M, N$ 2. $\text{entryLinearMap}_{i,j}(r \cdot M) = r \cdot \text{entryLinearMap}_{i,j}(M)$ for any scalar $r \in R$ and matrix $M$
Matrix.entryLinearMap_eq_comp theorem
{i : m} {j : n} : entryLinearMap R α i j = LinearMap.proj j ∘ₗ LinearMap.proj i ∘ₗ (ofLinearEquiv R).symm.toLinearMap
Full source
lemma entryLinearMap_eq_comp {i : m} {j : n} :
    entryLinearMap R α i j =
      LinearMap.proj j ∘ₗ LinearMap.proj i ∘ₗ (ofLinearEquiv R).symm.toLinearMap := by
  rfl
Matrix Entry Extraction as Composition of Projections and Equivalence
Informal description
For any row index $i$ and column index $j$, the linear map that extracts the $(i,j)$-th entry of a matrix is equal to the composition of: 1. The linear equivalence inverse that converts matrices back to functions (ofLinearEquiv R).symm 2. The projection linear map for row $i$ (LinearMap.proj i) 3. The projection linear map for column $j$ (LinearMap.proj j) In other words, for any matrix $A \in \mathrm{Matrix}\, m\, n\, \alpha$, we have: $$A_{ij} = (\text{proj}_j \circ \text{proj}_i \circ \text{ofLinearEquiv}^{-1})(A)$$
Matrix.proj_comp_diagLinearMap theorem
(i : m) : LinearMap.proj i ∘ₗ diagLinearMap m R α = entryLinearMap R α i i
Full source
@[simp] lemma proj_comp_diagLinearMap (i : m) :
    LinearMap.proj i ∘ₗ diagLinearMap m R α = entryLinearMap R α i i := by
  simp [LinearMap.ext_iff]
Diagonal Projection Equals Matrix Entry Extraction
Informal description
For any index $i$ in the row/column index set $m$, the composition of the projection linear map $\text{proj}_i$ with the diagonal extraction linear map $\text{diag}$ from square matrices over $R$-module $\alpha$ equals the linear map that extracts the $(i,i)$-th entry from a matrix. In other words, for any square matrix $A \in \text{Matrix}\, m\, m\, \alpha$, we have: $$(\text{proj}_i \circ \text{diag})(A) = A_{ii}$$
Matrix.entryLinearMap_toAddMonoidHom theorem
{i : m} {j : n} : (entryLinearMap R α i j : _ →+ _) = entryAddMonoidHom α i j
Full source
@[simp] lemma entryLinearMap_toAddMonoidHom {i : m} {j : n} :
    (entryLinearMap R α i j : _ →+ _) = entryAddMonoidHom α i j := rfl
Equality of Matrix Entry Extraction as Additive Monoid Homomorphisms
Informal description
For any row index $i$ and column index $j$, the additive monoid homomorphism obtained by restricting the linear map $\text{entryLinearMap}_{R,\alpha,i,j}$ to the additive structure is equal to the additive monoid homomorphism $\text{entryAddMonoidHom}_{\alpha,i,j}$ that extracts the $(i,j)$-th entry of a matrix.
Matrix.entryLinearMap_toAddHom theorem
{i : m} {j : n} : (entryLinearMap R α i j : AddHom _ _) = entryAddHom α i j
Full source
@[simp] lemma entryLinearMap_toAddHom {i : m} {j : n} :
    (entryLinearMap R α i j : AddHom _ _) = entryAddHom α i j := rfl
Equality of Matrix Entry Extraction as Additive Homomorphisms: $\text{entryLinearMap}_{R,\alpha}(i,j)|_{\text{Add}} = \text{entryAddHom}_{\alpha}(i,j)$
Informal description
For any row index $i$ and column index $j$, the additive homomorphism obtained by restricting the linear map $\text{entryLinearMap}_{R,\alpha}(i,j)$ to its additive structure coincides with the additive homomorphism $\text{entryAddHom}_{\alpha}(i,j)$ that extracts the $(i,j)$-th entry of a matrix.
Equiv.mapMatrix definition
(f : α ≃ β) : Matrix m n α ≃ Matrix m n β
Full source
/-- The `Equiv` between spaces of matrices induced by an `Equiv` between their
coefficients. This is `Matrix.map` as an `Equiv`. -/
@[simps apply]
def mapMatrix (f : α ≃ β) : Matrix m n α ≃ Matrix m n β where
  toFun M := M.map f
  invFun M := M.map f.symm
  left_inv _ := Matrix.ext fun _ _ => f.symm_apply_apply _
  right_inv _ := Matrix.ext fun _ _ => f.apply_symm_apply _
Matrix equivalence induced by coefficient bijection
Informal description
Given a bijection $f : \alpha \simeq \beta$ between types $\alpha$ and $\beta$, the function $\text{mapMatrix}\, f$ is an equivalence between the spaces of matrices $\text{Matrix}\, m\, n\, \alpha$ and $\text{Matrix}\, m\, n\, \beta$. Specifically, it maps a matrix $M$ to the matrix obtained by applying $f$ to each entry of $M$, and its inverse applies $f^{-1}$ to each entry.
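An illustrative check, not from the source, that `mapMatrix` acts entrywise; here `f : ℕ ≃ ℤ` is an arbitrary bijection taken as a hypothesis (assumes `import Mathlib`; the equality is definitional).

import Mathlib

example (f : ℕ ≃ ℤ) (A : Matrix (Fin 2) (Fin 2) ℕ) (i j : Fin 2) :
    f.mapMatrix A i j = f (A i j) := rfl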
Equiv.mapMatrix_refl theorem
: (Equiv.refl α).mapMatrix = Equiv.refl (Matrix m n α)
Full source
@[simp]
theorem mapMatrix_refl : (Equiv.refl α).mapMatrix = Equiv.refl (Matrix m n α) :=
  rfl
Identity Matrix Mapping via Identity Equivalence
Informal description
The matrix mapping induced by the identity equivalence on a type $\alpha$ is equal to the identity equivalence on the space of matrices $\text{Matrix}\, m\, n\, \alpha$. In other words, applying the identity map entry-wise to a matrix yields the same matrix.
Equiv.mapMatrix_symm theorem
(f : α ≃ β) : f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m n β ≃ _)
Full source
@[simp]
theorem mapMatrix_symm (f : α ≃ β) : f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m n β ≃ _) :=
  rfl
Inverse of Matrix Equivalence Induced by Bijection
Informal description
Given a bijection $f : \alpha \simeq \beta$, the inverse of the matrix equivalence induced by $f$ is equal to the matrix equivalence induced by the inverse bijection $f^{-1} : \beta \simeq \alpha$. That is, $(f.\text{mapMatrix})^{-1} = f^{-1}.\text{mapMatrix}$ as equivalences between $\text{Matrix}\, m\, n\, \beta$ and $\text{Matrix}\, m\, n\, \alpha$.
Equiv.mapMatrix_trans theorem
(f : α ≃ β) (g : β ≃ γ) : f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m n α ≃ _)
Full source
@[simp]
theorem mapMatrix_trans (f : α ≃ β) (g : β ≃ γ) :
    f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m n α ≃ _) :=
  rfl
Composition of Matrix Equivalences via Entry-wise Bijections
Informal description
Given two bijections $f : \alpha \simeq \beta$ and $g : \beta \simeq \gamma$, composing the induced matrix equivalences (first $f.\text{mapMatrix}$, then $g.\text{mapMatrix}$) yields the matrix equivalence induced by the composite bijection $f.\text{trans}\, g$. That is, for any matrix $M \in \text{Matrix}\, m\, n\, \alpha$, we have $(g.\text{mapMatrix} \circ f.\text{mapMatrix})(M) = ((f.\text{trans}\, g).\text{mapMatrix})(M)$.
AddMonoidHom.mapMatrix definition
(f : α →+ β) : Matrix m n α →+ Matrix m n β
Full source
/-- The `AddMonoidHom` between spaces of matrices induced by an `AddMonoidHom` between their
coefficients. This is `Matrix.map` as an `AddMonoidHom`. -/
@[simps]
def mapMatrix (f : α →+ β) : Matrix m n α →+ Matrix m n β where
  toFun M := M.map f
  map_zero' := Matrix.map_zero f f.map_zero
  map_add' := Matrix.map_add f f.map_add
Matrix entry-wise map as additive monoid homomorphism
Informal description
Given an additive monoid homomorphism $f : \alpha \to \beta$, the function $\text{mapMatrix}\, f$ maps a matrix $M$ over $\alpha$ to a matrix over $\beta$ by applying $f$ to each entry of $M$. This function preserves the zero matrix and matrix addition, making it an additive monoid homomorphism between matrix spaces. More precisely: 1. $\text{mapMatrix}\, f\, 0 = 0$ (preservation of zero matrix) 2. $\text{mapMatrix}\, f\, (A + B) = \text{mapMatrix}\, f\, A + \text{mapMatrix}\, f\, B$ for all matrices $A, B$ (preservation of addition)
AddMonoidHom.mapMatrix_id theorem
: (AddMonoidHom.id α).mapMatrix = AddMonoidHom.id (Matrix m n α)
Full source
@[simp]
theorem mapMatrix_id : (AddMonoidHom.id α).mapMatrix = AddMonoidHom.id (Matrix m n α) :=
  rfl
Identity Matrix Mapping Preserves Identity Homomorphism
Informal description
For any type $\alpha$ with an additive zero class structure, the matrix mapping induced by the identity additive monoid homomorphism on $\alpha$ is equal to the identity additive monoid homomorphism on the space of matrices $\text{Matrix}\, m\, n\, \alpha$.
AddMonoidHom.mapMatrix_comp theorem
(f : β →+ γ) (g : α →+ β) : f.mapMatrix.comp g.mapMatrix = ((f.comp g).mapMatrix : Matrix m n α →+ _)
Full source
@[simp]
theorem mapMatrix_comp (f : β →+ γ) (g : α →+ β) :
    f.mapMatrix.comp g.mapMatrix = ((f.comp g).mapMatrix : Matrix m n α →+ _) :=
  rfl
Composition of Matrix Maps Preserves Additive Monoid Homomorphisms
Informal description
Let $f : \beta \to \gamma$ and $g : \alpha \to \beta$ be additive monoid homomorphisms. Then the composition of the matrix maps induced by $f$ and $g$ is equal to the matrix map induced by the composition $f \circ g$. More precisely, for any matrix $A \in \text{Matrix}\, m\, n\, \alpha$, we have: $$(f.\text{mapMatrix} \circ g.\text{mapMatrix})(A) = (f \circ g).\text{mapMatrix}(A)$$
AddMonoidHom.entryAddMonoidHom_comp_mapMatrix theorem
(f : α →+ β) (i : m) (j : n) : (entryAddMonoidHom β i j).comp f.mapMatrix = f.comp (entryAddMonoidHom α i j)
Full source
@[simp] lemma entryAddMonoidHom_comp_mapMatrix (f : α →+ β) (i : m) (j : n) :
    (entryAddMonoidHom β i j).comp f.mapMatrix = f.comp (entryAddMonoidHom α i j) := rfl
Commutativity of Entry Extraction with Matrix Mapping for Additive Monoid Homomorphisms
Informal description
Let $f : \alpha \to \beta$ be an additive monoid homomorphism between additive monoids $\alpha$ and $\beta$. For any matrix indices $i \in m$ and $j \in n$, the composition of the entry extraction homomorphism $\text{entryAddMonoidHom}_{\beta}\, i\, j$ (which extracts the $(i,j)$-th entry from a matrix over $\beta$) with the matrix mapping homomorphism $\text{mapMatrix}\, f$ (which applies $f$ entrywise to a matrix over $\alpha$) is equal to the composition of $f$ with the entry extraction homomorphism $\text{entryAddMonoidHom}_{\alpha}\, i\, j$ (which extracts the $(i,j)$-th entry from a matrix over $\alpha$). In other words, the following diagram commutes for all $i,j$: $$ \begin{CD} \text{Matrix}\, m\, n\, \alpha @>{\text{mapMatrix}\, f}>> \text{Matrix}\, m\, n\, \beta \\ @V{\text{entryAddMonoidHom}_{\alpha}\, i\, j}VV @VV{\text{entryAddMonoidHom}_{\beta}\, i\, j}V \\ \alpha @>{f}>> \beta \end{CD} $$
AddEquiv.mapMatrix definition
(f : α ≃+ β) : Matrix m n α ≃+ Matrix m n β
Full source
/-- The `AddEquiv` between spaces of matrices induced by an `AddEquiv` between their
coefficients. This is `Matrix.map` as an `AddEquiv`. -/
@[simps apply]
def mapMatrix (f : α ≃+ β) : Matrix m n α ≃+ Matrix m n β :=
  { f.toEquiv.mapMatrix with
    toFun := fun M => M.map f
    invFun := fun M => M.map f.symm
    map_add' := Matrix.map_add f (map_add f) }
Matrix equivalence induced by additive coefficient bijection
Informal description
Given an additive equivalence (additive isomorphism) $f : \alpha \simeq \beta$ between types $\alpha$ and $\beta$ equipped with addition operations, the function $\text{mapMatrix}\, f$ is an additive equivalence between the spaces of matrices $\text{Matrix}\, m\, n\, \alpha$ and $\text{Matrix}\, m\, n\, \beta$. Specifically, it maps a matrix $M$ to the matrix obtained by applying $f$ to each entry of $M$, and its inverse applies $f^{-1}$ to each entry. The operation preserves matrix addition, meaning that for any two matrices $A$ and $B$, we have $\text{mapMatrix}\, f (A + B) = \text{mapMatrix}\, f (A) + \text{mapMatrix}\, f (B)$.
AddEquiv.mapMatrix_refl theorem
: (AddEquiv.refl α).mapMatrix = AddEquiv.refl (Matrix m n α)
Full source
@[simp]
theorem mapMatrix_refl : (AddEquiv.refl α).mapMatrix = AddEquiv.refl (Matrix m n α) :=
  rfl
Matrix Mapping Preserves Additive Identity Equivalence
Informal description
The matrix mapping of the additive identity equivalence $\text{refl}_\alpha$ is equal to the additive identity equivalence on the space of matrices $\text{Matrix}\, m\, n\, \alpha$. That is, applying the identity map entrywise to a matrix yields the same matrix.
AddEquiv.mapMatrix_symm theorem
(f : α ≃+ β) : f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m n β ≃+ _)
Full source
@[simp]
theorem mapMatrix_symm (f : α ≃+ β) : f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m n β ≃+ _) :=
  rfl
Inverse of Matrix Mapping Equivalence via Additive Isomorphism
Informal description
For any additive equivalence (additive isomorphism) $f : \alpha \simeq \beta$ between types $\alpha$ and $\beta$ with addition operations, the inverse of the matrix mapping equivalence $\text{mapMatrix}\, f$ is equal to the matrix mapping equivalence of the inverse $\text{mapMatrix}\, f^{-1}$. That is, $(\text{mapMatrix}\, f)^{-1} = \text{mapMatrix}\, f^{-1}$ as additive equivalences between $\text{Matrix}\, m\, n\, \beta$ and $\text{Matrix}\, m\, n\, \alpha$.
AddEquiv.mapMatrix_trans theorem
(f : α ≃+ β) (g : β ≃+ γ) : f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m n α ≃+ _)
Full source
@[simp]
theorem mapMatrix_trans (f : α ≃+ β) (g : β ≃+ γ) :
    f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m n α ≃+ _) :=
  rfl
Composition of Matrix Equivalences via Additive Isomorphisms
Informal description
Given additive equivalences (additive isomorphisms) $f : \alpha \simeq \beta$ and $g : \beta \simeq \gamma$, composing the induced matrix equivalences (first $\text{mapMatrix}\, f$, then $\text{mapMatrix}\, g$) yields the matrix equivalence induced by the composite $f.\text{trans}\, g$. That is, for any matrix $M \in \text{Matrix}\, m\, n\, \alpha$, we have: $$(\text{mapMatrix}\, g \circ \text{mapMatrix}\, f)(M) = \text{mapMatrix}\, (f.\text{trans}\, g)(M)$$
AddEquiv.entryAddHom_comp_mapMatrix theorem
(f : α ≃+ β) (i : m) (j : n) : (entryAddHom β i j).comp (AddHomClass.toAddHom f.mapMatrix) = (f : AddHom α β).comp (entryAddHom _ i j)
Full source
@[simp] lemma entryAddHom_comp_mapMatrix (f : α ≃+ β) (i : m) (j : n) :
    (entryAddHom β i j).comp (AddHomClass.toAddHom f.mapMatrix) =
      (f : AddHom α β).comp (entryAddHom _ i j) := rfl
Commutativity of Entry Extraction with Matrix Map Under Additive Equivalence
Informal description
For any additive equivalence (additive isomorphism) $f : \alpha \simeq \beta$ between types $\alpha$ and $\beta$ with addition operations, and for any row index $i$ and column index $j$, the composition of the $(i,j)$-th entry extraction homomorphism on $\beta$-valued matrices with the additive homomorphism induced by $f$ on matrices is equal to the composition of $f$ with the $(i,j)$-th entry extraction homomorphism on $\alpha$-valued matrices. In other words, the following diagram commutes for all $i,j$: $$\begin{array}{ccc} \text{Matrix}\, m\, n\, \alpha & \xrightarrow{f.\text{mapMatrix}} & \text{Matrix}\, m\, n\, \beta \\ \downarrow \text{entryAddHom}\, \alpha\, i\, j & & \downarrow \text{entryAddHom}\, \beta\, i\, j \\ \alpha & \xrightarrow{f} & \beta \end{array}$$
LinearMap.mapMatrix definition
(f : α →ₗ[R] β) : Matrix m n α →ₗ[R] Matrix m n β
Full source
/-- The `LinearMap` between spaces of matrices induced by a `LinearMap` between their
coefficients. This is `Matrix.map` as a `LinearMap`. -/
@[simps]
def mapMatrix (f : α →ₗ[R] β) : Matrix m n α →ₗ[R] Matrix m n β where
  toFun M := M.map f
  map_add' := Matrix.map_add f f.map_add
  map_smul' r := Matrix.map_smul f r (f.map_smul r)
Entrywise linear map on matrices
Informal description
Given a linear map $f : \alpha \to_{R} \beta$ between modules over a semiring $R$, the linear map $\text{mapMatrix}\, f : \text{Matrix}\, m\, n\, \alpha \to_{R} \text{Matrix}\, m\, n\, \beta$ applies $f$ entrywise to each element of the input matrix. That is, for any matrix $M$, the $(i,j)$-th entry of $\text{mapMatrix}\, f\, M$ is $f(M_{ij})$.
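A minimal sketch, not part of the source, of the entrywise action; here `f` is an arbitrary `ℚ`-linear endomorphism of `ℚ` taken as a hypothesis (assumes `import Mathlib`; the equality is definitional).

import Mathlib

example (f : ℚ →ₗ[ℚ] ℚ) (A : Matrix (Fin 2) (Fin 2) ℚ) (i j : Fin 2) :
    f.mapMatrix A i j = f (A i j) := rfl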
LinearMap.mapMatrix_id theorem
: LinearMap.id.mapMatrix = (LinearMap.id : Matrix m n α →ₗ[R] _)
Full source
@[simp]
theorem mapMatrix_id : LinearMap.id.mapMatrix = (LinearMap.id : Matrix m n α →ₗ[R] _) :=
  rfl
Identity Linear Map Preserved Under Entrywise Application to Matrices
Informal description
The entrywise application of the identity linear map to a matrix is equal to the identity linear map on matrices. That is, for any matrix $A \in \mathrm{Matrix}\, m\, n\, \alpha$, we have $\mathrm{id.mapMatrix}\, A = \mathrm{id}\, A$.
LinearMap.mapMatrix_comp theorem
(f : β →ₗ[R] γ) (g : α →ₗ[R] β) : f.mapMatrix.comp g.mapMatrix = ((f.comp g).mapMatrix : Matrix m n α →ₗ[R] _)
Full source
@[simp]
theorem mapMatrix_comp (f : β →ₗ[R] γ) (g : α →ₗ[R] β) :
    f.mapMatrix.comp g.mapMatrix = ((f.comp g).mapMatrix : Matrix m n α →ₗ[R] _) :=
  rfl
Composition of Entrywise Linear Maps on Matrices
Informal description
Let $R$ be a semiring, and let $\alpha, \beta, \gamma$ be modules over $R$. For any linear maps $g : \alpha \to_{R} \beta$ and $f : \beta \to_{R} \gamma$, the composition of the entrywise matrix maps $f.\text{mapMatrix}$ and $g.\text{mapMatrix}$ is equal to the entrywise matrix map of the composition $f \circ g$. That is, \[ f.\text{mapMatrix} \circ g.\text{mapMatrix} = (f \circ g).\text{mapMatrix} \] as linear maps from $\text{Matrix}\, m\, n\, \alpha$ to $\text{Matrix}\, m\, n\, \gamma$.
LinearMap.entryLinearMap_comp_mapMatrix theorem
(f : α →ₗ[R] β) (i : m) (j : n) : entryLinearMap R _ i j ∘ₗ f.mapMatrix = f ∘ₗ entryLinearMap R _ i j
Full source
@[simp] lemma entryLinearMap_comp_mapMatrix (f : α →ₗ[R] β) (i : m) (j : n) :
    entryLinearMap R _ i j ∘ₗ f.mapMatrix = f ∘ₗ entryLinearMap R _ i j := rfl
Commutativity of Entrywise Matrix Map and Entry Extraction
Informal description
For any linear map $f : \alpha \to_{R} \beta$ between modules over a semiring $R$, and for any row index $i$ and column index $j$, the composition of the $(i,j)$-th entry extraction linear map with the entrywise matrix map of $f$ is equal to the composition of $f$ with the $(i,j)$-th entry extraction linear map. That is, for any matrix $M \in \mathrm{Matrix}\, m\, n\, \alpha$, we have: $$(\mathrm{entryLinearMap}_{R,\alpha}\, i\, j) \circ (\mathrm{mapMatrix}\, f) = f \circ (\mathrm{entryLinearMap}_{R,\alpha}\, i\, j)$$
LinearEquiv.mapMatrix definition
(f : α ≃ₗ[R] β) : Matrix m n α ≃ₗ[R] Matrix m n β
Full source
/-- The `LinearEquiv` between spaces of matrices induced by a `LinearEquiv` between their
coefficients. This is `Matrix.map` as a `LinearEquiv`. -/
@[simps apply]
def mapMatrix (f : α ≃ₗ[R] β) : Matrix m n α ≃ₗ[R] Matrix m n β :=
  { f.toEquiv.mapMatrix,
    f.toLinearMap.mapMatrix with
    toFun := fun M => M.map f
    invFun := fun M => M.map f.symm }
Entrywise linear equivalence on matrices
Informal description
Given a linear equivalence \( f : \alpha \simeq_{R} \beta \) between modules over a semiring \( R \), the linear equivalence \( \text{mapMatrix}\, f : \text{Matrix}\, m\, n\, \alpha \simeq_{R} \text{Matrix}\, m\, n\, \beta \) applies \( f \) entrywise to each element of the input matrix. The inverse operation applies the inverse linear equivalence \( f^{-1} \) entrywise.
LinearEquiv.mapMatrix_refl theorem
: (LinearEquiv.refl R α).mapMatrix = LinearEquiv.refl R (Matrix m n α)
Full source
@[simp]
theorem mapMatrix_refl : (LinearEquiv.refl R α).mapMatrix = LinearEquiv.refl R (Matrix m n α) :=
  rfl
Identity Matrix Map Equals Identity on Matrices
Informal description
The entrywise matrix map of the identity linear equivalence on $\alpha$ is equal to the identity linear equivalence on the space of matrices $\mathrm{Matrix}\, m\, n\, \alpha$.
LinearEquiv.mapMatrix_symm theorem
(f : α ≃ₗ[R] β) : f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m n β ≃ₗ[R] _)
Full source
@[simp]
theorem mapMatrix_symm (f : α ≃ₗ[R] β) :
    f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m n β ≃ₗ[R] _) :=
  rfl
Inverse of Entrywise Matrix Map Equals Entrywise Map of Inverse
Informal description
Given a linear equivalence $f : \alpha \simeq_{R} \beta$ between modules over a semiring $R$, the inverse of the entrywise matrix map $f_{\text{mapMatrix}}$ is equal to the entrywise matrix map of the inverse linear equivalence $f^{-1}$. That is, $(f_{\text{mapMatrix}})^{-1} = (f^{-1})_{\text{mapMatrix}}$.
LinearEquiv.mapMatrix_trans theorem
(f : α ≃ₗ[R] β) (g : β ≃ₗ[R] γ) : f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m n α ≃ₗ[R] _)
Full source
@[simp]
theorem mapMatrix_trans (f : α ≃ₗ[R] β) (g : β ≃ₗ[R] γ) :
    f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m n α ≃ₗ[R] _) :=
  rfl
Composition of Entrywise Matrix Maps Equals Matrix Map of Composition
Informal description
Let $R$ be a semiring, and let $\alpha$, $\beta$, and $\gamma$ be modules over $R$. Given linear equivalences $f : \alpha \simeq_{R} \beta$ and $g : \beta \simeq_{R} \gamma$, composing their entrywise matrix maps (first $f.\mathrm{mapMatrix}$, then $g.\mathrm{mapMatrix}$) yields the entrywise matrix map of the composite $f.\mathrm{trans}\, g$, i.e., \[ g.\mathrm{mapMatrix} \circ f.\mathrm{mapMatrix} = (f.\mathrm{trans}\, g).\mathrm{mapMatrix}. \]
LinearEquiv.mapMatrix_toLinearMap theorem
(f : α ≃ₗ[R] β) : (f.mapMatrix : _ ≃ₗ[R] Matrix m n β).toLinearMap = f.toLinearMap.mapMatrix
Full source
@[simp] lemma mapMatrix_toLinearMap (f : α ≃ₗ[R] β) :
    (f.mapMatrix : _ ≃ₗ[R] Matrix m n β).toLinearMap = f.toLinearMap.mapMatrix := by
  rfl
Equality of Linear Maps from Matrix Linear Equivalence
Informal description
Given a linear equivalence $f : \alpha \simeq_{R} \beta$ between modules over a semiring $R$, the linear map obtained from the entrywise linear equivalence $f.\text{mapMatrix}$ on matrices is equal to the entrywise linear map $f.\text{toLinearMap}.\text{mapMatrix}$.
LinearEquiv.entryLinearMap_comp_mapMatrix theorem
(f : α ≃ₗ[R] β) (i : m) (j : n) : entryLinearMap R _ i j ∘ₗ f.mapMatrix.toLinearMap = f.toLinearMap ∘ₗ entryLinearMap R _ i j
Full source
@[simp] lemma entryLinearMap_comp_mapMatrix (f : α ≃ₗ[R] β) (i : m) (j : n) :
    entryLinearMap R _ i j ∘ₗ f.mapMatrix.toLinearMap =
      f.toLinearMap ∘ₗ entryLinearMap R _ i j := by
  simp only [mapMatrix_toLinearMap, LinearMap.entryLinearMap_comp_mapMatrix]
Commutativity of Entry Extraction with Matrix Linear Equivalence
Informal description
For any linear equivalence $f : \alpha \simeq_{R} \beta$ between modules over a semiring $R$, and for any row index $i$ and column index $j$, the composition of the $(i,j)$-th entry extraction linear map with the linear map induced by $f$ on matrices is equal to the composition of $f$ with the $(i,j)$-th entry extraction linear map. In symbols: \[ \text{entryLinearMap}_{R, \beta}\, i\, j \circ f_{\text{matrix}} = f \circ \text{entryLinearMap}_{R, \alpha}\, i\, j \] where $f_{\text{matrix}}$ denotes the entrywise application of $f$ to matrices.
RingHom.mapMatrix definition
(f : α →+* β) : Matrix m m α →+* Matrix m m β
Full source
/-- The `RingHom` between spaces of square matrices induced by a `RingHom` between their
coefficients. This is `Matrix.map` as a `RingHom`. -/
@[simps]
def mapMatrix (f : α →+* β) : Matrix m m α →+* Matrix m m β :=
  { f.toAddMonoidHom.mapMatrix with
    toFun := fun M => M.map f
    map_one' := by simp
    map_mul' := fun _ _ => Matrix.map_mul }
Ring homomorphism induced on matrix rings by coefficient ring homomorphism
Informal description
Given a ring homomorphism $f : \alpha \to \beta$, the function $\text{mapMatrix}\, f$ maps a square matrix $M$ over $\alpha$ to a square matrix over $\beta$ by applying $f$ to each entry of $M$. This function preserves the identity matrix and matrix multiplication, making it a ring homomorphism between matrix rings. More precisely: 1. $\text{mapMatrix}\, f\, I = I$ (preservation of identity matrix) 2. $\text{mapMatrix}\, f\, (A \cdot B) = (\text{mapMatrix}\, f\, A) \cdot (\text{mapMatrix}\, f\, B)$ for all matrices $A, B$ (preservation of multiplication)
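An illustrative example, not from the source, applying the integer-to-rational cast `Int.castRingHom ℚ` entrywise; multiplicativity follows because `mapMatrix` is again a ring homomorphism (assumes `import Mathlib`).

import Mathlib

example (A B : Matrix (Fin 2) (Fin 2) ℤ) :
    (Int.castRingHom ℚ).mapMatrix (A * B) =
      (Int.castRingHom ℚ).mapMatrix A * (Int.castRingHom ℚ).mapMatrix B :=
  map_mul ((Int.castRingHom ℚ).mapMatrix) A B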
RingHom.mapMatrix_id theorem
: (RingHom.id α).mapMatrix = RingHom.id (Matrix m m α)
Full source
@[simp]
theorem mapMatrix_id : (RingHom.id α).mapMatrix = RingHom.id (Matrix m m α) :=
  rfl
Identity Ring Homomorphism Preserves Matrix Mapping Identity
Informal description
The matrix mapping induced by the identity ring homomorphism on a ring $\alpha$ is equal to the identity ring homomorphism on the matrix ring $\text{Matrix}\,m\,m\,\alpha$. That is, $\text{mapMatrix}\,(\text{id}_\alpha) = \text{id}_{\text{Matrix}\,m\,m\,\alpha}$.
RingHom.mapMatrix_comp theorem
(f : β →+* γ) (g : α →+* β) : f.mapMatrix.comp g.mapMatrix = ((f.comp g).mapMatrix : Matrix m m α →+* _)
Full source
@[simp]
theorem mapMatrix_comp (f : β →+* γ) (g : α →+* β) :
    f.mapMatrix.comp g.mapMatrix = ((f.comp g).mapMatrix : Matrix m m α →+* _) :=
  rfl
Composition of Matrix Ring Homomorphisms via Coefficient Ring Homomorphisms
Informal description
Let $f : \beta \to \gamma$ and $g : \alpha \to \beta$ be ring homomorphisms. Then the composition of the induced matrix ring homomorphisms $f.\text{mapMatrix} \circ g.\text{mapMatrix}$ is equal to the matrix ring homomorphism induced by the composition $f \circ g$, i.e., $$ f.\text{mapMatrix} \circ g.\text{mapMatrix} = (f \circ g).\text{mapMatrix} $$ as ring homomorphisms from the matrix ring over $\alpha$ to the matrix ring over $\gamma$.
RingEquiv.mapMatrix definition
(f : α ≃+* β) : Matrix m m α ≃+* Matrix m m β
Full source
/-- The `RingEquiv` between spaces of square matrices induced by a `RingEquiv` between their
coefficients. This is `Matrix.map` as a `RingEquiv`. -/
@[simps apply]
def mapMatrix (f : α ≃+* β) : Matrix m m α ≃+* Matrix m m β :=
  { f.toRingHom.mapMatrix,
    f.toAddEquiv.mapMatrix with
    toFun := fun M => M.map f
    invFun := fun M => M.map f.symm }
Ring isomorphism induced by coefficient ring isomorphism on matrix rings
Informal description
Given a ring isomorphism $f : \alpha \simeq \beta$ between rings $\alpha$ and $\beta$, the function $\text{mapMatrix}\, f$ is a ring isomorphism between the matrix rings $\text{Matrix}\, m\, m\, \alpha$ and $\text{Matrix}\, m\, m\, \beta$. Specifically, it maps a square matrix $M$ over $\alpha$ to a square matrix over $\beta$ by applying $f$ to each entry of $M$, and its inverse applies $f^{-1}$ to each entry. This isomorphism preserves both the additive and multiplicative structures of the matrix rings. More precisely: 1. $\text{mapMatrix}\, f$ is bijective, with inverse $\text{mapMatrix}\, f^{-1}$. 2. $\text{mapMatrix}\, f\, (A + B) = \text{mapMatrix}\, f\, A + \text{mapMatrix}\, f\, B$ for all matrices $A, B$ (preservation of addition). 3. $\text{mapMatrix}\, f\, (A \cdot B) = (\text{mapMatrix}\, f\, A) \cdot (\text{mapMatrix}\, f\, B)$ for all matrices $A, B$ (preservation of multiplication). 4. $\text{mapMatrix}\, f\, I = I$ (preservation of identity matrix).
RingEquiv.mapMatrix_refl theorem
: (RingEquiv.refl α).mapMatrix = RingEquiv.refl (Matrix m m α)
Full source
@[simp]
theorem mapMatrix_refl : (RingEquiv.refl α).mapMatrix = RingEquiv.refl (Matrix m m α) :=
  rfl
Identity Ring Isomorphism Induces Identity Matrix Ring Isomorphism
Informal description
The matrix ring isomorphism induced by the identity ring isomorphism $\text{id} : \alpha \simeq \alpha$ is equal to the identity isomorphism on the matrix ring $\text{Matrix}\, m\, m\, \alpha$.
RingEquiv.mapMatrix_symm theorem
(f : α ≃+* β) : f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m m β ≃+* _)
Full source
@[simp]
theorem mapMatrix_symm (f : α ≃+* β) : f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m m β ≃+* _) :=
  rfl
Inverse of Matrix Ring Isomorphism Induced by Ring Isomorphism
Informal description
Given a ring isomorphism $f : \alpha \simeq \beta$, the inverse of the induced matrix ring isomorphism $\text{mapMatrix}\, f$ is equal to the matrix ring isomorphism induced by the inverse $f^{-1} : \beta \simeq \alpha$. That is, $(\text{mapMatrix}\, f)^{-1} = \text{mapMatrix}\, f^{-1}$.
RingEquiv.mapMatrix_trans theorem
(f : α ≃+* β) (g : β ≃+* γ) : f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m m α ≃+* _)
Full source
@[simp]
theorem mapMatrix_trans (f : α ≃+* β) (g : β ≃+* γ) :
    f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m m α ≃+* _) :=
  rfl
Composition of Matrix Ring Isomorphisms via Coefficient Ring Isomorphisms
Informal description
Let $f : \alpha \simeq \beta$ and $g : \beta \simeq \gamma$ be ring isomorphisms. Composing the induced matrix ring isomorphisms $f.\text{mapMatrix} : \text{Matrix}\, m\, m\, \alpha \simeq \text{Matrix}\, m\, m\, \beta$ and $g.\text{mapMatrix} : \text{Matrix}\, m\, m\, \beta \simeq \text{Matrix}\, m\, m\, \gamma$ (first $f$, then $g$) gives the matrix ring isomorphism induced by $f.\text{trans}\, g : \alpha \simeq \gamma$. In composition notation: $$ g.\text{mapMatrix} \circ f.\text{mapMatrix} = (g \circ f).\text{mapMatrix} $$
RingEquiv.mopMatrix definition
: Matrix m m αᵐᵒᵖ ≃+* (Matrix m m α)ᵐᵒᵖ
Full source
/--
For any ring `R`, we have ring isomorphism `Matₙₓₙ(Rᵒᵖ) ≅ (Matₙₓₙ(R))ᵒᵖ` given by transpose.
-/
@[simps apply symm_apply]
def mopMatrix : Matrix m m αᵐᵒᵖ ≃+* (Matrix m m α)ᵐᵒᵖ where
  toFun M := op (M.transpose.map unop)
  invFun M := M.unop.transpose.map op
  left_inv _ := by aesop
  right_inv _ := by aesop
  map_mul' _ _ := unop_injective <| by ext; simp [transpose, mul_apply]
  map_add' _ _ := by aesop
Ring isomorphism between $\text{Mat}_n(R^\text{op})$ and $(\text{Mat}_n(R))^\text{op}$ via transpose
Informal description
For any ring $R$, there is a ring isomorphism between the ring of $n \times n$ matrices over the opposite ring $R^\text{op}$ and the opposite ring of $n \times n$ matrices over $R$. This isomorphism is given by transposing the matrix and applying the canonical projection $\text{unop} : R^\text{op} \to R$ (or $\text{op} : R \to R^\text{op}$ for the inverse map). More precisely: 1. The forward map takes a matrix $M \in \text{Mat}_n(R^\text{op})$, transposes it to get $M^\top$, then applies $\text{unop}$ to each entry to obtain a matrix in $\text{Mat}_n(R)$, and finally takes the opposite of this matrix. 2. The inverse map reverses this process: it takes a matrix $M \in (\text{Mat}_n(R))^\text{op}$, applies $\text{unop}$ to get a matrix in $\text{Mat}_n(R)$, transposes it, and applies $\text{op}$ to each entry to obtain a matrix in $\text{Mat}_n(R^\text{op})$. This isomorphism preserves both the additive and multiplicative structures of the matrix rings.
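A heuristic for why the transpose appears (an informal sketch, not part of the source; the entrywise $\text{op}$/$\text{unop}$ is suppressed here, and it is exactly what absorbs non-commutativity of the entries in general): writing $\varphi(M) = \text{op}(M^\top)$, we have
$$ \varphi(M)\,\varphi(N) = \text{op}(M^\top)\,\text{op}(N^\top) = \text{op}\big(N^\top M^\top\big) = \text{op}\big((M N)^\top\big) = \varphi(M N), $$
using that multiplication in the opposite ring reverses the order of the factors and that transposition reverses the order of a matrix product.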
AlgHom.mapMatrix definition
(f : α →ₐ[R] β) : Matrix m m α →ₐ[R] Matrix m m β
Full source
/-- The `AlgHom` between spaces of square matrices induced by an `AlgHom` between their
coefficients. This is `Matrix.map` as an `AlgHom`. -/
@[simps]
def mapMatrix (f : α →ₐ[R] β) : Matrix m m α →ₐ[R] Matrix m m β :=
  { f.toRingHom.mapMatrix with
    toFun := fun M => M.map f
    commutes' := fun r => Matrix.map_algebraMap r f (map_zero _) (f.commutes r) }
Algebra homomorphism induced on matrix algebras by coefficient algebra homomorphism
Informal description
Given an algebra homomorphism $f : \alpha \to \beta$ over a commutative semiring $R$, the function $\text{mapMatrix}\, f$ maps a square matrix $M$ over $\alpha$ to a square matrix over $\beta$ by applying $f$ to each entry of $M$. This function preserves the algebra structure, including matrix addition, multiplication, scalar multiplication, and the identity matrix. More precisely: 1. $\text{mapMatrix}\, f\, (A + B) = \text{mapMatrix}\, f\, A + \text{mapMatrix}\, f\, B$ for all matrices $A, B$ (additivity) 2. $\text{mapMatrix}\, f\, (A \cdot B) = (\text{mapMatrix}\, f\, A) \cdot (\text{mapMatrix}\, f\, B)$ for all matrices $A, B$ (multiplicativity) 3. $\text{mapMatrix}\, f\, (r \cdot A) = r \cdot (\text{mapMatrix}\, f\, A)$ for all $r \in R$ and matrices $A$ (scalar multiplication preservation) 4. $\text{mapMatrix}\, f\, I = I$ (preservation of identity matrix)
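For example (a minimal sketch, not part of the source; assuming `import Mathlib`), compatibility with the structure maps is available through the usual `AlgHom` API:

import Mathlib

-- `f.mapMatrix` commutes with the algebra structure maps, as every `AlgHom` does.
example {R m α β : Type*} [CommSemiring R] [Semiring α] [Semiring β]
    [Algebra R α] [Algebra R β] [Fintype m] [DecidableEq m]
    (f : α →ₐ[R] β) (r : R) :
    f.mapMatrix (algebraMap R (Matrix m m α) r) = algebraMap R (Matrix m m β) r :=
  f.mapMatrix.commutes r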
AlgHom.mapMatrix_id theorem
: (AlgHom.id R α).mapMatrix = AlgHom.id R (Matrix m m α)
Full source
@[simp]
theorem mapMatrix_id : (AlgHom.id R α).mapMatrix = AlgHom.id R (Matrix m m α) :=
  rfl
Identity Preservation under Matrix Mapping of Algebra Homomorphisms
Informal description
For any commutative semiring $R$ and $R$-algebra $\alpha$, the matrix mapping of the identity algebra homomorphism $\text{id} : \alpha \to \alpha$ is equal to the identity algebra homomorphism on the algebra of square matrices over $\alpha$. In other words, applying the identity homomorphism entrywise to any square matrix $M$ over $\alpha$ yields the same matrix $M$, i.e., $\text{mapMatrix}(\text{id}) = \text{id}$ as $R$-algebra homomorphisms on $\text{Matrix}\,m\,m\,\alpha$.
AlgHom.mapMatrix_comp theorem
(f : β →ₐ[R] γ) (g : α →ₐ[R] β) : f.mapMatrix.comp g.mapMatrix = ((f.comp g).mapMatrix : Matrix m m α →ₐ[R] _)
Full source
@[simp]
theorem mapMatrix_comp (f : β →ₐ[R] γ) (g : α →ₐ[R] β) :
    f.mapMatrix.comp g.mapMatrix = ((f.comp g).mapMatrix : Matrix m m α →ₐ[R] _) :=
  rfl
Composition of Matrix Algebra Homomorphisms via Entrywise Composition
Informal description
Let $R$ be a commutative semiring, and let $\alpha, \beta, \gamma$ be $R$-algebras. Given algebra homomorphisms $g : \alpha \to \beta$ and $f : \beta \to \gamma$, the composition of the induced matrix algebra homomorphisms $f_{\text{mapMatrix}} \circ g_{\text{mapMatrix}}$ equals the matrix algebra homomorphism induced by the composition $f \circ g$. That is, for any square matrix $M$ over $\alpha$, applying $g$ to each entry and then $f$ is equivalent to applying the composed homomorphism $f \circ g$ directly to each entry. In symbols: $$ f_{\text{mapMatrix}} \circ g_{\text{mapMatrix}} = (f \circ g)_{\text{mapMatrix}} $$
AlgEquiv.mapMatrix definition
(f : α ≃ₐ[R] β) : Matrix m m α ≃ₐ[R] Matrix m m β
Full source
/-- The `AlgEquiv` between spaces of square matrices induced by an `AlgEquiv` between their
coefficients. This is `Matrix.map` as an `AlgEquiv`. -/
@[simps apply]
def mapMatrix (f : α ≃ₐ[R] β) : Matrix m m α ≃ₐ[R] Matrix m m β :=
  { f.toAlgHom.mapMatrix,
    f.toRingEquiv.mapMatrix with
    toFun := fun M => M.map f
    invFun := fun M => M.map f.symm }
Algebra equivalence induced on matrix algebras by coefficient algebra equivalence
Informal description
Given an algebra equivalence $f : \alpha \simeq \beta$ over a commutative semiring $R$, the function $\text{mapMatrix}\, f$ is an algebra equivalence between the spaces of square matrices $\text{Matrix}\, m\, m\, \alpha$ and $\text{Matrix}\, m\, m\, \beta$. This equivalence maps a matrix $M$ over $\alpha$ to a matrix over $\beta$ by applying $f$ to each entry of $M$, and its inverse applies $f^{-1}$ to each entry. It preserves the algebra structure, including matrix addition, multiplication, scalar multiplication, and the identity matrix. More precisely: 1. $\text{mapMatrix}\, f$ is bijective, with inverse $\text{mapMatrix}\, f^{-1}$. 2. $\text{mapMatrix}\, f\, (A + B) = \text{mapMatrix}\, f\, A + \text{mapMatrix}\, f\, B$ for all matrices $A, B$ (preservation of addition). 3. $\text{mapMatrix}\, f\, (A \cdot B) = (\text{mapMatrix}\, f\, A) \cdot (\text{mapMatrix}\, f\, B)$ for all matrices $A, B$ (preservation of multiplication). 4. $\text{mapMatrix}\, f\, (r \cdot A) = r \cdot (\text{mapMatrix}\, f\, A)$ for all $r \in R$ and matrices $A$ (preservation of scalar multiplication). 5. $\text{mapMatrix}\, f\, I = I$ (preservation of identity matrix).
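As a small sketch (not part of the source; assuming `import Mathlib`), both directions of the equivalence act entrywise:

import Mathlib

-- The inverse equivalence maps entries by `f.symm`; this holds definitionally.
example {R m α β : Type*} [CommSemiring R] [Semiring α] [Semiring β]
    [Algebra R α] [Algebra R β] [Fintype m] [DecidableEq m]
    (f : α ≃ₐ[R] β) (N : Matrix m m β) :
    f.mapMatrix.symm N = N.map f.symm :=
  rfl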
AlgEquiv.mapMatrix_refl theorem
: AlgEquiv.refl.mapMatrix = (AlgEquiv.refl : Matrix m m α ≃ₐ[R] _)
Full source
@[simp]
theorem mapMatrix_refl : AlgEquiv.refl.mapMatrix = (AlgEquiv.refl : Matrix m m α ≃ₐ[R] _) :=
  rfl
Identity Algebra Equivalence Induces Identity Matrix Algebra Equivalence
Informal description
The matrix algebra equivalence induced by the identity algebra equivalence $\text{AlgEquiv.refl} : \alpha \simeq \alpha$ is equal to the identity algebra equivalence on the space of square matrices $\text{Matrix}\, m\, m\, \alpha$ over $R$.
AlgEquiv.mapMatrix_symm theorem
(f : α ≃ₐ[R] β) : f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m m β ≃ₐ[R] _)
Full source
@[simp]
theorem mapMatrix_symm (f : α ≃ₐ[R] β) :
    f.mapMatrix.symm = (f.symm.mapMatrix : Matrix m m β ≃ₐ[R] _) :=
  rfl
Inverse of Matrix Algebra Equivalence Induced by Coefficient Equivalence
Informal description
Given an algebra equivalence $f : \alpha \simeq \beta$ over a commutative semiring $R$, the inverse of the induced matrix algebra equivalence $\text{mapMatrix}\, f$ is equal to the matrix algebra equivalence induced by the inverse $f^{-1} : \beta \simeq \alpha$. That is, $(\text{mapMatrix}\, f)^{-1} = \text{mapMatrix}\, f^{-1}$ as algebra equivalences between $\text{Matrix}\, m\, m\, \beta$ and $\text{Matrix}\, m\, m\, \alpha$.
AlgEquiv.mapMatrix_trans theorem
(f : α ≃ₐ[R] β) (g : β ≃ₐ[R] γ) : f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m m α ≃ₐ[R] _)
Full source
@[simp]
theorem mapMatrix_trans (f : α ≃ₐ[R] β) (g : β ≃ₐ[R] γ) :
    f.mapMatrix.trans g.mapMatrix = ((f.trans g).mapMatrix : Matrix m m α ≃ₐ[R] _) :=
  rfl
Composition of Matrix Algebra Equivalences via Coefficient Algebra Equivalences
Informal description
Given algebra equivalences $f : \alpha \simeq \beta$ and $g : \beta \simeq \gamma$ over a commutative semiring $R$, composing the induced matrix algebra equivalences $f.\text{mapMatrix}$ and $g.\text{mapMatrix}$ (first $f$, then $g$) gives the matrix algebra equivalence induced by $f.\text{trans}\, g : \alpha \simeq \gamma$. That is, for square matrices indexed by a finite type $m$, in composition notation: $$ g.\text{mapMatrix} \circ f.\text{mapMatrix} = (g \circ f).\text{mapMatrix} $$
AlgEquiv.mopMatrix definition
: Matrix m m αᵐᵒᵖ ≃ₐ[R] (Matrix m m α)ᵐᵒᵖ
Full source
/-- For any algebra `α` over a ring `R`, we have an `R`-algebra isomorphism
`Matₙₓₙ(αᵒᵖ) ≅ (Matₙₓₙ(α))ᵒᵖ` given by transpose. If `α` is commutative,
we can get rid of the `ᵒᵖ` in the left-hand side, see `Matrix.transposeAlgEquiv`. -/
@[simps!] def mopMatrix : Matrix m m αᵐᵒᵖ ≃ₐ[R] (Matrix m m α)ᵐᵒᵖ where
  __ := RingEquiv.mopMatrix
  commutes' _ := MulOpposite.unop_injective <| by
    ext; simp [algebraMap_matrix_apply, eq_comm, apply_ite MulOpposite.unop]
Algebra isomorphism between $\text{Mat}_m(\alpha^\text{op})$ and $(\text{Mat}_m(\alpha))^\text{op}$ via transpose
Informal description
For any algebra $\alpha$ over a commutative semiring $R$, there is an $R$-algebra isomorphism between the algebra of $m \times m$ matrices over the opposite algebra $\alpha^\text{op}$ and the opposite algebra of $m \times m$ matrices over $\alpha$. This isomorphism is given by transposing the matrix and applying the canonical projection $\text{unop} : \alpha^\text{op} \to \alpha$ (or $\text{op} : \alpha \to \alpha^\text{op}$ for the inverse map). More precisely: 1. The forward map takes a matrix $M \in \text{Mat}_m(\alpha^\text{op})$, transposes it to get $M^\top$, then applies $\text{unop}$ to each entry to obtain a matrix in $\text{Mat}_m(\alpha)$, and finally takes the opposite of this matrix. 2. The inverse map reverses this process: it takes a matrix $M \in (\text{Mat}_m(\alpha))^\text{op}$, applies $\text{unop}$ to get a matrix in $\text{Mat}_m(\alpha)$, transposes it, and applies $\text{op}$ to each entry to obtain a matrix in $\text{Mat}_m(\alpha^\text{op})$. This isomorphism preserves both the additive and multiplicative structures of the matrix algebras, as well as the scalar multiplication by elements of $R$.
Matrix.transposeAddEquiv definition
[Add α] : Matrix m n α ≃+ Matrix n m α
Full source
/-- `Matrix.transpose` as an `AddEquiv` -/
@[simps apply]
def transposeAddEquiv [Add α] : Matrix m n α ≃+ Matrix n m α where
  toFun := transpose
  invFun := transpose
  left_inv := transpose_transpose
  right_inv := transpose_transpose
  map_add' := transpose_add
Matrix transpose as an additive equivalence
Informal description
The additive equivalence (additive isomorphism) between the space of matrices $\mathrm{Matrix}\, m\, n\, \alpha$ and $\mathrm{Matrix}\, n\, m\, \alpha$, where the operation is given by matrix transpose. This equivalence preserves addition, meaning that for any two matrices $A$ and $B$ in $\mathrm{Matrix}\, m\, n\, \alpha$, the transpose of their sum equals the sum of their transposes: $(A + B)^\top = A^\top + B^\top$.
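A short usage sketch (not from the source; assuming `import Mathlib` and `open Matrix` so the `ᵀ` notation and unqualified lemma names resolve):

import Mathlib

open Matrix

-- Applying the bundled equivalence is just the transpose ...
example {m n α : Type*} [Add α] (A : Matrix m n α) :
    transposeAddEquiv m n α A = Aᵀ :=
  rfl

-- ... and additivity is the underlying lemma `transpose_add`.
example {m n α : Type*} [Add α] (A B : Matrix m n α) :
    (A + B)ᵀ = Aᵀ + Bᵀ :=
  transpose_add A B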
Matrix.transposeAddEquiv_symm theorem
[Add α] : (transposeAddEquiv m n α).symm = transposeAddEquiv n m α
Full source
@[simp]
theorem transposeAddEquiv_symm [Add α] : (transposeAddEquiv m n α).symm = transposeAddEquiv n m α :=
  rfl
Inverse of Matrix Transpose Additive Equivalence
Informal description
For any type $\alpha$ equipped with an addition operation, the inverse of the additive equivalence given by matrix transpose from $\mathrm{Matrix}\, m\, n\, \alpha$ to $\mathrm{Matrix}\, n\, m\, \alpha$ is equal to the additive equivalence given by matrix transpose from $\mathrm{Matrix}\, n\, m\, \alpha$ to $\mathrm{Matrix}\, m\, n\, \alpha$. In other words, $(A^\top)^\top = A$ holds for any matrix $A$.
Matrix.transpose_list_sum theorem
[AddMonoid α] (l : List (Matrix m n α)) : l.sumᵀ = (l.map transpose).sum
Full source
theorem transpose_list_sum [AddMonoid α] (l : List (Matrix m n α)) :
    l.sumᵀ = (l.map transpose).sum :=
  map_list_sum (transposeAddEquiv m n α) l
Transpose of Sum of Matrices Equals Sum of Transposes for Lists
Informal description
For any additive monoid $\alpha$ and any list $l$ of matrices in $\mathrm{Matrix}\, m\, n\, \alpha$, the transpose of the sum of the matrices in $l$ is equal to the sum of the transposes of the matrices in $l$. In symbols: $$(\sum_{A \in l} A)^\top = \sum_{A \in l} A^\top.$$
Matrix.transpose_multiset_sum theorem
[AddCommMonoid α] (s : Multiset (Matrix m n α)) : s.sumᵀ = (s.map transpose).sum
Full source
theorem transpose_multiset_sum [AddCommMonoid α] (s : Multiset (Matrix m n α)) :
    s.sumᵀ = (s.map transpose).sum :=
  (transposeAddEquiv m n α).toAddMonoidHom.map_multiset_sum s
Transpose of Multiset Sum of Matrices Equals Sum of Transposes
Informal description
For any additive commutative monoid $\alpha$ and any multiset $s$ of matrices in $\mathrm{Matrix}\, m\, n\, \alpha$, the transpose of the sum of the matrices in $s$ is equal to the sum of the transposes of the matrices in $s$. That is, $(\sum_{M \in s} M)^\top = \sum_{M \in s} M^\top$.
Matrix.transpose_sum theorem
[AddCommMonoid α] {ι : Type*} (s : Finset ι) (M : ι → Matrix m n α) : (∑ i ∈ s, M i)ᵀ = ∑ i ∈ s, (M i)ᵀ
Full source
theorem transpose_sum [AddCommMonoid α] {ι : Type*} (s : Finset ι) (M : ι → Matrix m n α) :
    (∑ i ∈ s, M i)ᵀ = ∑ i ∈ s, (M i)ᵀ :=
  map_sum (transposeAddEquiv m n α) _ s
Transpose of Sum of Matrices Equals Sum of Transposes
Informal description
Let $\alpha$ be an additive commutative monoid, $\iota$ a type, and $s$ a finite subset of $\iota$. For a family of matrices $M : \iota \to \text{Matrix}\, m\, n\, \alpha$, the transpose of the sum of $M$ over $s$ is equal to the sum of the transposes of $M$ over $s$. That is, $$ \left( \sum_{i \in s} M_i \right)^\top = \sum_{i \in s} M_i^\top. $$
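For instance (a minimal sketch, not from the source; assuming `import Mathlib` and `open Matrix`):

import Mathlib

open Matrix

-- Transposing a finite sum over `Fin 3` termwise.
example {m n α : Type*} [AddCommMonoid α] (M : Fin 3 → Matrix m n α) :
    (∑ i, M i)ᵀ = ∑ i, (M i)ᵀ :=
  transpose_sum Finset.univ M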
Matrix.transposeLinearEquiv definition
[Semiring R] [AddCommMonoid α] [Module R α] : Matrix m n α ≃ₗ[R] Matrix n m α
Full source
/-- `Matrix.transpose` as a `LinearMap` -/
@[simps apply]
def transposeLinearEquiv [Semiring R] [AddCommMonoid α] [Module R α] :
    Matrix m n α ≃ₗ[R] Matrix n m α :=
  { transposeAddEquiv m n α with map_smul' := transpose_smul }
Matrix transpose as a linear equivalence
Informal description
For a semiring $R$ and an additive commutative monoid $\alpha$ equipped with a module structure over $R$, the linear equivalence between the space of matrices $\mathrm{Matrix}\, m\, n\, \alpha$ and $\mathrm{Matrix}\, n\, m\, \alpha$ is given by matrix transpose. This equivalence preserves both the module structure and addition, meaning that for any scalar $r \in R$ and matrices $A, B \in \mathrm{Matrix}\, m\, n\, \alpha$, we have $(r \cdot A)^\top = r \cdot A^\top$ and $(A + B)^\top = A^\top + B^\top$.
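A short sketch (not from the source; assuming `import Mathlib` and `open Matrix`): the extra compatibility beyond the additive equivalence is exactly `transpose_smul`.

import Mathlib

open Matrix

-- Transpose commutes with scalar multiplication, which is the `map_smul'` field above.
example {m n R α : Type*} [Semiring R] [AddCommMonoid α] [Module R α]
    (r : R) (A : Matrix m n α) :
    (r • A)ᵀ = r • Aᵀ :=
  transpose_smul r A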
Matrix.transposeLinearEquiv_symm theorem
[Semiring R] [AddCommMonoid α] [Module R α] : (transposeLinearEquiv m n R α).symm = transposeLinearEquiv n m R α
Full source
@[simp]
theorem transposeLinearEquiv_symm [Semiring R] [AddCommMonoid α] [Module R α] :
    (transposeLinearEquiv m n R α).symm = transposeLinearEquiv n m R α :=
  rfl
Inverse of Matrix Transpose Linear Equivalence is Transpose with Swapped Indices
Informal description
For a semiring $R$ and an additive commutative monoid $\alpha$ equipped with a module structure over $R$, the inverse of the linear equivalence given by matrix transpose from $\mathrm{Matrix}\, m\, n\, \alpha$ to $\mathrm{Matrix}\, n\, m\, \alpha$ is equal to the linear equivalence given by matrix transpose from $\mathrm{Matrix}\, n\, m\, \alpha$ to $\mathrm{Matrix}\, m\, n\, \alpha$. In other words, the inverse of the transpose linear equivalence is the transpose linear equivalence with row and column indices swapped.
Matrix.transposeRingEquiv definition
[AddCommMonoid α] [CommSemigroup α] [Fintype m] : Matrix m m α ≃+* (Matrix m m α)ᵐᵒᵖ
Full source
/-- `Matrix.transpose` as a `RingEquiv` to the opposite ring -/
@[simps]
def transposeRingEquiv [AddCommMonoid α] [CommSemigroup α] [Fintype m] :
    Matrix m m α ≃+* (Matrix m m α)ᵐᵒᵖ :=
  { (transposeAddEquiv m m α).trans MulOpposite.opAddEquiv with
    toFun := fun M => MulOpposite.op Mᵀ
    invFun := fun M => M.unopᵀ
    map_mul' := fun M N =>
      (congr_arg MulOpposite.op (transpose_mul M N)).trans (MulOpposite.op_mul _ _)
    left_inv := fun M => transpose_transpose M
    right_inv := fun M => MulOpposite.unop_injective <| transpose_transpose M.unop }
Ring equivalence between square matrices and their opposite via transpose
Informal description
Given a type $\alpha$ that is an additive commutative monoid and a commutative semigroup, and a finite type $m$, the matrix transpose operation defines a ring equivalence between the ring of square matrices $\text{Matrix}\, m\, m\, \alpha$ and its multiplicative opposite ring $(\text{Matrix}\, m\, m\, \alpha)^\text{op}$. This equivalence maps a matrix $M$ to the opposite of its transpose, $\text{op}(M^\top)$, and its inverse sends $\text{op}(N)$ back to $N^\top$. The equivalence preserves addition and multiplication, where multiplication in the opposite ring is defined by reversing the order of operands.
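The multiplicativity packaged here is the rectangular statement $(MN)^\top = N^\top M^\top$; a usage sketch (not from the source; assuming `import Mathlib` and `open Matrix`):

import Mathlib

open Matrix

-- Transpose reverses the order of a matrix product (the entries must commute).
example {l m n α : Type*} [AddCommMonoid α] [CommSemigroup α] [Fintype n]
    (M : Matrix m n α) (N : Matrix n l α) :
    (M * N)ᵀ = Nᵀ * Mᵀ :=
  transpose_mul M N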
Matrix.transpose_pow theorem
[CommSemiring α] [Fintype m] [DecidableEq m] (M : Matrix m m α) (k : ℕ) : (M ^ k)ᵀ = Mᵀ ^ k
Full source
@[simp]
theorem transpose_pow [CommSemiring α] [Fintype m] [DecidableEq m] (M : Matrix m m α) (k : ) :
    (M ^ k)ᵀ = Mᵀ ^ k :=
  MulOpposite.op_injective <| map_pow (transposeRingEquiv m α) M k
Transpose of Matrix Power Equals Power of Transpose: $(M^k)^\top = (M^\top)^k$
Informal description
Let $\alpha$ be a commutative semiring, $m$ a finite type with decidable equality, and $M$ a square matrix in $\mathrm{Matrix}\, m\, m\, \alpha$. For any natural number $k$, the transpose of $M^k$ equals the $k$-th power of the transpose of $M$, i.e., $(M^k)^\top = (M^\top)^k$.
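A concrete instance (not from the source; assuming `import Mathlib` and `open Matrix`):

import Mathlib

open Matrix

-- Transposing the cube of a 2 × 2 matrix over ℕ.
example (M : Matrix (Fin 2) (Fin 2) ℕ) : (M ^ 3)ᵀ = Mᵀ ^ 3 :=
  transpose_pow M 3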
Matrix.transpose_list_prod theorem
[CommSemiring α] [Fintype m] [DecidableEq m] (l : List (Matrix m m α)) : l.prodᵀ = (l.map transpose).reverse.prod
Full source
theorem transpose_list_prod [CommSemiring α] [Fintype m] [DecidableEq m] (l : List (Matrix m m α)) :
    l.prodᵀ = (l.map transpose).reverse.prod :=
  (transposeRingEquiv m α).unop_map_list_prod l
Transpose of Matrix Product Equals Reverse Product of Transposes
Informal description
Let $\alpha$ be a commutative semiring, $m$ a finite type with decidable equality, and $l$ a list of square matrices over $\alpha$ of size $m \times m$. The transpose of the product of matrices in $l$ is equal to the product of the transposes of the matrices in $l$ taken in reverse order. That is, $$(l_1 \cdots l_n)^\top = l_n^\top \cdots l_1^\top.$$
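Concretely, for a three-element list (an illustration only), the statement reads
$$ ([A, B, C].\mathrm{prod})^\top = (A B C)^\top = C^\top B^\top A^\top, $$
i.e. the product of the transposed list $[A^\top, B^\top, C^\top]$ taken in reverse order.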
Matrix.transposeAlgEquiv definition
[CommSemiring R] [CommSemiring α] [Fintype m] [DecidableEq m] [Algebra R α] : Matrix m m α ≃ₐ[R] (Matrix m m α)ᵐᵒᵖ
Full source
/-- `Matrix.transpose` as an `AlgEquiv` to the opposite ring -/
@[simps]
def transposeAlgEquiv [CommSemiring R] [CommSemiring α] [Fintype m] [DecidableEq m] [Algebra R α] :
    Matrix m m α ≃ₐ[R] (Matrix m m α)ᵐᵒᵖ :=
  { (transposeAddEquiv m m α).trans MulOpposite.opAddEquiv,
    transposeRingEquiv m α with
    toFun := fun M => MulOpposite.op Mᵀ
    commutes' := fun r => by
      simp only [algebraMap_eq_diagonal, diagonal_transpose, MulOpposite.algebraMap_apply] }
Algebra equivalence between square matrices and their opposite via transpose
Informal description
Given a commutative semiring $R$, a commutative semiring $\alpha$, a finite type $m$ with decidable equality, and an algebra structure $R \to \alpha$, the matrix transpose operation defines an algebra equivalence between the algebra of square matrices $\text{Matrix}\, m\, m\, \alpha$ and its multiplicative opposite algebra $(\text{Matrix}\, m\, m\, \alpha)^\text{op}$. This equivalence maps a matrix $M$ to the opposite of its transpose $\text{op}(M^\top)$, and preserves the algebra structure, meaning it commutes with scalar multiplication by elements of $R$.
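A usage sketch (not from the source; assuming `import Mathlib`, `open Matrix`, and that the explicit arguments follow the same `m R α` order as the other transpose equivalences above):

import Mathlib

open Matrix

-- Applying the algebra equivalence is the opposite of the transpose.
example {R α m : Type*} [CommSemiring R] [CommSemiring α] [Fintype m] [DecidableEq m]
    [Algebra R α] (M : Matrix m m α) :
    transposeAlgEquiv m R α M = MulOpposite.op Mᵀ :=
  rfl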