
For a long time, it has been on my list to write something about
Toeplitz matrices. I want to derive the fundamental theorems on the
asymptotic behavior of eigenvalues, inverses, and products of banded
Toeplitz matrices. To achieve this goal, I read the famous paper
*Toeplitz and Circulant Matrices: A Review* by Robert M. Gray
^{Gray_2005}. I am fond of patterned matrices like Toeplitz matrices,
imagining that there must be something special hidden in such square
matrices. While writing this post, Matlab was also used to verify some
of the mathematics.

## 1 Toeplitz Matrices

A Toeplitz matrix \(T\) is a square matrix of size \(n\times n\), \(T_{n} = [t_{k,j};\; k,j = 0,1,\ldots,n-1]\), whose entries satisfy \(t_{k,j} = t_{k-j}\); that is, each entry depends only on the difference of its indices, so every descending diagonal is constant.

In matrix form, a Toeplitz matrix can be expressed as:

\begin{equation}
\label{eq:1}
T_{n} =
\begin{pmatrix}
t_{0} & t_{-1} & t_{-2} & \ldots & t_{-(n-1)} \\
t_{1} & t_{0} & t_{-1} & \ldots & \vdots \\
t_{2} & t_{1} & t_{0} & \ldots & \vdots \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
t_{n-1} & \ldots & \ldots & \ldots & t_{0}
\end{pmatrix}
\end{equation}
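The defining rule \(t_{k,j} = t_{k-j}\) is easy to check numerically. The post mentions Matlab; the small numpy sketch below is an illustrative substitute that builds a Toeplitz matrix directly from a map of offsets to values (scipy's `toeplitz` helper offers the same construction from a first column and first row).

```python
import numpy as np

def toeplitz_matrix(t, n):
    """Build the n x n Toeplitz matrix T with T[k, j] = t[k - j].

    `t` maps offsets -(n-1)..(n-1) to scalars; a dict is used here
    purely for clarity.
    """
    return np.array([[t[k - j] for j in range(n)] for k in range(n)])

# Offsets t_{-2}, ..., t_{2} for a 3 x 3 example (values are arbitrary).
t = {-2: 5, -1: 4, 0: 1, 1: 2, 2: 3}
T = toeplitz_matrix(t, 3)
# Each descending diagonal of T is constant, since T[k, j] depends
# only on the difference k - j.
```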

Before we dive into the details of Toeplitz matrices, some key points about matrices in general are recapped, such as eigenvalues, norms, and asymptotically equivalent matrices.

## 2 Some Asymptotic Properties of Matrices

Most often, we can study the asymptotic behavior of complicated matrices by studying a more structured and simpler asymptotically equivalent matrix. This method is used in many fields: a toy model is studied comprehensively before a more complicated one comes in.

### 2.1 Eigenvalues

A scalar \(\alpha\) and a nonzero vector \(x\) satisfying \(Ax = \alpha x\) are called an eigenvalue and a corresponding eigenvector of \(A\). We can get the eigenvalues by solving

\begin{equation} \label{eq:2} \det(A-\alpha I) = 0 \end{equation}

For the sake of simplicity, we assume that the eigenvalues are ordered in nonincreasing order, i.e., \(\alpha_{1} \geq \alpha_{2} \geq \ldots\).

Any complex matrix \(A\) can be written as (the Schur decomposition):

\begin{equation} \label{eq:3} A = URU^{*} \end{equation}

where the asterisk \(*\) denotes conjugate transpose, \(U\) is unitary, i.e., \(U^{-1} = U^{*}\), and \(R\) is an upper triangular matrix. The eigenvalues of \(A\) are the principal diagonal elements of \(R\). If \(A\) is normal, i.e., if \(A^{*}A = AA^{*}\), then \(R\) is a diagonal matrix, which we denote as \(R = \mathrm{diag}(\alpha_{k})\). If \(A\) is Hermitian, i.e., if \(A^{*} = A\), then the eigenvalues are real. If a matrix is Hermitian, then it is also normal.
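The Hermitian special case can be checked numerically. In the sketch below (numpy, illustrative only), a Hermitian matrix is built from a random complex matrix; since it is normal, the decomposition reduces to \(A = U\,\mathrm{diag}(\alpha_{k})\,U^{*}\) with a unitary \(U\) and real eigenvalues, which is exactly what `eigh` returns.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T          # Hermitian by construction: A* = A

# For a Hermitian (hence normal) matrix, R is diagonal with real
# entries; eigh returns those real eigenvalues and the unitary U.
alpha, U = np.linalg.eigh(A)
reconstructed = U @ np.diag(alpha) @ U.conj().T
```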

For the case of Hermitian matrices, a useful description of the eigenvalues is the variational description given by the Courant-Fischer theorem.

Define the Rayleigh quotient of an Hermitian matrix \(H\) and a vector (complex \(n\)-tuple) \(x\) by

\begin{equation} \label{eq:4} R_{H}(x) = (x^{*}Hx)/ (x^{*}x) \end{equation}

Let \(\eta_{M}\) and \(\eta_{m}\) be the maximum and minimum eigenvalues of \(H\), respectively.

Then

\begin{eqnarray}
\label{eq:5}
\eta_{m} &=& \min_{x} R_{H}(x) = \min_{x:x^{*}x = 1} x^{*}Hx \\
\eta_{M} &=& \max_{x} R_{H}(x) = \max_{x:x^{*}x = 1} x^{*}Hx
\end{eqnarray}
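A quick numerical illustration of the variational description (numpy, for illustration): every Rayleigh quotient of a Hermitian matrix lies in \([\eta_{m}, \eta_{M}]\), and the extremes are attained at the corresponding eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
H = B + B.T                      # real symmetric, hence Hermitian

def rayleigh(H, x):
    """R_H(x) = (x* H x) / (x* x)."""
    return (np.conj(x) @ H @ x).real / (np.conj(x) @ x).real

vals, vecs = np.linalg.eigh(H)   # ascending eigenvalues, unit eigenvectors
eta_m, eta_M = vals[0], vals[-1]

# Sample many random vectors: every quotient lies in [eta_m, eta_M].
samples = [rayleigh(H, rng.standard_normal(5)) for _ in range(1000)]
```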

This corollary will be useful in specifying the interval containing the eigenvalues of an Hermitian matrix.

The following lemma is useful when studying non-Hermitian matrices and products of Hermitian matrices.

Let \(A\) be a matrix with eigenvalues \(\alpha_{k}\). Define the eigenvalues of the Hermitian matrix \(A^{*}A\) to be \(\lambda_{k}\). Then:

\begin{equation} \label{eq:6} \sum_{k=0}^{n-1} \lambda_{k} \geq \sum_{k=0}^{n-1}|\alpha_{k}|^{2} \end{equation}

with equality iff \(A\) is normal.

Proof. The trace of a matrix is the sum of its diagonal elements. The trace is invariant to unitary operations, so it is also equal to the sum of the eigenvalues of a matrix, i.e.,

\begin{equation} \label{eq:7} Tr (A^{*}A) = \sum_{k=0}^{n-1}(A^{*}A)_{k,k} = \sum_{k=0}^{n-1} \lambda_{k} \end{equation}

Applying this to \(A^{*}A\) and using the Schur decomposition \(A = URU^{*}\), we have

\begin{eqnarray*}
Tr(A^{*}A) &=& Tr(R^{*}R) \\
&=& \sum_{k=0}^{n-1}\sum_{j=0}^{n-1} |r_{j,k}|^{2} \\
&=& \sum_{k=0}^{n-1} |\alpha_{k}|^{2} + \sum_{k\neq j} |r_{j,k}|^{2} \\
&\geq& \sum_{k=0}^{n-1} |\alpha_{k}|^{2}
\end{eqnarray*}

with equality iff the off-diagonal terms \(r_{j,k}\) all vanish, i.e., iff \(R\) is diagonal, which holds iff \(A\) is normal.
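The lemma is easy to verify on small examples (numpy sketch, illustrative values): a triangular matrix with a nonzero off-diagonal entry is non-normal and gives a strict inequality, while a normal matrix gives equality.

```python
import numpy as np

# Non-normal example: upper triangular with a nonzero off-diagonal term.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
alpha = np.linalg.eigvals(A)                 # eigenvalues of A: 1 and 3
lam = np.linalg.eigvalsh(A.conj().T @ A)     # eigenvalues of A*A
lhs = lam.sum()                              # = Tr(A*A) = 1 + 4 + 9 = 14
rhs = (np.abs(alpha) ** 2).sum()             # = 1 + 9 = 10, strictly smaller

# Normal example: a rotation matrix satisfies N*N = NN*.
N = np.array([[0.0, -1.0],
              [1.0, 0.0]])
lhs_n = np.linalg.eigvalsh(N.T @ N).sum()
rhs_n = (np.abs(np.linalg.eigvals(N)) ** 2).sum()   # equality holds
```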

### 2.2 Matrix Norms

To study the asymptotic equivalence of matrices we require a metric, or equivalently a norm, of the appropriate kind. Here we use two norms: the operator norm and the Hilbert-Schmidt norm. We call the former the strong norm and the latter the weak norm.

Let \(A\) be a matrix with eigenvalues \(\alpha_{k}\) and let \(\lambda_{k}\) be the eigenvalues of the Hermitian matrix \(A^{*}A\). The strong norm \(\|A\|\) is defined by:

\begin{equation}
\label{eq:8}
\|A\| = \max_{x} R_{A^{*}A}(x)^{1/2} = \max_{x:x^{*}x=1} \left[x^{*}A^{*}Ax\right]^{1/2}
\end{equation}

According to the corollary mentioned before, we have:

\begin{equation} \label{eq:9} \|A\|^{2} = \max_{k}\lambda_{k} = \lambda_{M} \end{equation}

The strong norm of \(A\) can be bounded below by letting \(e_{M}\) be a unit eigenvector of \(A\) corresponding to \(\alpha_{M}\), the eigenvalue of \(A\) having the largest absolute value:

\begin{equation} \label{eq:10} \|A\|^{2} = \max_{x:x^{*}x = 1} x^{*}A^{*}Ax \geq (e_{M}^{*}A^{*}) (Ae_{M}) = |\alpha_{M}|^{2} \end{equation}

If \(A\) is itself Hermitian, then its eigenvalues \(\alpha_{k}\) are real and the eigenvalues \(\lambda_{k}\) of \(A^{*}A\) are simply \(\lambda_{k} = \alpha_{k}^{2}\). This follows since if \(e^{(k)}\) is an eigenvector of \(A\) with eigenvalue \(\alpha_{k}\), then \(A^{*}Ae^{(k)} = \alpha_{k}A^{*}e^{(k)} = \alpha_{k}^{2}e^{(k)}\). Thus, in particular, if \(A\) is Hermitian then

\begin{equation} \label{eq:11} \|A\| = \max_{k} |\alpha_{k}| = |\alpha_{M}| \end{equation}
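Numerically, \(\|A\| = \sqrt{\lambda_{M}}\) is just the largest singular value of \(A\), which is what numpy's matrix 2-norm computes; the sketch below (illustrative) checks this identity and the lower bound \(|\alpha_{M}| \leq \|A\|\).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# ||A||^2 = lambda_M, the largest eigenvalue of A*A.
lam = np.linalg.eigvalsh(A.T @ A)      # real, nonnegative eigenvalues
strong = np.sqrt(lam.max())

# Lower bound: the spectral radius |alpha_M| never exceeds ||A||.
alpha = np.linalg.eigvals(A)
spectral_radius = np.abs(alpha).max()
```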

The Hilbert-Schmidt norm of an \(n\times n\) matrix \(A = [a_{k,j}]\) is defined by:

\begin{eqnarray*}
|A| &=& \bigg(n^{-1}\sum_{k=0}^{n-1} \sum_{j=0}^{n-1} |a_{k,j}|^{2}\bigg)^{\tfrac{1}{2}} \\
&=& \bigg(n^{-1} Tr[A^{*}A] \bigg)^{\tfrac{1}{2}} \\
&=& \bigg( n^{-1} \sum_{k=0}^{n-1} \lambda_{k} \bigg)^{\tfrac{1}{2}}
\end{eqnarray*}
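The three expressions above are interchangeable, and the weak norm is simply the Frobenius norm scaled by \(1/\sqrt{n}\). A small numpy check (illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
n = A.shape[0]

# The three equivalent forms of the weak (Hilbert-Schmidt) norm:
hs_entries = np.sqrt((np.abs(A) ** 2).sum() / n)            # entrywise sum
hs_trace = np.sqrt(np.trace(A.T @ A) / n)                   # via the trace
hs_eigs = np.sqrt(np.linalg.eigvalsh(A.T @ A).sum() / n)    # via lambda_k
```

All three coincide with `np.linalg.norm(A, 'fro') / np.sqrt(n)`, which makes the \(n^{-1}\) normalization explicit: the weak norm averages energy over the matrix rather than summing it.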