May 13, 2024

Easily Solve the Lyapunov Equation by Using the Kronecker Product – Detailed Explanation with Python Codes


In this control engineering and control theory tutorial, we explain how to easily solve the Lyapunov equation by using the Kronecker product. We provide a detailed explanation with Python codes.

The YouTube video tutorial accompanying this page is given here. All the codes are posted on our GitHub page given here.

Background on Lyapunov Equation and Kronecker Product

The Lyapunov equation for continuous-time systems has the following form:

(1)   \begin{align*}A^{T}X+XA=-Q\end{align*}

where A and Q are known matrices, and X is the unknown matrix that we want to determine. That is, we want to solve this equation for X. Often, it is assumed that the matrix Q is positive definite, and often the matrix A is the system matrix of a linear dynamical system, such as the one described below

(2)   \begin{align*}\dot{\mathbf{x}}=A\mathbf{x}\end{align*}

where \mathbf{x} is the state vector. The Lyapunov equation and its solution X are extensively used in control theory and control engineering practice. For example, the Lyapunov equation is important for stability analysis of dynamical systems, control system design, and optimal control. It is important to emphasize that the solution of the Riccati matrix equation can be obtained by solving a series of Lyapunov equations. Consequently, the Lyapunov equation is very important for designing optimal controllers and for evaluating their stability. Here is the problem we want to solve:

Problem: Given the matrices A and Q, determine the solution X of the Lyapunov equation (1).

In a number of scenarios, we are not only searching for any solution of the Lyapunov equation. Often, we are searching for a symmetric positive definite matrix X. However, since this is an introductory tutorial, we will not add this additional layer of complexity to the above problem. In fact, the solution found by the method presented in the sequel will actually be a symmetric and positive definite matrix. This is mainly because the particular Lyapunov equation that we will consider has a unique solution, and this unique solution is a positive definite symmetric matrix.

Now, let us focus on the Kronecker product. Let A and B be arbitrary matrices (the matrix A in the following equation is not related to the matrix A in the Lyapunov equation):

(3)   \begin{align*}A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}  \end{bmatrix}, \;\; B=\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22}  \end{bmatrix}\end{align*}

Then, the Kronecker product is defined as the following matrix

(4)   \begin{align*}C=A\otimes B=\begin{bmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B  \end{bmatrix}=\begin{bmatrix} a_{11}b_{11} & a_{11}b_{12} &   a_{12}b_{11} & a_{12}b_{12} \\  a_{11}b_{21} & a_{11}b_{22} &   a_{12}b_{21} & a_{12}b_{22} \\ a_{21}b_{11} & a_{21}b_{12} &   a_{22}b_{11} & a_{22}b_{12} \\a_{21}b_{21} & a_{21}b_{22} &   a_{22}b_{21} & a_{22}b_{22}  \end{bmatrix}\end{align*}

where \otimes is the symbol for the Kronecker product. The definition of the Kronecker product shown above is for 2 by 2 matrices; however, we can easily generalize the definition to matrices of arbitrary dimensions. Next, we need to introduce the vectorization operator. The vectorization operator creates a vector out of a matrix by stacking its columns on top of each other. More precisely, let \text{vec}(\cdot) be the notation for the vectorization operator. Then, we have

(5)   \begin{align*}\text{vec}(A)=\begin{bmatrix} a_{11} \\ a_{21} \\ a_{12} \\ a_{22}  \end{bmatrix},\;\; \text{vec}(B)=\begin{bmatrix} b_{11} \\ b_{21} \\ b_{12} \\ b_{22}  \end{bmatrix}\end{align*}

The vectorization operator and Kronecker product are related to each other. Namely, let us consider the following matrix equation

(6)   \begin{align*}AXB=C\end{align*}

Let us apply the vectorization operator to both left-hand and right-hand sides of this equation:

(7)   \begin{align*}\text{vec}(AXB)=\text{vec}(C)\end{align*}

Now, it can be shown that the following formula is valid

(8)   \begin{align*}\text{vec}(AXB)=\big(B^{T}\otimes A \big)\text{vec}(X)=\text{vec}(C)\end{align*}

This formula is very important: if we want to solve the matrix equation (6) for the matrix X, we can do so by solving the standard system of linear equations given by the equation (8), where \text{vec}(X) is now an unknown vector.
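The identity (8) is easy to verify numerically. The following sketch (the matrix sizes and the random seed are arbitrary choices of ours, not part of the original derivation) checks that \text{vec}(AXB) and \big(B^{T}\otimes A\big)\text{vec}(X) coincide for random matrices:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))

# left-hand side: vec(A X B); flatten('F') stacks the columns
vec_AXB = (A @ X @ B).flatten('F')

# right-hand side: (B^T kron A) vec(X)
rhs = np.kron(B.T, A) @ X.flatten('F')

print(np.allclose(vec_AXB, rhs))  # True
```

The same check works for non-square matrices as long as the product AXB is defined.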

Solve the Lyapunov Equation by Using Kronecker Product

Here, we will apply everything we learned in the previous section to the problem of solving the Lyapunov equation. Let us first apply the vectorization operator to our Lyapunov equation (1)

(9)   \begin{align*}\text{vec}\big( A^{T}X+XA \big)=-\text{vec}\big( Q\big) \end{align*}

Since the vectorization operator is a linear operator, we have

(10)   \begin{align*}\text{vec}\big( A^{T}X\big) +\text{vec}\big( XA \big)=-\text{vec}\big( Q\big) \end{align*}

Next, by using the formula (8), we have

(11)   \begin{align*}&\text{vec}\big( A^{T}X\big) =\text{vec}\big( A^{T}X I\big)= \big( I\otimes A^{T}\big)\text{vec}\big(X\big) \\&\text{vec}\big( XA\big)=\text{vec}\big( IXA\big)=\big( A^{T} \otimes I \big)\text{vec}\big(X\big)\end{align*}

where I is an identity matrix. By substituting these two formulas in (10), we obtain

(12)   \begin{align*}\Big( I\otimes A^{T}+ A^{T} \otimes I  \Big) \text{vec}\big(X\big)= -\text{vec}\big( Q\big) \end{align*}

Let us introduce the matrix S as follows

(13)   \begin{align*}S= I\otimes A^{T}+ A^{T} \otimes I  \end{align*}

Then, the last system can be written as

(14)   \begin{align*}S \text{vec}\big(X\big)= -\text{vec}\big( Q\big) \end{align*}

Under the assumption that the matrix S is invertible, the solution of this system is given by

(15)   \begin{align*}\text{vec}\big(X\big)= -S^{-1}\text{vec}\big( Q\big) \end{align*}

The solution (15) is the solution of our Lyapunov equation. Once we determine this solution, we can de-vectorize \text{vec}\big(X\big) and form the matrix X. Next, we explain the Python codes for solving the Lyapunov equation.
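The derivation (13)–(15) can be collected into a short helper function. This is only a sketch: the function name solve_lyapunov_kron is our own, and we use np.linalg.solve instead of explicitly forming the inverse in (15), which is numerically preferable but mathematically equivalent:

```python
import numpy as np

def solve_lyapunov_kron(A, Q):
    """Solve A^T X + X A = -Q by vectorization (sketch)."""
    n = A.shape[0]
    I = np.eye(n)
    # S = I kron A^T + A^T kron I, from equation (13)
    S = np.kron(I, A.T) + np.kron(A.T, I)
    # solve S vec(X) = -vec(Q), equation (14)
    x = np.linalg.solve(S, -Q.flatten('F'))
    # de-vectorize column-wise to recover X
    return x.reshape(n, n, order='F')

# quick check on a stable diagonal A
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
Q = np.eye(2)
X = solve_lyapunov_kron(A, Q)
print(np.allclose(A.T @ X + X @ A, -Q))  # True
```

For this diagonal example the solution is X = diag(0.5, 0.25), which is symmetric and positive definite, as anticipated.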

Python Codes for Solving the Lyapunov Equation

First, we explain how to compute the Kronecker product in Python, and how to perform the vectorization and de-vectorization. The following code lines are self-explanatory: they show how to compute the Kronecker product, as well as how to vectorize and de-vectorize matrices and vectors.

import numpy as np 
import control as ct

# test of the Kronecker product in Python 

Atest=np.array([[1,2],[3,4]])
Btest=np.array([[2,1],[3,4]])
Cmatrix=np.kron(Atest,Btest)

# test of vectorization
AtestVectorized=Atest.flatten('F').reshape(4,1)

np.matmul(Cmatrix,AtestVectorized)

# test of de-vectorization 

Atest1=AtestVectorized.reshape(2,2,order='F')

Here, we need to mention that we are using the NumPy library for matrix computations, and the Python control systems library in order to construct the matrix A of a stable system. We import the control systems library as ct. You can install the control systems library from a terminal or from an Anaconda terminal by typing “pip install control“.

The following Python code lines are used to construct a stable transfer function, transform this transfer function into a state-space model, and extract the matrix A from this state-space model. This matrix A is used in the Lyapunov equation as the coefficient matrix. To construct the stable transfer function, we use the function “ct.zpk()”. This function constructs a transfer function with specified zeros, poles, and gain (“zpk”- zero, pole, and gain).

# now test the Lyapunov Equation solution

# define a stable transfer function 
tfTest = ct.zpk([1], [-1,-2,-3], gain=1)
print(tfTest)       

# create a state-space model
ssTest=ct.tf2ss(tfTest)
print(ssTest)

# extract the A matrix
A=ssTest.A

The following piece of code will first verify that the matrix A is stable by computing its eigenvalues. Then, the code will construct a positive definite matrix Q that is the right-hand side of the Lyapunov equation. Next, the code will vectorize the Lyapunov equation and compute its vectorized solution (15). This solution is then de-vectorized to obtain the final matrix X. Finally, the code will verify the accuracy of the solution by computing

(16)   \begin{align*}A^{T}X+XA+Q\end{align*}

This complete expression should be equal to zero, or very close to zero, if the solution X is accurately computed. In that way, we can quantify how accurate the solution X is.

# check the eigenvalues of A 
np.linalg.eig(A)

# create the matrix Q that is positive definite
Q=np.array([[10, -0.2, -0.1],[-0.2, 20, -0.2],[-0.1, -0.2, 3]])


# Lyapunov equation A'*X+X*A=-Q

Qvectorized=Q.flatten('F').reshape(9,1)

S=np.kron(np.eye(3,3),A.T)+np.kron(A.T,np.eye(3,3))

Xvectorized=-np.matmul(np.linalg.inv(S),Qvectorized)

X=Xvectorized.reshape(3,3,order='F')

# check the eigenvalues 
np.linalg.eig(X)

# confirm that the solution actually satisfies 
# Lyapunov equation A'*X+X*A=-Q
# LHS=A'*X+X*A
# LHS+Q=0 -> equivalent form

# the Lyapunov equation 
LHS=np.matmul(A.T,X)+np.matmul(X,A)

# this should be zero or close to zero if X is the 
# correct solution 
LHS+Q
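As an independent cross-check (assuming SciPy is installed, which is not required by the code above), the same equation can be solved with scipy.linalg.solve_continuous_lyapunov. That function solves a X + X a^T = q, so passing a = A^T and q = -Q recovers our form A^T X + X A = -Q. The matrices below are a small example of our own, not the ones from the tutorial:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# a small stable example
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
Q = np.array([[2.0, 0.0], [0.0, 1.0]])

# solve_continuous_lyapunov solves a @ X + X @ a.T = q,
# so a = A.T and q = -Q yields A^T X + X A = -Q
X = solve_continuous_lyapunov(A.T, -Q)

# residual should be (numerically) zero
print(np.allclose(A.T @ X + X @ A, -Q))  # True
```

Comparing this X with the one obtained from the Kronecker-product construction is a good sanity check; for large matrices, SciPy's solver is also much faster, since it avoids forming the n^2 by n^2 matrix S.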