December 16, 2024

Correct Explanation of State Observers for State Estimation of State-Space Models with Python Simulations


In this control engineering and control theory tutorial, we provide a correct and detailed explanation of state observers that are used for state estimation of linear dynamical systems in the state-space form. We also explain how to implement and simulate observers in Python. The main motivation for creating this tutorial comes from the fact that in a number of control engineering books and online tutorials, observers are presented from a purely mathematical perspective, without enough practical insight and intuitive explanation of observers and state estimators. In contrast, in this control engineering tutorial, we provide an intuitive explanation of observers as well as a concise mathematical explanation. The observer developed in this tutorial belongs to the class of Luenberger observers. We use the pole placement method to design the observer gain matrix. We also explain the concept of duality between the observer design and feedback control problems. Finally, we design, implement, and simulate the observer in Python by using the Control Systems Library and the function place(). The complete Python code files developed in this tutorial are given here.

The YouTube tutorial accompanying this lecture is given below.

Motivational Example

As a motivational example, we consider a system shown in the figure below.

Figure 1: System of two objects connected with springs and dampers.

This system consists of two objects with masses m_{1} and m_{2} that are connected by springs and dampers. We assume that spring constants k_{1} and k_{2} are given, as well as damping constants d_{1} and d_{2}. The force F is acting on the second object and since the second object is connected to the first object, it moves both objects. The position of the first object with respect to its equilibrium point is denoted by s_{1}, and the position of the second object with respect to its equilibrium point is denoted by s_{2}. We assume that the position s_{1} is observed by a distance sensor. Let \dot{s}_{1} and \dot{s}_{2} denote the velocities of the first and second objects, respectively. By introducing the state-space variables

(1)   \begin{align*}x_{1}=s_{1},\;\; x_{2}=\dot{s}_{1},\;\; x_{3}=s_{2},\;\;x_{4}=\dot{s}_{2}\end{align*}

the complete state vector of the system becomes

(2)   \begin{align*}\mathbf{x}=\begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4}  \end{bmatrix} = \begin{bmatrix} s_{1} \\ \dot{s}_{1} \\ s_{2} \\ \dot{s}_{2}  \end{bmatrix}\end{align*}

We want to provide the answer to the following question and solve the state estimation problem formulated below:

  1. Can we uniquely estimate the complete state vector by observing the time sequence of the position s_{1} (state variable x_{1})?
  2. If yes, design a state estimator (observer) that will continuously reconstruct or estimate the complete state vector \mathbf{x}(t) given by (2), from the observed time sequence of the position s_{1}.

To solve both of these problems, we need to know the model of the system. In our previous tutorial given here, we derived a state-space model of the system and explained how to simulate it in Python. The state-space model has the following form

(3)   \begin{align*}\dot{\mathbf{x}} & =A\mathbf{x}+B\mathbf{u} \\\mathbf{y}& = C\mathbf{x}\end{align*}

where

(4)   \begin{align*}& \mathbf{x}=\begin{bmatrix}x_{1} \\ x_{2} \\ x_{3} \\ x_{4}\end{bmatrix},\; \mathbf{u}=\begin{bmatrix}F \end{bmatrix},\;\; A=\begin{bmatrix} 0 & 1 & 0 & 0  \\ -\frac{k_{1}+k_{2}}{m_{1}} & -\frac{d_{1}+d_{2}}{m_{1}} & \frac{k_{2}}{m_{1}} & \frac{d_{2}}{m_{1}} \\ 0 & 0 & 0 & 1 \\ \frac{k_{2}}{m_{2}} & \frac{d_{2}}{m_{2}} & -\frac{k_{2}}{m_{2}} & -\frac{d_{2}}{m_{2}}  \end{bmatrix},\;\; \\ & B=\begin{bmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{m_{2}} \end{bmatrix},\;\; C=\begin{bmatrix}1 & 0 & 0 & 0  \end{bmatrix}\end{align*}
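
As a quick illustration, the sketch below constructs the matrices A, B, and C from (4) in Python with NumPy. The numerical values of the masses, spring constants, and damping constants are placeholder assumptions chosen only for demonstration; they are not the values used in the accompanying code files.

```python
import numpy as np

# Placeholder physical parameters (illustrative assumptions only)
m1, m2 = 2.0, 2.0      # masses
k1, k2 = 100.0, 200.0  # spring constants
d1, d2 = 1.0, 5.0      # damping constants

# State-space matrices from equation (4)
A = np.array([
    [0.0,            1.0,            0.0,      0.0],
    [-(k1 + k2)/m1, -(d1 + d2)/m1,   k2/m1,    d2/m1],
    [0.0,            0.0,            0.0,      1.0],
    [k2/m2,          d2/m2,         -k2/m2,   -d2/m2],
])
B = np.array([[0.0], [0.0], [0.0], [1.0/m2]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])
```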

The answer to question 1 is related to the problem of observability of the state-space model (3). In our tutorial given here, we thoroughly explained the observability problem and the concept of observability of dynamical systems. Briefly speaking, if the system with its state-space model (3) is observable, then we can uniquely estimate the state vector \mathbf{x} from the sequence or time series of the output measurements.

Here, for presentation clarity, it is important to state the observability condition for linear systems in the most general form

(5)   \begin{align*}\dot{\mathbf{x}} & =A\mathbf{x}+B\mathbf{u} \\\mathbf{y} & =C\mathbf{x}\end{align*}

where A\in \mathbb{R}^{n\times n}, B\in \mathbb{R}^{n\times m}, and C\in \mathbb{R}^{r\times n} are the system matrices, \mathbf{x}\in \mathbb{R}^{n} is the n-dimensional state vector, \mathbf{u}\in \mathbb{R}^{m} is the m-dimensional input vector, and \mathbf{y}\in \mathbb{R}^{r} is the r-dimensional output vector. The observability matrix has the following form

(6)   \begin{align*}O_{n-1}=\begin{bmatrix}C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix}\end{align*}

where it should be kept in mind that n is the state order of the system. Here, it is important to emphasize that in our previous tutorial given here, we derived the observability condition for discrete-time systems. However, in this tutorial, we consider continuous-time systems. It turns out that the observability rank condition is the same for both discrete-time and continuous-time systems. That is, it boils down to the problem of testing the rank of the observability matrix given by (6). The system (5) is observable if and only if the rank of the observability matrix is equal to the state order n.

Let us now go back to the double mass-spring-damper system from the beginning of this tutorial. Since n=4 (we have 4 state variables), to test observability, we construct the observability matrix

(7)   \begin{align*}O_{3}=\begin{bmatrix}C \\ CA \\ CA^{2} \\ CA^{3} \end{bmatrix}\end{align*}

and we check the numerical rank of this matrix. The system is observable if and only if the rank of the observability matrix is equal to the state order. In our case, the state order is n=4, and consequently, the rank of the observability matrix (7) should be equal to 4. One approach for checking the numerical rank of this matrix is to compute the Singular Value Decomposition (SVD) of the matrix and inspect the magnitude of its smallest singular value. If the smallest singular value is not close to zero, then we can conclude that the system is numerically observable. In MATLAB or Python, we can also use the rank function to compute the matrix rank. However, the rank function will only provide us with a YES or NO answer about observability, without giving us more insight into the degree of observability. That is, how far is our matrix from losing full rank, and consequently, how far is our system from being unobservable? For example, if the smallest singular value of the observability matrix is 10^{-12} or smaller, then we can conclude that, from the practical point of view, the system is poorly observable or practically unobservable. We will explain how to test the observability condition in Python later on in this tutorial. For the time being, it is important to know that the system is indeed observable by only measuring the position s_{1}.
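
A minimal sketch of this observability test is given below. It uses the function obsv() from the Python Control Systems Library to build the observability matrix, and NumPy to compute its rank and singular values. The parameter values are the same placeholder assumptions as in the earlier sketch; the complete test is part of the accompanying code files.

```python
import numpy as np
import control as ct

# Placeholder parameters and the matrices A and C from equation (4)
m1, m2, k1, k2, d1, d2 = 2.0, 2.0, 100.0, 200.0, 1.0, 5.0
A = np.array([[0, 1, 0, 0],
              [-(k1 + k2)/m1, -(d1 + d2)/m1, k2/m1, d2/m1],
              [0, 0, 0, 1],
              [k2/m2, d2/m2, -k2/m2, -d2/m2]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])

# Observability matrix O_3 = [C; CA; CA^2; CA^3] from equation (7)
O = ct.obsv(A, C)

# Rank test: the system is observable if and only if rank(O) = n = 4
print("rank of the observability matrix:", np.linalg.matrix_rank(O))

# Degree of observability: inspect the smallest singular value of O
singular_values = np.linalg.svd(O, compute_uv=False)
print("smallest singular value:", singular_values.min())
```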

Now, we need to address the second problem, that is, we need to design an estimator that will continuously estimate the state vector from the sequence of measurements of s_{1}. This is the main topic of the remainder of this tutorial.

Observers for State Estimation of Dynamical Systems

In this section, we address the second problem stated in the previous section:

Design a state estimator (observer) that will continuously estimate the complete state vector \mathbf{x}(t) from the time sequence of the output measurements \mathbf{y}(t).

Here, for clarity, we repeat the state-space model

(8)   \begin{align*}\dot{\mathbf{x}} & =A\mathbf{x}+B\mathbf{u} \\\mathbf{y} & =C\mathbf{x}\end{align*}

With the state-space model (8), we associate a state observer that has the following form

(9)   \begin{align*}\dot{\hat{\mathbf{x}}} & =A\hat{\mathbf{x}}+B\mathbf{u}+L(\mathbf{y}-C\hat{\mathbf{x}})\end{align*}

It is very important to emphasize that the observer (9) is actually an algorithm that can be implemented in control hardware. Here, this algorithm is represented as a continuous-time state equation (a differential equation). In practice, it is often discretized and implemented as a discrete-time filter in control hardware.

Here is the explanation of the observer vectors and matrices:

  • \hat{\mathbf{x}} is the state of the observer. The goal of the observer is to approximate and track the state of the system \mathbf{x} over time. That is, \hat{\mathbf{x}}(t) should accurately approximate and track the state of the system \mathbf{x}(t), starting from some initial guess of the state, denoted by \hat{\mathbf{x}}(0)=\hat{\mathbf{x}}_{0}. Here, it should also be kept in mind that the “hat” notation above the state of the observer is deliberately introduced in order to distinguish the state of the system from the state of the observer since these are two different quantities.
  • The matrix L\in \mathbb{R}^{n\times r} (r is the output dimension and n is the state dimension) is the observer gain matrix that should be designed by the user (more about this in the sequel).
  • The term C\hat{\mathbf{x}} is the prediction of the system’s output made by the observer. This is because the state of the observer is propagated through the output matrix.
  • The term L(\mathbf{y}-C\hat{\mathbf{x}}) is the observer correction term. This term is called the correction term because of the following. If the observer state \hat{\mathbf{x}} is equal to the system state \mathbf{x}, then the correction term is zero. This is because the observer uses the output matrix C of the system, and consequently, the output prediction made by the observer is equal to the output of the system \mathbf{y}. Since the correction term is zero, the state of the observer is propagated through the observer dynamics (9) in the same manner as the true state of the system (8). In this case, the observer will perfectly track the state of the system. However, if the term \mathbf{y}-C\hat{\mathbf{x}} is different from zero, the observer correction term will be different from zero, and the observer state dynamics (9) will be influenced by the correction term, which acts as an additional input. The goal of this input is to push the observer state \hat{\mathbf{x}} closer to the actual state of the system \mathbf{x} as time progresses.
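
For readers who prefer to see the equation in code, the sketch below shows one possible way of coding the right-hand side of the observer differential equation (9). The function name and argument names are illustrative and are not taken from the accompanying code files.

```python
import numpy as np

def observer_rhs(x_hat, u, y, A, B, C, L):
    """Right-hand side of the observer differential equation (9).

    x_hat : current observer state estimate, shape (n,)
    u     : current control input, shape (m,)
    y     : current measured system output, shape (r,)
    """
    y_hat = C @ x_hat                      # output predicted by the observer
    correction = L @ (y - y_hat)           # observer correction term L(y - C x_hat)
    return A @ x_hat + B @ u + correction  # time derivative of x_hat
```

In a full simulation, a function of this form could be passed to a numerical ODE solver together with the system dynamics; later in this tutorial, we instead simulate the observer directly as a state-space model.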

The observer diagram is shown below.

Figure 2: Block diagram of the observer.

The observer inputs are the system control input \mathbf{u}, and the output of the system \mathbf{y}. The output of the observer is the state estimate \hat{\mathbf{x}}.

Here, again, we will repeat the main goal of the observer.

The goal of the observer is to approximate and track the state of the system \mathbf{x} over time. That is, \hat{\mathbf{x}}(t) should accurately approximate and track the state of the system \mathbf{x}(t), starting from some initial guess of the state, denoted by \hat{\mathbf{x}}(0)=\hat{\mathbf{x}}_{0}.

So how do we achieve this goal? We achieve it by properly designing the observer gain matrix L. But what does it mean to properly design this matrix? We will answer these questions in the sequel.

Let us introduce the observer estimation error \mathbf{e}(t):

(10)   \begin{align*}\mathbf{e}(t)=\mathbf{x}(t)-\hat{\mathbf{x}}(t)\end{align*}

The main goal of the observer is achieved by designing the matrix L such that asymptotically, the observer estimation error (10) approaches zero. Let us explain how to do that. Let us differentiate the equation (10)

(11)   \begin{align*}\dot{\mathbf{e}}=\dot{\mathbf{x}}-\dot{\hat{\mathbf{x}}}\end{align*}

By substituting \dot{\mathbf{x}} from (8) and \dot{\hat{\mathbf{x}}} from (9) in (11), we obtain

(12)   \begin{align*}\dot{\mathbf{e}}& =A\mathbf{x}+B\mathbf{u} - \big( A\hat{\mathbf{x}}+B\mathbf{u}+L(\mathbf{y}-C\hat{\mathbf{x}}) \big) \\\dot{\mathbf{e}}& =A\big(\mathbf{x}-\hat{\mathbf{x}}\big) -L(\mathbf{y}-C\hat{\mathbf{x}}) \\\dot{\mathbf{e}}& =A\big(\mathbf{x}-\hat{\mathbf{x}}\big) -L(C\mathbf{x}-C\hat{\mathbf{x}})  \\\dot{\mathbf{e}}& =A\big(\mathbf{x}-\hat{\mathbf{x}}\big) -LC(\mathbf{x}-\hat{\mathbf{x}}) \\ \dot{\mathbf{e}}& =\big(A-LC\big) \big(\mathbf{x}-\hat{\mathbf{x}}\big)  \\\dot{\mathbf{e}}& =\big(A-LC\big)\mathbf{e}\end{align*}

The last equation is obtained by using (10) and the fact that \mathbf{y}=C\mathbf{x}. This equation is very important, so let us repeat it for clarity

(13)   \begin{align*}\dot{\mathbf{e}}& =\big(A-LC\big)\mathbf{e}\end{align*}

Now, the observer design problem is to design the matrix L such that the system (13) is asymptotically stable. This is essentially the pole placement problem that was analyzed and solved when we considered feedback control problems. That is, to design the observer gain matrix L, we will use the same pole placement method that is used for control design. There is a complete mathematical equivalence, or in control engineering terminology, a “duality”, between the observer design problem and the feedback gain design problem based on pole placement.

The goal is to design the matrix L such that the eigenvalues (poles) of the closed-loop observer matrix \big(A-LC\big) have negative real parts and are placed at the desired locations specified by the control designer. We can use Ackermann’s formula to solve this problem. However, since this is an introductory tutorial, we will not explain Ackermann’s formula. Instead, we will solve the pole placement problem in Python by using the function place() implemented in the Python Control Systems Library. You can also use MATLAB’s function place to assign the observer poles.

Before we present a Python script for designing and simulating the observer algorithm, we need to address the following question.

Where to place the observer poles?

There are two schools of thought and two different opinions related to this important question.

  1. The observer poles should be two to five times faster than the feedback control poles in order to ensure that the observer estimation error (10) converges to zero much faster than the closed-loop control dynamics evolve. That is, we want our observer to quickly obtain a good estimate of the true system state, such that this estimate can be used in the state feedback control algorithm. Since in this case the feedback control poles are slower than the observer poles, the closed-loop dynamics is primarily determined by the closed-loop control poles.
  2. The issue with the first approach is that in the presence of considerable measurement noise and process disturbances, the fast dynamics of the observer closed-loop poles can amplify the noise and disturbances. You can see that from the observer equation (9). The output of the system \mathbf{y} influences the observer correction term, and consequently, the measurement noise that is present in \mathbf{y} will affect the observer behavior. In this case, it is a good idea to explore the possibility of placing the observer poles such that they are slower than the feedback control poles. By doing this, the response of the system will be dominated by the observer poles and we will decrease the bandwidth of the system, thus making the system less sensitive to noise. Another approach is to design a low-pass or a notch filter that will filter the output of the system \mathbf{y}, and then use that filtered output in the observer equation (9) instead of \mathbf{y}; a minimal filtering sketch is given after this list.
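
The sketch below illustrates the second idea with a simple low-pass filter applied to sampled output measurements before they would be fed to the observer. The sampling frequency, cutoff frequency, and the noisy signal itself are placeholder assumptions for illustration only; they are not part of the accompanying code files.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Assumed sampling frequency of the measurements and filter cutoff (placeholders)
fs = 100.0         # sampling frequency in Hz
cutoff_hz = 5.0    # low-pass cutoff frequency in Hz

# Second-order Butterworth low-pass filter (cutoff normalized to the Nyquist frequency)
b, a = butter(N=2, Wn=cutoff_hz / (fs / 2))

# Placeholder noisy measurement sequence standing in for the sampled output y
y_measured = 1.0 + 0.01 * np.random.randn(500)

# Filtered output that would be used in the observer equation (9) instead of y
y_filtered = lfilter(b, a, y_measured)
```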

Observer Design by Using Pole Placement and Observer Simulation in Python

Here, we explain how to design the observer gain matrix L by using the pole placement method. We explain the main ideas of how to implement the solution in Python. The complete Python simulation codes are given here (the small fee is used to support this website).

Let \mathbf{p} be a vector whose elements are the desired observer closed-loop poles. That is, this vector contains complex numbers that are the desired poles of the observer closed-loop matrix \big(A-LC\big). For example, in our case, this vector can be

(14)   \begin{align*}\mathbf{p}=\begin{bmatrix} -5+1j \\ -5-1j \\ -2 \\ -3 \end{bmatrix}\end{align*}

When defining the vector of desired pole locations, we need to keep in mind that if we specify a complex pole, then we must also include its complex conjugate, which is symmetric with respect to the real axis.

To perform the pole placement, we use the Python Control Systems Library. In particular, we use the function place(). This function computes the matrix K, such that the closed-loop control matrix

(15)   \begin{align*}A-BK\end{align*}

has poles at the prescribed locations. This function is used like this: “place(A,B,p)”, where p is the list of desired poles (in our case, the vector (14)), and it returns the matrix K.
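
As a minimal illustration of this usage, the sketch below applies place() to a small placeholder system (a double integrator, not the mass-spring-damper system from this tutorial) and verifies the resulting closed-loop poles.

```python
import numpy as np
import control as ct

# Small placeholder example: a double integrator with a scalar input
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
p = [-2.0, -3.0]                       # desired closed-loop pole locations

K = ct.place(A, B, p)                  # state-feedback gain matrix
print(np.linalg.eigvals(A - B @ K))    # approximately [-2, -3]
```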

Now, how can we use this function to assign the poles of the closed-loop observer matrix \big(A-LC\big)? We cannot use this function directly, since the matrix A-LC does not have the form A-BK required by the function. However, we can still use this function with several modifications of the input arguments.

First of all, it is a well-known fact that a square matrix and its transpose have the same set of eigenvalues. Consequently, the matrix

(16)   \begin{align*}A-LC\end{align*}

and its transpose given by

(17)   \begin{align*}\big( A-LC \big)^{T}=A^{T}-C^{T}L^{T}\end{align*}

have identical eigenvalues. This means that we can assign the closed-loop poles of (17), and these closed-loop poles will exactly be the closed-loop poles of (16). It should be observed that the matrix A^{T}-C^{T}L^{T} is structurally identical to the matrix A-BK that is used by the function “place()”. Consequently, by typing “place(A^{T},C^{T},p)” in Python, we will compute the matrix L^{T}. Then, we simply take the transpose of the matrix L^{T} to obtain our matrix L. That is it. For more details, see the YouTube video tutorial and the Python code files.
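
A minimal sketch of this computation is given below, again with the placeholder parameter values used in the earlier sketches; the complete design is in the accompanying code files.

```python
import numpy as np
import control as ct

# Placeholder system matrices (same assumed parameters as in the earlier sketches)
m1, m2, k1, k2, d1, d2 = 2.0, 2.0, 100.0, 200.0, 1.0, 5.0
A = np.array([[0, 1, 0, 0],
              [-(k1 + k2)/m1, -(d1 + d2)/m1, k2/m1, d2/m1],
              [0, 0, 0, 1],
              [k2/m2, d2/m2, -k2/m2, -d2/m2]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])

# Desired observer poles from (14): complex poles come in conjugate pairs
p = [-5 + 1j, -5 - 1j, -2, -3]

# Duality: place the poles of A^T - C^T L^T, then transpose the result
L = ct.place(A.T, C.T, p).T

# Verification: the eigenvalues of A - LC should match the desired poles
print(np.linalg.eigvals(A - L @ C))
```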

Next, we need to explain how to simulate the computed observer. Let us look again at the observer equation

(18)   \begin{align*}\dot{\hat{\mathbf{x}}} & =A\hat{\mathbf{x}}+B\mathbf{u}+L(\mathbf{y}-C\hat{\mathbf{x}}) \end{align*}

We can write this equation like this

(19)   \begin{align*}\dot{\hat{\mathbf{x}}} & =(A-LC)\hat{\mathbf{x}}+\begin{bmatrix} B & L \end{bmatrix} \begin{bmatrix}\mathbf{u} \\  \mathbf{y} \end{bmatrix} \end{align*}

This is a state equation, with

  • the state matrix (A-LC)
  • the input matrix \begin{bmatrix} B & L \end{bmatrix}
  • the input vector

    (20)   \begin{align*}\begin{bmatrix}\mathbf{u} \\  \mathbf{y} \end{bmatrix} \end{align*}

By simulating the original system (3) for the known input \mathbf{u}, we can obtain the time sequences of \mathbf{x} and \mathbf{y}. Then, we can use the input \mathbf{u} and the output \mathbf{y} as inputs in (19) to simulate the state equation of the observer. This can be done in Python by using the Python Control Systems Library function “forced_response()”, which is very similar to MATLAB’s function “lsim()”. For more details on how to simulate the observer in Python, see the developed Python code files that come with this tutorial.
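
A condensed sketch of this simulation workflow is given below. It reuses the placeholder parameter values from the earlier sketches, and the simulation time, input, and initial conditions are arbitrary assumptions; the complete simulation is in the accompanying code files.

```python
import numpy as np
import control as ct

# Placeholder system matrices and observer gain (same assumptions as in the earlier sketches)
m1, m2, k1, k2, d1, d2 = 2.0, 2.0, 100.0, 200.0, 1.0, 5.0
A = np.array([[0, 1, 0, 0],
              [-(k1 + k2)/m1, -(d1 + d2)/m1, k2/m1, d2/m1],
              [0, 0, 0, 1],
              [k2/m2, d2/m2, -k2/m2, -d2/m2]])
B = np.array([[0.0], [0.0], [0.0], [1.0/m2]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])
L = ct.place(A.T, C.T, [-5 + 1j, -5 - 1j, -2, -3]).T

# Simulate the original system (3) for a constant force of 10 N
sys = ct.ss(A, B, C, np.zeros((1, 1)))
t = np.linspace(0, 10, 2001)
u = 10.0 * np.ones_like(t)
x0 = np.array([0.1, 0.0, 0.05, 0.0])            # assumed arbitrary initial condition
y = ct.forced_response(sys, T=t, U=u, X0=x0).outputs

# Observer written as the state equation (19): d/dt x_hat = (A - LC) x_hat + [B L][u; y]
observer = ct.ss(A - L @ C, np.hstack((B, L)), np.eye(4), np.zeros((4, 2)))
x_hat0 = np.zeros(4)                            # arbitrary initial guess of the state
x_hat = ct.forced_response(observer, T=t, U=np.vstack((u, y)), X0=x_hat0).outputs

# x_hat[0, :] is the estimate of x1 and can be plotted against the true x1
```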

The system and observer simulations are given below. The first graph shows the step response of the system to a constant force of 10 [N]. We plot the position of the first object (state variable x_{1}), assuming an arbitrary initial condition for the simulation.

Figure 3: Step response of the double mass-spring-damper system for constant input force equal to 10[N]. This response shows the position of the first object over time.

The figure below shows the convergence of the observer. We assumed an arbitrary initial condition of the observer, that is, an arbitrary initial guess of the state. The time behavior of the observer state variable \hat{x}_{1} and the true system state variable x_{1} (position of the first object) is shown in the figure below.

Figure 4: The time behavior of the observer state variable \hat{x}_{1} and the true system state variable x_{1}

We can observe that the state estimate \hat{x}_{1} asymptotically converges to x_{1}. Initially, we do not have any knowledge about the state, and consequently, the observer starts from a random guess of the state estimate. As time progresses, and more output measurements of the system are processed by the observer, we get better and better state estimates. This means that the observer is properly designed. The figure below shows the state estimation error e_{1}=x_{1}-\hat{x}_{1} of the variable x_{1} as a function of time. We can observe that the state estimation error approaches zero. This also confirms the asymptotic stability of the observer.

Figure 5: The state estimation error e_{1}=x_{1}-\hat{x}_{1} of the variable x_{1} as a function of time.