Classical control theory, expressed in the frequency domain, yields a stable system that satisfies a set of more or less arbitrary requirements. Optimal control recognizes the random behavior of the system and attempts to optimize the response or stability on average rather than with assured precision. Optimal control theory provides a comprehensive, consistent, and flexible design approach. Classical response criteria, such as the step response, are helpful in determining what values to use in the quadratic cost function weighting matrices; these weighting factors have a powerful and direct effect on achieving the desired response (Lewis and Syrmos 1995; Lee and Wu 1995).
Optimal controller design using full-state feedback control strategy
To design an optimal regulator, modern control theory requires the development of a dynamic system model in state-variable form. The regulator design of a higher-order nonlinear system model results in complex computations. Hence, the system equations are linearized about an operating point, and the linear state regulator theory is then applied to obtain the desired control law. A linear time-invariant power system is represented in state space by the following differential equations:
$$\dot{x}\left( t \right) = Ax\left( t \right) + Bu\left( t \right) + T{\text{d}}\left( t \right)$$
(13)
$$y\left( t \right) = Cx\left( t \right)$$
(14)
The control law is given by

$$u\left( t \right) = - Kx\left( t \right)$$

(15)

for full-state vector feedback, and by

$$u\left( t \right) = - Gy\left( t \right)$$

(16)

for the output feedback problem, so as to minimize the performance index
$$J = \frac{1}{2}\int_{0}^{\infty } {( x^{\text{T}} Q x + u^{\text{T}} R u) \,{\text{d}}t} .$$
(17)
Subject to the system dynamic constraints of Eqs. (13) and (14), the augmented cost function for the performance index J is given by

$$J = \int_{0}^{\infty } {\left[ {\frac{1}{2}\left( {x^{\text{T}} Qx + u^{\text{T}} Ru} \right) + \lambda^{\text{T}} \left( {Ax + Bu - \dot{x}} \right)} \right]\,{\text{d}}t} .$$
(18)
Defining the Hamiltonian as
$$H = \frac{1}{2}\left( {x^{\text{T}} Qx + u^{\text{T}} Ru} \right) + \lambda^{\text{T}} \left( {Ax + Bu} \right).$$
(19)
Using the linear state regulator approach, let \(u^{*}\) be an admissible control that drives the system from an initial state \(x_{0}\) along an optimal trajectory \(x^{*}\). For \(u^{*}\) to be optimal, the variables must satisfy the following relations:
$$\dot{x}^{*} = \frac{\partial H}{\partial \lambda }\left( {x^{*} ,\,\lambda^{*} ,\,u^{*} } \right)$$
(20)
$$\dot{\lambda }^{*} = - \frac{\partial H}{\partial x}\left( {x^{*} ,\,\lambda^{*} ,\,u^{*} } \right).$$
(21)
And the function \(H\left( {x^{*} ,\,\lambda^{*} ,\,u^{*} } \right)\) must be a minimum with respect to \(u\). Hence we get
$$\frac{\partial H}{\partial u} = 0 = Ru + B^{\text{T}} \lambda$$
(22)
$$u^{*} = - R^{ - 1} B^{\text{T}} \lambda^{*}$$
(23)
Using the Hamiltonian of Eq. (19) in Eqs. (20) and (21), together with the optimal control of Eq. (23), the differential equations that \(x^{*}\) and \(\lambda^{*}\) must satisfy are
$$\dot{\lambda }^{*} = - A^{\text{T}} \lambda^{*} - Qx^{*}$$
(24)
$$\dot{x}^{*} = Ax^{*} - BR^{ - 1} B^{\text{T}} \lambda^{*} .$$
(25)
Assuming
$$\lambda = Px$$
(26)
Substituting for the costate in terms of x in Eqs. (24) and (25), we get
$$\dot{P}x^{*} + P\dot{x}^{*} = - A^{\text{T}} Px^{*} - Qx^{*}$$
(27)
$$\dot{x}^{*} = Ax^{*} - BR^{ - 1} B^{\text{T}} Px^{*}$$
(28)
Substituting \(\dot{x}^{*}\) from Eq. (28) into Eq. (27) and eliminating the vector \(x^{*}\) gives the n × n matrix differential equation
$$\dot{P} + PA - PBR^{ - 1} B^{\text{T}} P + A^{\text{T}} P + Q = 0.$$
(29)
This matrix differential equation is called the Riccati equation (Ibraheem and Kumar 2004). For the full-state feedback control strategy, at steady state (\(\dot{P} = 0\)) and with the output weighting taken as the identity matrix (so that \(Q = C^{\text{T}} C\)), Eq. (29) can be rewritten in the commonly used algebraic form as
$$PA + A^{\text{T}} P - PBR^{ - 1} B^{\text{T}} P + C^{\text{T}} C = 0.$$
(30)
The solution of the matrix Riccati equation gives the matrix P, from which the controller gain is obtained as
$$K = R^{ - 1} B^{\text{T}} P.$$
(31)
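As a numerical illustration of Eqs. (30) and (31), the sketch below solves the algebraic Riccati equation for a hypothetical second-order plant and recovers the feedback gain. The matrices are illustrative assumptions, not values from this work, and SciPy's `solve_continuous_are` stands in for a hand-coded Riccati solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical second-order plant (illustrative values only)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.eye(2)
Q = C.T @ C            # state weighting, Q = C^T C as in Eq. (30)
R = np.array([[1.0]])  # control weighting

# Solve PA + A^T P - P B R^-1 B^T P + Q = 0 for P
P = solve_continuous_are(A, B, Q, R)

# Feedback gain, Eq. (31): K = R^-1 B^T P
K = np.linalg.solve(R, B.T @ P)

# The Riccati residual should vanish to numerical precision,
# and the closed-loop matrix A - BK should be stable
residual = P @ A + A.T @ P - P @ B @ K + Q
print(np.allclose(residual, np.zeros((2, 2))))
```

Because Q and R are positive (semi)definite and the pair (A, B) is controllable, the solution P is symmetric positive definite and the resulting closed loop is guaranteed stable.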
Thus, the closed-loop system is defined as

$$\dot{x}\left( t \right) = A_{\text{C}} x\left( t \right)$$

(32)

where

$$A_{\text{C}} = A - BK.$$

(33)
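A useful check on the Riccati solution is that the optimal value of the performance index from an initial state \(x_{0}\) equals \(\tfrac{1}{2}x_{0}^{\text{T}} Px_{0}\) (for the quadratic cost with the \(\tfrac{1}{2}\) factor used in the Hamiltonian of Eq. (19)). The sketch below verifies this numerically for a hypothetical plant; all matrices and the initial state are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Hypothetical plant and weights (illustrative values only)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
x0 = np.array([1.0, 0.0])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
A_cl = A - B @ K                      # closed-loop system matrix

# Integrate the running cost 1/2 (x^T Q x + u^T R u) along the closed loop
def rhs(t, z):
    x = z[:2]
    u = -K @ x
    return np.concatenate([A_cl @ x, [0.5 * (x @ Q @ x + u @ R @ u)]])

sol = solve_ivp(rhs, [0.0, 40.0], np.concatenate([x0, [0.0]]),
                rtol=1e-10, atol=1e-12)
J_numeric = sol.y[2, -1]               # accumulated cost at final time
J_riccati = 0.5 * x0 @ P @ x0          # optimal cost predicted by P
print(np.isclose(J_numeric, J_riccati, rtol=1e-5))
```

The agreement of the simulated cost with \(\tfrac{1}{2}x_{0}^{\text{T}} Px_{0}\) confirms that P encodes the optimal cost-to-go of the regulator.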
Suboptimal controller design using strip eigenvalue assignment method
The response of a linear system is governed by the locations of its eigenvalues. Therefore, for a system to achieve a good response in both the transient and steady states, it is necessary to locate all eigenvalues at desired positions. Owing to modeling approximations, it is difficult to attain the exact locations of all eigenvalues. Hence, it is sufficient to place all eigenvalues within a suitable region of the complex s-plane using the strip eigenvalue assignment method.
Linear quadratic control is used to optimize the closed-loop system such that its eigenvalues lie within a vertical strip in the complex s-plane (Sheih et al. 1986). The output feedback controller is preferred over the state feedback controller, since it is not always possible to measure all the states of the system. The output feedback control law is stated as
$$u\left( t \right) = - Gy\left( t \right) = - GCx\left( t \right)$$
(34)
In conventional optimal analysis, the matrices Q and R are generally chosen as diagonal matrices. The system performance can be improved by shifting the eigenvalues Λ(A − BG) of the closed-loop system to a desired region. For this, the weighting matrix R is set as an identity matrix, giving equal weight to all inputs, and the matrix Q must be specified. For the system to be relatively stable, \(h_{1} \ge 0\). Then, the closed-loop system matrix
$$A_{\text{C}} = A - BR^{ - 1} B^{\text{T}} \tilde{P}$$
(35)
has all its eigenvalues lying to the left of the \(- h_{1}\) vertical line, as shown in Fig. 2, where the matrix \(\tilde{P}\) is the solution of the following Riccati equation:
$$\left( {A + h_{1} I_{\text{n}} } \right)^{\text{T}} \tilde{P} + \tilde{P}\left( {A + h_{1} I_{\text{n}} } \right) - \tilde{P}BR^{ - 1} B^{\text{T}} \tilde{P} + Q = 0_{\text{n}}$$
(36)
The unstable eigenvalues of the shifted system matrix \(\hat{A} = A + h_{1} I_{\text{n}}\) are shifted to their mirror-image positions with respect to the \(- h_{1}\) vertical line (Sheih et al. 1986; Furuya and Irisawa 1999; Lee and Wu 1995). Assume two positive real values \(h_{1}\) and \(h_{2}\) that define an open vertical strip \(\left( { - h_{1} ,\, - h_{2} } \right)\) on the negative real axis, as shown in Fig. 3. The control law is changed to be
$$u\left( t \right) = - Gy\left( t \right) = - GCx\left( t \right) = - \mu \tilde{F}x\left( t \right)$$
(37)
$$G = \mu \tilde{F}C^{ + }$$
(38)
where \(C^{ + }\) is the pseudo-inverse of C.
$$\begin{aligned} \mu &= \frac{1}{2} + \frac{{\left( {h_{2} - h_{1} } \right)}}{{2\;{\text{tr}} ( {\hat{A}^{ + } } )}} \nonumber \\&= \frac{1}{2} + \frac{{\left( {h_{2} - h_{1} } \right)}}{{{\text{tr}} ( {B\tilde{F}} )}}. \end{aligned}$$
(39)
And
$$\tilde{F} = R^{ - 1} B^{\text{T}} \tilde{P}.$$
(40)
Thus, the resulting optimal closed-loop system becomes
$$\dot{x}\left( t \right) = \left( {A - BGC} \right)\;x\left( t \right).$$
(41)
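A minimal numerical sketch of the shifted-Riccati step, Eqs. (36) and (40), is given below for an illustrative unstable plant (the matrices and strip boundaries are assumptions, not values from this work). Solving the Riccati equation for \(\hat{A} = A + h_{1}I_{\text{n}}\) guarantees that all eigenvalues of \(A - B\tilde{F}\) lie to the left of the \(-h_{1}\) line; the ratio μ of Eq. (39) then scales the gain to draw them toward the strip.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative unstable plant (open-loop eigenvalues at +2 and -2)
A = np.array([[0.0, 1.0],
              [4.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)
h1, h2 = 1.0, 4.0                  # assumed strip boundaries (-h1, -h2)

A_hat = A + h1 * np.eye(2)                    # shifted system matrix
P_t = solve_continuous_are(A_hat, B, Q, R)    # shifted Riccati, Eq. (36)
F_t = np.linalg.solve(R, B.T @ P_t)           # Eq. (40): F~ = R^-1 B^T P~
mu = 0.5 + (h2 - h1) / np.trace(B @ F_t)      # shift ratio, Eq. (39)

# Guaranteed by the shifted design: eigenvalues of A - B F~ lie left of -h1
eigs = np.linalg.eigvals(A - B @ F_t)
print(eigs.real.max() < -h1)
```

The assertion checks only the prescribed-degree-of-stability property that the shifted design guarantees; where the scaled eigenvalues of \(A - \mu B\tilde{F}\) fall inside the strip depends on the plant and on \(h_{1}\), \(h_{2}\).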
Optimal controller design using pole placement technique
Most conventional design approaches specify only the dominant closed-loop poles, whereas the pole placement approach specifies all closed-loop poles. The pole placement technique places the poles at any desired locations by means of an appropriate state feedback gain matrix, provided the system is completely state controllable. The MATLAB software is used to place the poles at the desired locations.
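The MATLAB routine referred to here is typically `place`; an equivalent sketch using SciPy's `place_poles` is shown below, with an illustrative two-state plant (the matrices and desired pole locations are assumptions, not values from this work).

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative controllable plant (not from the paper)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

desired = np.array([-4.0, -5.0])   # desired closed-loop pole locations
res = place_poles(A, B, desired)
K = res.gain_matrix                # state feedback gain for u = -Kx

# Closed-loop eigenvalues coincide with the requested poles
eigs = np.sort(np.linalg.eigvals(A - B @ K).real)
print(np.allclose(eigs, [-5.0, -4.0]))
```

For a single-input system the gain achieving a given pole set is unique; for multi-input systems `place_poles` exploits the extra freedom to improve the conditioning of the closed-loop eigenvector matrix.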