STABILIZATION OF OSCILLATORY BILINEAR SYSTEMS
V. V. Alexandrov¹ and J. Torres Jacome²

¹ Faculty of Mechanics and Mathematics, Moscow State University, Moscow, 119899, Russian Federation, e-mail: valex@moids.math.msu.ru
² Benemérita Universidad Autónoma de Puebla, Instituto de Ciencias Exactas, Apartado Postal 1152, Puebla, México, e-mail: tjacome@fcfm.buap.mx
The problem of stabilization of oscillatory bilinear systems with n degrees of freedom is considered.
It is supposed that each degree of freedom corresponds to an oscillation frequency and that the
system is under the action of additive and parametric controls. It is shown that if all the frequencies
are unequal or only two of them coincide, then it is possible to stabilize the system; otherwise, the
system is not stabilizable.
1. The case of two degrees of freedom. Let us consider the controllable bilinear oscillatory system
with two degrees of freedom:
$$
\begin{cases}
\ddot{x}_i + \left(\omega_i^2 + u_1\right) x_i = u_2, \quad i = 1, 2\\[2pt]
u_i(\cdot) \in U_i = \left\{\, u_i(\cdot) \in KC^1 \mid |u_i(t)| \le \nu_i < \infty \,\right\}\\[2pt]
\omega_i^2 - \nu_1 > 0
\end{cases}
\tag{1}
$$
We shall use the following piecewise linear functions as controls:
$$
u_i = \nu_i\, \widetilde{\operatorname{sign}}\, s_i =
\begin{cases}
\dfrac{\nu_i}{\delta}\, s_i & \text{for } |s_i| \le \delta\\[4pt]
\nu_i \operatorname{sign} s_i & \text{for } |s_i| > \delta
\end{cases}
$$
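For numerical experiments this saturated sign nonlinearity is straightforward to code; the following is a minimal Python sketch (the names sat_sign, nu, and delta are ours, introduced only for illustration). In the limit δ → 0 it reduces to the ideal relay ν sign(s) used in the extremal problem below.

```python
import numpy as np

def sat_sign(s, nu, delta):
    """nu * sign~(s): linear with slope nu/delta for |s| <= delta,
    saturated at +-nu for |s| > delta."""
    return np.clip(nu * s / delta, -nu, nu)
```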
Mathematical model (1) describes, for example, the relative oscillations of two disconnected pendulums whose suspension "segment" moves with vertical acceleration $u_1$ and horizontal acceleration $u_2$ ($\omega_i^2 = g_0/l_i$, $\varphi_i = x_i/l_i$ are the deviations from the vertical position, $l_i$ is the length of the $i$th pendulum, and $g_0$ is the gravitational acceleration).
It is required to determine controls u 1 and u 2 such that the trivial solution of system (1) is asymptotically
stable; in other words, it is necessary to stabilize system (1).
First we assume that $\omega_1 \ne \omega_2$. In this case, the additive control $u_2$ is sufficient for stabilization. Assuming $u_1 \equiv 0$, we can therefore rewrite (1) in matrix form:
$$
\ddot{x} + \Omega x = \mathbf{1}\, u_2
\tag{2}
$$
Here $\Omega = \begin{pmatrix} \omega_1^2 & 0 \\ 0 & \omega_2^2 \end{pmatrix}$ and $\mathbf{1}^T = (1,\ 1)$.
System (2) is completely controllable [1]; hence, it can be stabilized. The simplest stabilizing control is built on the switching function
$$
s_2 = -\mathbf{1}^T \dot{x}
$$
Indeed, in this case the Lyapunov function $v_1 = \frac{1}{2}\left(\dot{x}^T \dot{x} + x^T \Omega x\right)$ is infinitely large (radially unbounded), and by virtue of system (2) its derivative has the form
$$
\frac{dv_1}{dt} = -\frac{\nu_2}{\delta}\left(\mathbf{1}^T \dot{x}\right)^2
$$
The Barbashin--Krasovskii theorem is applicable, since $\omega_1 \ne \omega_2$; hence, the trivial solution is asymptotically stable.
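This argument can be checked numerically by integrating system (2) under the saturated control $u_2 = \nu_2\,\widetilde{\operatorname{sign}}(-\mathbf{1}^T \dot{x})$. The sketch below is only an illustration: the frequencies, control bound, saturation width, and initial conditions are our assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

w1, w2 = 1.0, 1.7           # distinct frequencies, omega_1 != omega_2
nu2, delta = 0.5, 0.05      # control bound and saturation width (illustrative)

def rhs(t, state):
    x1, x2, v1, v2 = state
    # u2 = nu_2 * sign~(-(xdot_1 + xdot_2)), saturated sign written via clip
    u2 = np.clip(-nu2 * (v1 + v2) / delta, -nu2, nu2)
    return [v1, v2, -w1**2 * x1 + u2, -w2**2 * x2 + u2]

sol = solve_ivp(rhs, (0, 300), [1.0, -0.5, 0.0, 0.3], rtol=1e-9, atol=1e-9)
print(np.abs(sol.y[:, -1]))  # all four phase coordinates decay toward zero
```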
Now we suppose that $\omega_1 = \omega_2 = \omega_0$. In this case, system (2) is not controllable; moreover, after decomposition the uncontrollable subsystem takes the form $\ddot{\xi}_1 + \omega_0^2 \xi_1 = 0$, i.e., it is not stabilizable. Hence, we cannot stabilize system (2). For the stabilization we should use the multiplicative (parametric) control $u_1$. Since the two-dimensional bisector plane $x_1 = x_2$, $x_3 = x_4$ (where $x_3 = \dot{x}_1$, $x_4 = \dot{x}_2$) can be regarded as a controllable subspace for system (2), it is natural to consider the auxiliary system
$$
\ddot{x}_0 + \left(\omega_0^2 + u_1(\cdot)\right) x_0 = 0
\tag{3}
$$
for the new coordinate $x_0 = x_1 - x_2$ and the extremal problem
$$
x_1(t_1) \to \min_{u_1(\cdot)}
\tag{4}
$$
on minimization of the oscillation amplitude of this system for the limiting case $\delta = 0$. Here $x_1(0) = -1$, $\dot{x}_1(0) = \dot{x}_1(t_1) = 0$, and $\dot{x}_1(t) \ne 0$ $\forall t \in (0, t_1)$.
Using the Pontryagin maximum principle, we can prove that problem (3), (4) has a unique solution with the optimal control
$$
u_1^0 = \nu_1 \operatorname{sign}(x_0 \dot{x}_0)
$$
If $u_1 = \nu_1\, \widetilde{\operatorname{sign}}(x_0 \dot{x}_0)$, then the asymptotic stability of auxiliary system (3) is easily proved, since we can use the Barbashin--Krasovskii theorem for the Lyapunov function $v_2 = \frac{1}{2}\left(x_0^2 + \dot{x}_0^2\right)$, which, by virtue of system (3), has the derivative
$$
\frac{dv_2}{d\tau} = -\frac{\nu_1}{\omega_0 \delta}\, x_0^2\, \dot{x}_0^2 \le 0;
$$
here $\tau = \omega_0 t$.
Thus, we can consider the following stabilization law for system (1):
$$
u_1 = \nu_1\, \widetilde{\operatorname{sign}}\!\left[(x_1 - x_2)(\dot{x}_1 - \dot{x}_2)\right], \qquad
u_2 = -\nu_2\, \widetilde{\operatorname{sign}}(\dot{x}_1 + \dot{x}_2)
$$
On the segments of linear stabilization, closed system (1) takes the form
$$
\begin{aligned}
\ddot{x}_1 + \left[\omega_0^2 + k_1 (x_1 - x_2)(\dot{x}_1 - \dot{x}_2)\right] x_1 &= -k_2 (\dot{x}_1 + \dot{x}_2)\\
\ddot{x}_2 + \left[\omega_0^2 + k_1 (x_1 - x_2)(\dot{x}_1 - \dot{x}_2)\right] x_2 &= -k_2 (\dot{x}_1 + \dot{x}_2)
\end{aligned}
\tag{5}
$$
where $k_i = \nu_i/\delta$, $s_1 = (x_1 - x_2)(\dot{x}_1 - \dot{x}_2)$, and $s_2 = -\dot{x}_1 - \dot{x}_2$. The characteristic equation in variations written for system (5) is as follows:
$$
\lambda^4 + 2 k_2 \lambda^3 + 2 \omega_0^2 \lambda^2 + 2 k_2 \omega_0^2 \lambda + \omega_0^4 = 0
\tag{6}
$$
Equation (6) has the two pure imaginary roots $\lambda_{1,2} = \pm i \omega_0$ and the two roots with negative real parts:
$$
\lambda_{3,4} =
\begin{cases}
-k_2 \pm \sqrt{k_2^2 - \omega_0^2} & \text{if } \omega_0 < k_2\\
-k_2 & \text{if } \omega_0 = k_2\\
-k_2 \pm i \sqrt{\omega_0^2 - k_2^2} & \text{if } \omega_0 > k_2
\end{cases}
$$
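These roots are easy to verify numerically with numpy.roots; the values of $\omega_0$ and $k_2$ below are arbitrary illustrations.

```python
import numpy as np

w0, k2 = 1.0, 0.4   # illustrative values (here omega_0 > k_2)
# lambda^4 + 2 k2 lambda^3 + 2 w0^2 lambda^2 + 2 k2 w0^2 lambda + w0^4 = 0
coeffs = [1.0, 2 * k2, 2 * w0**2, 2 * k2 * w0**2, w0**4]
print(np.roots(coeffs))
# expected: +-1j*w0 and -k2 +- 1j*sqrt(w0**2 - k2**2)
```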
Therefore, in order to prove the asymptotic stability in this case, we should use the linear nondegenerate
transformation x = Sy, where
$$
S = \frac{1}{2}
\begin{pmatrix}
1 & 0 & 1 & 0\\
-1 & 0 & 1 & 0\\
0 & \omega_0 & 0 & 1\\
0 & -\omega_0 & 0 & 1
\end{pmatrix}
$$
In these new coordinates, system (5) has the form
$$
\begin{cases}
\dot{y}_1 = \omega_0 y_2\\
\dot{y}_2 = -\omega_0 y_1 - k_1 y_1^2 y_2
\end{cases}
\tag{7}
$$
$$
\begin{cases}
\dot{y}_3 = y_4\\
\dot{y}_4 = -\omega_0^2 y_3 - 2 k_2 y_4 - k_1 \omega_0 y_1 y_2 y_3
\end{cases}
\tag{8}
$$
Subsystem (7) coincides with the auxiliary system if $u_1 = \nu_1\, \widetilde{\operatorname{sign}}(x_0 \dot{x}_0)$; its trivial solution, therefore, is asymptotically stable. Subsystem (8) has the following matrix form:
$$
\dot{y} = \left(A + B(t)\right) y
$$
Here $A = \begin{pmatrix} 0 & 1\\ -\omega_0^2 & -2 k_2 \end{pmatrix}$ is a Hurwitz matrix and $B = \begin{pmatrix} 0 & 0\\ -k_1 \omega_0 y_1(t) y_2(t) & 0 \end{pmatrix}$. All the elements of the matrix $B$ tend to zero asymptotically for arbitrary initial conditions. Therefore, using the Bellman--Gronwall lemma, we can show that all the solutions of subsystem (8) tend to zero asymptotically as well. Thus, the trivial solution of system (5) is asymptotically stable, i.e., the problem of stabilizing the original system (1) is solved. If we have measuring instruments that register precise information on all the coordinates $x_1$, $x_2$, $\dot{x}_1$, $\dot{x}_2$, then we can construct a closed system whose mathematical model is of the form
$$
\begin{aligned}
\ddot{x}_1 + \left[\omega_0^2 + \nu_1\, \widetilde{\operatorname{sign}}\!\left((x_1 - x_2)(\dot{x}_1 - \dot{x}_2)\right)\right] x_1 &= -\nu_2\, \widetilde{\operatorname{sign}}(\dot{x}_1 + \dot{x}_2)\\
\ddot{x}_2 + \left[\omega_0^2 + \nu_1\, \widetilde{\operatorname{sign}}\!\left((x_1 - x_2)(\dot{x}_1 - \dot{x}_2)\right)\right] x_2 &= -\nu_2\, \widetilde{\operatorname{sign}}(\dot{x}_1 + \dot{x}_2)
\end{aligned}
$$
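As an illustration, this closed system can be integrated directly. In the sketch below the parameter values and initial conditions are arbitrary assumptions, and the ideal sign functions are replaced by the saturated version with a small width δ; in such experiments both coordinates and both velocities are expected to decay to zero, as the analysis above predicts.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0 = 1.0                          # common frequency omega_0 (illustrative)
nu1, nu2, delta = 0.3, 0.5, 0.05  # control bounds and saturation width (illustrative)

def sat_sign(s, nu):
    # nu * sign~(s) with saturation width delta
    return np.clip(nu * s / delta, -nu, nu)

def rhs(t, state):
    x1, x2, v1, v2 = state
    u1 = sat_sign((x1 - x2) * (v1 - v2), nu1)   # parametric control
    u2 = -sat_sign(v1 + v2, nu2)                # additive control
    return [v1, v2, -(w0**2 + u1) * x1 + u2, -(w0**2 + u1) * x2 + u2]

sol = solve_ivp(rhs, (0, 500), [1.0, -0.4, 0.0, 0.2], rtol=1e-9, atol=1e-9)
print(np.abs(sol.y[:, -1]))  # coordinates and velocities decay toward zero
```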
Combining the cases $\omega_1 \ne \omega_2$ and $\omega_1 = \omega_2$, we obtain a complete solution to the problem of stabilizing system (1).
2. The case of n degrees of freedom (n > 2). Let us consider the controllable bilinear oscillatory system with $n$ degrees of freedom ($n > 2$):
$$
\begin{cases}
\ddot{x}_i + \left(\omega_i^2 + u_1\right) x_i = u_2, \quad i = 1, \dots, n\\[2pt]
u_j(\cdot) \in U = \left\{\, u_j(\cdot) \in KC \mid |u_j| \le \nu_j < \infty,\ j = 1, 2 \,\right\}\\[2pt]
\omega_i^2 - \nu_1 > 0
\end{cases}
\tag{9}
$$
Our aim is to formulate conditions under which the trivial solution of system (9) is asymptotically stable. As
before, the system is under the action of the parametric control u 1 and the additive control u 2 .
2.1. The case of unequal frequencies. Let us consider the case when the parametric control is absent
and all the frequencies are distinct. Then, the following lemma is valid:
Lemma [1]. The controllable system
$$
\begin{cases}
\ddot{x} + \Omega x = \mathbf{1}\, u_2\\[2pt]
u_2 = K^T \dot{x}
\end{cases}
\qquad
\Omega =
\begin{pmatrix}
\omega_1^2 & 0 & \cdots & 0\\
0 & \omega_2^2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \omega_n^2
\end{pmatrix},
\quad
\mathbf{1} =
\begin{pmatrix} 1\\ 1\\ \vdots\\ 1 \end{pmatrix}
\tag{10}
$$
is stabilizable when $\omega_i^2 \ne \omega_j^2$ $\forall\, i \ne j$, $i, j = 1, \dots, n$, i.e., there exists an $(n \times 1)$ matrix $K$ such that the trivial solution to the equation $\ddot{x} + \Omega x = \mathbf{1} K^T \dot{x}$ is asymptotically stable.
Since the controls under consideration are bounded, the next theorem formulates a condition under which the stabilization problem for system (10) can be solved:
Theorem. If system (10) is subjected to the action of the control
$$
u_2 = -\nu_2\, \widetilde{\operatorname{sign}}\left(\mathbf{1}^T \dot{x}\right) =
\begin{cases}
-\dfrac{\nu_2}{\kappa}\, \mathbf{1}^T \dot{x} & \text{if } \left|\mathbf{1}^T \dot{x}\right| \le \kappa\\[6pt]
-\nu_2 \operatorname{sign}\left(\mathbf{1}^T \dot{x}\right) & \text{if } \left|\mathbf{1}^T \dot{x}\right| > \kappa
\end{cases}
$$
then its trivial solution is asymptotically stable.
2.2. The case when two frequencies coincide. Let us consider system (9) for the case when $\omega_1^2 = \omega_2^2 = \omega_0^2 \ne \omega_j^2$, $j = 3, 4, \dots, n$, and $\omega_i^2 \ne \omega_j^2$ $\forall\, i \ne j$, $i, j = 3, \dots, n$.
As was shown above in the case n = 2, for the stabilization we should use a parametric control u 1 . In order
to apply this approach to system (9) for $n > 2$, we rewrite (9) in the form
$$
\begin{cases}
\ddot{x}_1 + \left(\omega_0^2 + u_1\right) x_1 = u_2\\
\ddot{x}_2 + \left(\omega_0^2 + u_1\right) x_2 = u_2\\
\ddot{x}_3 + \left(\omega_3^2 + u_1\right) x_3 = u_2\\
\quad\vdots\\
\ddot{x}_n + \left(\omega_n^2 + u_1\right) x_n = u_2\\[2pt]
u_i(\cdot) \in U = \left\{\, u_i \in KC \mid |u_i(t)| \le \nu_i < \infty,\ i = 1, 2 \,\right\}
\end{cases}
$$
Introducing the new coordinate $x_0 = x_1 - x_2$, we can decompose the above system into the two subsystems
$$
\text{(I)}\quad
\ddot{x}_0 + \left(\omega_0^2 + u_1\right) x_0 = 0
\qquad\qquad
\text{(II)}\quad
\begin{cases}
\ddot{x}_2 + \left(\omega_0^2 + u_1\right) x_2 = u_2\\
\ddot{x}_3 + \left(\omega_3^2 + u_1\right) x_3 = u_2\\
\quad\vdots\\
\ddot{x}_n + \left(\omega_n^2 + u_1\right) x_n = u_2
\end{cases}
\tag{11}
$$
The following theorem holds:
Theorem. System (11) is locally stabilizable if the controls
$$
u_1 = \nu_1\, \widetilde{\operatorname{sign}}\!\left((x_1 - x_2)(\dot{x}_1 - \dot{x}_2)\right), \qquad
u_2 = -\nu_2\, \widetilde{\operatorname{sign}}\,(\dot{x}_2 + \dot{x}_3 + \dots + \dot{x}_n)
$$
are used.
2.3. The case when three or more frequencies coincide. In order to prove that under this condition
the system is not stabilizable, we first consider the case when n = 3 and the parametric control is absent.
Denoting $\omega_1^2 = \omega_2^2 = \omega_3^2 = \omega_0^2$, we rewrite the original system in the form
$$
\begin{cases}
\ddot{x}_1 + \omega_0^2 x_1 = u_2\\
\ddot{x}_2 + \omega_0^2 x_2 = u_2\\
\ddot{x}_3 + \omega_0^2 x_3 = u_2
\end{cases}
\tag{12}
$$
This system can be written in matrix form as
$$
\dot{x} = \frac{d}{dt}
\begin{pmatrix} x_1\\ x_2\\ x_3\\ \dot{x}_1\\ \dot{x}_2\\ \dot{x}_3 \end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1\\
-\omega_0^2 & 0 & 0 & 0 & 0 & 0\\
0 & -\omega_0^2 & 0 & 0 & 0 & 0\\
0 & 0 & -\omega_0^2 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3\\ \dot{x}_1\\ \dot{x}_2\\ \dot{x}_3 \end{pmatrix}
+
\begin{pmatrix} 0\\ 0\\ 0\\ 1\\ 1\\ 1 \end{pmatrix} u_2
$$
The controllability matrix takes the form
$$
C =
\begin{pmatrix}
0 & 1 & 0 & -\omega_0^2 & 0 & \omega_0^4\\
0 & 1 & 0 & -\omega_0^2 & 0 & \omega_0^4\\
0 & 1 & 0 & -\omega_0^2 & 0 & \omega_0^4\\
1 & 0 & -\omega_0^2 & 0 & \omega_0^4 & 0\\
1 & 0 & -\omega_0^2 & 0 & \omega_0^4 & 0\\
1 & 0 & -\omega_0^2 & 0 & \omega_0^4 & 0
\end{pmatrix}
$$
The first two columns of this matrix specify the controllable subspace for system (12). Hence, we can use the
change of coordinates
$$
x =
\begin{pmatrix}
1 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 1 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 1 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} y_1\\ y_2\\ y_3\\ y_4\\ y_5\\ y_6 \end{pmatrix}
$$
to decompose system (12) into the following two subsystems:
$$
\text{(I)}\quad
\begin{cases}
\dot{y}_1 = y_2\\
\dot{y}_2 = -\omega_0^2 y_1\\
\dot{y}_3 = y_4\\
\dot{y}_4 = -\omega_0^2 y_3
\end{cases}
\qquad\qquad
\text{(II)}\quad
\begin{cases}
\dot{y}_5 = y_6\\
\dot{y}_6 = -\omega_0^2 y_5 + u_2
\end{cases}
$$
It can be concluded from here that subsystem (II) is completely controllable, whereas subsystem (I) is not
controllable.
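This controllability defect is also easy to confirm numerically: the Kalman controllability matrix of system (12) has rank 2 out of 6 for any $\omega_0 \ne 0$. A minimal check (the value of $\omega_0$ is an arbitrary illustration):

```python
import numpy as np

w0 = 1.3   # any nonzero value (illustrative)
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-w0**2 * np.eye(3), np.zeros((3, 3))]])
b = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Kalman controllability matrix C = [b, A b, ..., A^5 b]
C = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(6)])
print(np.linalg.matrix_rank(C))   # prints 2: a two-dimensional controllable subspace
```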
The same reasoning can be used in the case $n > 3$ when at least three frequencies coincide. First, the system is decomposed with a transformation similar to that considered above, and then it is demonstrated that no stabilization can be reached with the help of any parametric control.
Theorem. Let us consider the bilinear system with $n$ degrees of freedom ($n \ge 3$)
$$
\begin{cases}
\ddot{x}_1 + \left(\omega_0^2 + u_1\right) x_1 = u_2\\
\ddot{x}_2 + \left(\omega_0^2 + u_1\right) x_2 = u_2\\
\ddot{x}_3 + \left(\omega_0^2 + u_1\right) x_3 = u_2\\
\ddot{x}_4 + \left(\omega_4^2 + u_1\right) x_4 = u_2\\
\quad\vdots\\
\ddot{x}_n + \left(\omega_n^2 + u_1\right) x_n = u_2
\end{cases}
$$
and with three equal frequencies. Then the system is not stabilizable with any controls $u_1$ and $u_2$.
Proof. By introducing the new coordinates $z_1 = x_1 - x_3$ and $z_2 = x_2 - x_3$, the above system is decomposed into the two subsystems:
$$
\text{(I)}\quad
\begin{cases}
\ddot{z}_1 + \left(\omega_0^2 + u_1\right) z_1 = 0\\
\ddot{z}_2 + \left(\omega_0^2 + u_1\right) z_2 = 0
\end{cases}
\qquad\qquad
\text{(II)}\quad
\begin{cases}
\ddot{x}_3 + \left(\omega_0^2 + u_1\right) x_3 = u_2\\
\ddot{x}_4 + \left(\omega_4^2 + u_1\right) x_4 = u_2\\
\quad\vdots\\
\ddot{x}_n + \left(\omega_n^2 + u_1\right) x_n = u_2
\end{cases}
$$
In the subspace for system (I), we define the function
$$
\varphi(t) = z_1 \dot{z}_2 - z_2 \dot{z}_1
$$
The time derivative of this function is equal to zero:
$$
\frac{d\varphi(t)}{dt} \equiv 0
$$
Hence, $\varphi(t) \equiv \text{const}$. Using the polar coordinate system, we can write
$$
z_1 = R_1(t) \cos\theta_1(t), \qquad \dot{z}_1 = R_1(t) \sin\theta_1(t),
$$
$$
z_2 = R_2(t) \cos\theta_2(t), \qquad \dot{z}_2 = R_2(t) \sin\theta_2(t),
$$
and
$$
\varphi(t) = R_1(t) R_2(t) \sin\!\left(\theta_2(t) - \theta_1(t)\right) \equiv \text{const}
$$
Therefore, $R_1(t) R_2(t) \ge \text{const}$ for any $u_1$. Hence, in the above subspace it is impossible to stabilize the system.
The theorem is proved.
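The conservation of $\varphi$ under an arbitrary admissible parametric control can also be observed numerically. In the sketch below, the particular control $u_1(t)$, the frequency, and the initial data are arbitrary illustrations; $\varphi(t)$ stays at its initial value up to integration accuracy, so the product $R_1 R_2$ stays bounded away from zero.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, nu1 = 1.0, 0.3   # illustrative values with omega_0^2 - nu_1 > 0

def u1(t):
    return nu1 * np.sin(0.7 * t)   # an arbitrary admissible parametric control

def rhs(t, s):
    z1, z2, dz1, dz2 = s
    a = w0**2 + u1(t)
    return [dz1, dz2, -a * z1, -a * z2]

sol = solve_ivp(rhs, (0, 100), [1.0, 0.2, 0.0, 0.5], rtol=1e-10, atol=1e-10)
z1, z2, dz1, dz2 = sol.y
phi = z1 * dz2 - z2 * dz1
print(phi[0], phi[-1])   # phi(t) = z1*dz2' - z2*dz1' is conserved
```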
REFERENCES
1. V.V. Alexandrov, S.I. Zlochevskii, S.S. Lemak, and N.A. Parusnikov, Introduction to Dynamics of Control
Systems (in Russian), Moscow, 1993.