[Document retrieved from a search-engine cache. Original document: http://old.philol.msu.ru/~lex/khmelev/local/Report445.ps. Last modified: Thu Oct 17 00:00:00 2002. Indexed: Mon Oct 1 21:45:19 2012.]
Mean-field approximation for stochastic transportation network and stability of dynamical system

D.V. Khmelev, V.I. Oseledets

Faculty of Mechanics and Mathematics, Chair of Probability,
Moscow State University, RUSSIA
E-mail: oseled@mech.math.msu.su, dima@vvv.srcc.msu.su
Abstract

A queuing system is considered, with a virtual node, N nodes, and rN servers. At each of the N nodes the arrivals of particles form a Poisson flow of rate $\lambda(t)$. A particle arriving at an empty node leaves the system; otherwise a server at the node takes the particle and moves to a random node. The travelling time is exponential with mean 1. The number of servers at each of the N nodes is bounded by m.

We discuss the stability of the limiting deterministic process as $N \to \infty$. Furthermore, we consider a particular queuing system with dynamical routing.
1 Introduction
Consider a network consisting of N nodes, a virtual node, and rN servers. At each of the N nodes the arrivals of particles form a Poisson flow of rate $\lambda(t)$. If a particle arrives at an empty node, the particle leaves the system. Otherwise, if there is a server at the node, the server takes the particle and jumps to the virtual node. At this node the server waits for an exponential time of mean 1. Afterwards the server jumps to a random node chosen with uniform distribution. If the number of servers at the chosen node equals m, the server waits at the virtual node for the next attempt. Thus the number of servers at each node (except the virtual one) is non-negative and bounded by m. Consider the fractions $f_k = n_k/N$ and $V = W/N$, where $n_k$ is the (random) number of nodes with k servers and W is the number of servers at the virtual node. It is more convenient to work with the tail probabilities $u_k = \sum_{i=k}^{m} f_i$.

The state space of the corresponding Markov process $U_N(t) = (u_k(t), V(t))$ is the set $X_N$ of all vectors $u = (u_1, \ldots, u_m, V)^T$ in $(1/N)\mathbb{Z}_+^{m+1}$ such that

$$1 = u_0 \ge u_1 \ge \ldots \ge u_m \ge 0, \qquad V \ge 0, \qquad u_1 + \ldots + u_m + V = r. \eqno(1.1)$$
The generator of $U_N(t)$ is the operator $A_N(t)$ acting on functions by

$$A_N(t)f(u) = N\lambda(t)\sum_{k=1}^{m-1}(u_k - u_{k+1})\Big[f\Big(u - \tfrac{e_k}{N} + \tfrac{e_{m+1}}{N}\Big) - f(u)\Big] + N\lambda(t)\,u_m\Big[f\Big(u - \tfrac{e_m}{N} + \tfrac{e_{m+1}}{N}\Big) - f(u)\Big] + NV\sum_{k=1}^{m}(u_{k-1} - u_k)\Big[f\Big(u + \tfrac{e_k}{N} - \tfrac{e_{m+1}}{N}\Big) - f(u)\Big], \eqno(1.2)$$

where $e_k$ denotes the vector whose k-th component equals 1 and whose other components equal 0.
The mean-field approximation suggests that the whole process $U_N(t)$ is asymptotically deterministic as $N \to \infty$. More precisely, let X denote the set of all vectors in $\mathbb{R}^{m+1}$ defined by (1.1). If the distribution of the initial state $U_N(0)$ converges to the Dirac delta-measure concentrated at some point $g \in X$, then as $N \to \infty$ the distribution of $U_N(t)$ is concentrated on the orbit $u(t) \in X$, where $u(t)$ is the solution of the following system of differential equations (the mean-field equations):

$$\dot u = f(u(t), \lambda(t)), \eqno(1.3)$$

where

$$\begin{aligned}
f_i(u, \lambda) &= \lambda(u_{i+1} - u_i) + V(u_{i-1} - u_i), \quad i = 1, \ldots, m-1, \quad u_0 = 1,\\
f_m(u, \lambda) &= -\lambda u_m + V(u_{m-1} - u_m),\\
f_{m+1}(u, \lambda) &= \lambda u_1 - V(1 - u_m).
\end{aligned} \eqno(1.3')$$

System (1.3) is non-linear and its right-hand side depends on time. We study system (1.3) with the help of Theorem 3.3, which is perhaps the most remarkable result of the present paper; it is of both practical and theoretical interest.
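As a numerical aside (ours, not part of the original paper; the values m = 3, r = 2, λ = 1 and the step size are illustrative choices), system (1.3)-(1.3') can be integrated with a classical Runge-Kutta scheme. The sketch below starts from two initial states in X and checks two properties established below: the sum $u_1 + \ldots + u_m + V$ is conserved, and the two trajectories approach each other exponentially (Theorem 2.1).

```python
import numpy as np

def mean_field_rhs(x, lam):
    """Right-hand side f(u, lambda) of (1.3'); x = (u_1, ..., u_m, V), u_0 = 1."""
    m = len(x) - 1
    u, V = x[:m], x[m]
    dx = np.empty_like(x)
    for i in range(m):
        u_prev = 1.0 if i == 0 else u[i - 1]       # boundary condition u_0 = 1
        u_next = u[i + 1] if i < m - 1 else 0.0    # yields f_m = -lam*u_m + V(...)
        dx[i] = lam * (u_next - u[i]) + V * (u_prev - u[i])
    dx[m] = lam * u[0] - V * (1.0 - u[m - 1])      # f_{m+1}, the V-equation
    return dx

def rk4(x0, lam, t_end, dt=0.01):
    """Classical 4th-order Runge-Kutta integration with a constant step."""
    x = np.array(x0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        k1 = mean_field_rhs(x, lam)
        k2 = mean_field_rhs(x + 0.5 * dt * k1, lam)
        k3 = mean_field_rhs(x + 0.5 * dt * k2, lam)
        k4 = mean_field_rhs(x + dt * k3, lam)
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x

if __name__ == "__main__":
    # two initial states in X with m = 3, r = 2, so u_1 + u_2 + u_3 + V = 2
    x1 = rk4([0.9, 0.5, 0.1, 0.5], lam=1.0, t_end=50.0)
    x2 = rk4([0.6, 0.4, 0.2, 0.8], lam=1.0, t_end=50.0)
    print(np.abs(x1 - x2).sum())   # far smaller than the initial distance 0.8
```

Runge-Kutta schemes preserve linear first integrals exactly, so the conservation check holds up to rounding error.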
2 Results
By definition, put $\|u\| = |u_1| + \ldots + |u_m| + |V|$. Let $\lambda(t)$ denote a function of the variable $t \in \mathbb{R}$ such that

$$\inf_{t \ge \tau} \lambda(t) = \bar\lambda > 0, \qquad \sup_{t \ge \tau} \lambda(t) < \infty. \eqno(2.1)$$

Theorem 2.1. Suppose the function $\lambda(t)$ is piecewise continuous; then

(a) For any $g \in X$ and $\tau \in \mathbb{R}$ there exists a unique solution $u(t, \tau, g)$ of the Cauchy problem for (1.3) such that $u(\tau, \tau, g) = g$. For any $t \ge \tau$ the solution satisfies $u(t, \tau, g) \in X$.

(b) There exists a positive number $\gamma$ such that for all $g, g' \in X$ and all $t \ge \tau$

$$\|u(t, \tau, g) - u(t, \tau, g')\| \le \mathrm{const} \cdot \exp(-\gamma(t - \tau)).$$

(c) Now suppose that $\lambda(t)$ is T-periodic. Then there exists a unique point $g^* \in X$ such that $u(t, \tau, g^*)$ is a T-periodic solution, and for all $g \in X$

$$\|u(t, \tau, g) - u(t, \tau, g^*)\| \le \mathrm{const} \cdot \exp(-\gamma(t - \tau)).$$

(d) Further assume that the function $\lambda(t)$ is constant. Then there exists a unique point $g^* \in X$ such that $u(t, \tau, g^*) = g^*$, and for all $g \in X$

$$\|u(t, \tau, g) - g^*\| \le \mathrm{const} \cdot \exp(-\gamma(t - \tau)).$$
The point $g^*$ of assertion (d) is easily computed (see [1]). Let $T_N(t, \tau)$ be the family of operators such that

$$T_N(t, \tau)f(g) = \mathbf{E}\big(f(U_N(t)) \mid U_N(\tau) = g\big), \qquad g \in X_N.$$

Theorem 2.2. Suppose the function $\lambda(t)$ is piecewise constant; then for all $f \in C(X)$

$$\lim_{N \to \infty} \sup_{g \in X_N} |T_N(t, \tau)f(g) - f(u(t, \tau, g))| = 0. \eqno(2.2)$$

The last limit is uniform in $t \in I$ whenever the interval $I \subset \{t \ge \tau\}$ is bounded.
Let $\delta_g$ be the measure on X such that $\delta_g(\{g\}) = 1$ and $\delta_g(X \setminus \{g\}) = 0$.

Theorem 2.3. Suppose the function $\lambda(t)$ is piecewise constant and $U_N(\tau) \to \delta_g$ weakly; then for every t

$$\sup_{0 \le s \le t} \|U_N(s) - u(s, \tau, g)\| \to 0 \quad \text{in probability.}$$

Theorem 2.4. Suppose the function $\lambda(t)$ is constant. Let $\mu_N$ be the unique stationary distribution of the Markov process $U_N$; then $\mu_N \to \delta_{g^*}$ weakly.
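Theorems 2.2 and 2.3 can be illustrated by simulating the original N-node Markov chain and comparing it with the mean-field trajectory. The sketch below (ours; N, m, r, λ, the time horizon, and the Euler step are illustrative choices) runs a Gillespie simulation. A virtual server's attempt at a full node has no effect, so successful landings on class-j nodes form a thinned process of rate $W n_j / N$ for $j < m$, matching the generator (1.2).

```python
import numpy as np

def simulate(N, m, r, lam, t_end, seed=0):
    """Gillespie simulation of the N-node network.

    n[j] = number of nodes holding exactly j servers, W = servers at the
    virtual node.  Returns the tail fractions u_k (k = 1..m) and V = W/N.
    """
    rng = np.random.default_rng(seed)
    n = np.zeros(m + 1, dtype=int)
    n[0] = N                     # initially every node is empty ...
    W = r * N                    # ... and all servers sit at the virtual node
    t = 0.0
    while t < t_end:
        dep = lam * n[1:]        # particle arrival at a node with k >= 1 servers
        land = W * n[:m] / N     # virtual server lands at a node with j < m servers
        total = dep.sum() + land.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        x = rng.uniform(0.0, total)
        if x < dep.sum():
            k = 1 + np.searchsorted(np.cumsum(dep), x)
            n[k] -= 1; n[k - 1] += 1; W += 1
        else:
            j = np.searchsorted(np.cumsum(land), x - dep.sum())
            n[j] -= 1; n[j + 1] += 1; W -= 1
    u = np.cumsum(n[::-1])[::-1][1:] / N    # u_k = (n_k + ... + n_m)/N
    return u, W / N

def mean_field(m, r, lam, t_end, dt=0.002):
    """Forward-Euler integration of the mean-field equations (1.3')."""
    u, V = np.zeros(m), float(r)
    for _ in range(int(t_end / dt)):
        du = np.empty(m)
        for i in range(m):
            u_prev = 1.0 if i == 0 else u[i - 1]
            u_next = u[i + 1] if i < m - 1 else 0.0
            du[i] = lam * (u_next - u[i]) + V * (u_prev - u[i])
        dV = lam * u[0] - V * (1.0 - u[m - 1])
        u, V = u + dt * du, V + dt * dV
    return u, V

if __name__ == "__main__":
    u_sim, V_sim = simulate(N=2000, m=3, r=2, lam=1.0, t_end=4.0, seed=1)
    u_ode, V_ode = mean_field(m=3, r=2, lam=1.0, t_end=4.0)
    print(u_sim, V_sim)
    print(u_ode, V_ode)
```

For N in the thousands the empirical tail fractions stay within a few percent of the deterministic trajectory, in line with the $N^{-1/2}$ fluctuation scale.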
3 Auxiliary statements
3.1 Coefficient of ergodicity
Consider the linear space $\mathbb{R}^n$ equipped with the norm $\|x\| = |x_1| + \ldots + |x_n|$, $x \in \mathbb{R}^n$. Let $\{e_k\}$ be the set of n unit vectors. The space of linear maps $A : \mathbb{R}^n \to \mathbb{R}^n$ is equipped with the norm

$$\|A\| = \sup_{x \in \mathbb{R}^n,\, \|x\| = 1} \|Ax\| = \sup_i \|Ae_i\|.$$

By definition, put $\mathbb{R}_+ = \{x \in \mathbb{R} \mid x \ge 0\}$. A linear map $A : \mathbb{R}^n \to \mathbb{R}^n$ is Markov if $A\mathbb{R}_+^n \subseteq \mathbb{R}_+^n$ and $(1, \ldots, 1)A = (1, \ldots, 1)$. Consider the linear subspace $L = \{x \in \mathbb{R}^n \mid x_1 + \ldots + x_n = 0\}$. If the linear map A is Markov, then the following norm is well defined:

$$\|A\|_L = \sup_{x \in L,\, \|x\| = 1} \|Ax\| = \max_{i,j} \|A(e_i - e_j)\|/2. \eqno(3.1)$$

The number $k(A) = \|A\| - \|A\|_L$ is called the coefficient of ergodicity. If $B = bA$ with $b \ge 0$ and the map A is Markov, then $k(B) = b\,k(A)$.
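The coefficient of ergodicity is straightforward to compute from (3.1), since the extreme points of the unit ball of L are the vectors $(e_i - e_j)/2$. A minimal numerical sketch (ours, not part of the original paper):

```python
import numpy as np
from itertools import combinations

def ergodicity_coefficient(A):
    """k(A) = ||A|| - ||A||_L for the l1 operator norm, following (3.1).

    ||A||   = max_i ||A e_i||_1 (the maximum column sum of |A|);
    ||A||_L = max_{i,j} ||A (e_i - e_j)||_1 / 2, since the (e_i - e_j)/2 are
              the extreme points of the unit ball of L = {x : sum x_i = 0}.
    """
    A = np.asarray(A, dtype=float)
    norm_A = np.abs(A).sum(axis=0).max()
    n = A.shape[1]
    norm_L = max(np.abs(A[:, i] - A[:, j]).sum() / 2.0
                 for i, j in combinations(range(n), 2))
    return norm_A - norm_L

if __name__ == "__main__":
    A = np.array([[0.5, 0.25],
                  [0.5, 0.75]])       # column-stochastic, hence a Markov map
    print(ergodicity_coefficient(A))  # 0.75
```

For a Markov map $\|A\| = 1$, so $k(A) > 0$ exactly when A strictly contracts the subspace L; for the identity map $k(I) = 0$.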
A linear map A is non-negative (respectively, positive) if $A\mathbb{R}_+^n \subseteq \mathbb{R}_+^n$ (respectively, $A(\mathbb{R}_+^n \setminus \{0\}) \subset \mathrm{Int}(\mathbb{R}_+^n)$). We say that $A \ge B$ if $A - B$ is non-negative ($A > B$ if $A - B$ is positive).

The following two lemmas are needed in the sequel.

Lemma 3.1. If $A \ge B \ge 0$ and $C \ge D \ge 0$, then $AC \ge BD$.

Summing the inequalities $(A - B)C \ge 0$ and $B(C - D) \ge 0$, we get the proof. The application of Lemma 3.1 and induction yields the following remark.

Remark. $A_n \cdots A_1 \ge B^n$ if $A_1 \ge B \ge 0$, ..., $A_n \ge B \ge 0$.

Lemma 3.2. (i) If $A > B \ge 0$, then $k(A) > k(B)$. (ii) If $A \ge B \ge 0$, then $k(A) \ge k(B)$.
Proof. Consider case (i). Here $a > b$, where $a = \|A\|$ and $b = \|B\|$. We have

$$\|A\|_L \le \|A - B\|_L + \|B\|_L.$$

Using (3.1), we get

$$\|A - B\|_L < \|A - B\| = a - b = \|A\| - \|B\|.$$

Finally, we obtain

$$\|A\|_L < \|A\| - \|B\| + \|B\|_L,$$

whence $k(A) > k(B)$. Case (ii) is analogous.
3.2 Stability of a dynamical system
Consider a system of differential equations

$$\dot x = f(x, z(t)), \qquad x \in \mathbb{R}^n, \quad z(t) \in \Omega. \eqno(3.2)$$

Let $x(t, t_0, g)$ be the unique solution of system (3.2) such that $x(t_0, t_0, g) = g$. Let $J(x, z)$ be the Jacobi matrix of the vector field $f(x, z)$:

$$J(x, z) = \partial f/\partial x = (\partial f_i/\partial x_j).$$

By definition, put

$$\alpha(z) = -\min_i \min_{x \in X} J_{ii}(x, z), \qquad X \subset \mathbb{R}^n. \eqno(3.3)$$

Suppose system (3.2) satisfies the following assumption: the function $\sum_{i=1}^n x_i$ is a first integral of system (3.2), i.e., $\sum_{i=1}^n f_i(x, z) = 0$. It follows that $J(x, z)L \subseteq L$. Further assume that the map $\exp(tJ)$ is Markov for $t \ge 0$, $x \in X$, and $z \in \Omega$.

Suppose X is a convex compact subset of a plane $\{\sum_i x_i = \mathrm{const}\}$ (a translate of L) and X is invariant with respect to the dynamical system (3.2). Now suppose there exists a matrix $B \ge 0$ such that the following conditions hold:

(a) $J(x, z) + \alpha(z)I \ge B$ for all $z \in \Omega$ and $x \in X$, where I is the identity matrix and $B \ge 0$ is constant;

(b) there exists a number $n_0 \in \mathbb{N}$ such that $k((I + B)^{n_0}) > 0$.
Theorem 3.3. The map $x(t_0 + \tau, t_0, \cdot) : X \to X$ is contracting for $t_0 \in \mathbb{R}$ and $\tau > 0$. If

$$\sup_{t_0} \exp\left(\int_{t_0}^{t_0 + \tau} \alpha(z(t))\,dt\right) = C(\tau) < \infty,$$

then the contraction coefficient is bounded by some $q < 1$ uniformly with respect to $t_0$.
Proof. Let $\Phi(t_0 + \tau, t_0, g) = \partial x(t_0 + \tau, t_0, g)/\partial g$ be the Jacobi matrix of the map $x(t_0 + \tau, t_0, \cdot) : X \to X$. The map $\Phi$ is Markov. Consider

$$\Psi(\tau) = \Phi \exp\left(\int_{t_0}^{t_0 + \tau} \alpha(z(t))\,dt\right)$$

and

$$A(\tau) = J\big(x(t_0 + \tau, t_0, g), z(t)\big) + \alpha(z(t))I.$$

Using the inequality $A(\tau) \ge B$, the equation $\dot\Psi = A(\tau)\Psi(\tau)$, $\Psi(0) = I$, and Lemma 3.1, we get the estimate

$$\Psi(\tau) \ge \exp(\tau B) \ge \frac{\tau^{n_0} \exp(-\tau)}{n_0!}\,(I + B)^{n_0}.$$

Taking into account Lemma 3.2, we obtain

$$k(\Psi(\tau)) \ge \frac{\tau^{n_0} \exp(-\tau)}{n_0!}\,k((I + B)^{n_0}) > 0.$$

Since

$$k(\Phi) = k(\Psi) \exp\left(-\int_{t_0}^{t_0 + \tau} \alpha(z(t))\,dt\right)$$

and the map $\Phi$ is Markov, we see that $\sup_{g \in X} \|\Phi\|_L < 1$. It can be proved that under the condition $C(\tau) < \infty$ this estimate is uniform in $t_0$.
Finally, by definition, put

$$\gamma(s) = (1 - s)g_1 + s g_2, \qquad 0 \le s \le 1,$$

where $g_1, g_2 \in X$. Note that $\gamma'(s) = g_2 - g_1 \in L$ and

$$\|x(t_0 + \tau, t_0, g_2) - x(t_0 + \tau, t_0, g_1)\| \le \int_0^1 \big\|\Phi\big(t_0 + \tau, t_0, \gamma(s)\big)\gamma'(s)\big\|\,ds \le \sup_{g \in X} \|\Phi\|_L \int_0^1 \|\gamma'(s)\|\,ds = \sup_{g \in X} \|\Phi\|_L\,\|g_2 - g_1\|.$$

This concludes the proof.
4 Proofs of Theorems 2.1-2.4

Proof of Theorem 2.1. Consider the Jacobi matrix of the vector field (1.3):

$$J(u, \lambda) = \begin{pmatrix}
-\beta & \lambda & 0 & \cdots & 0 & 0 & (1 - u_1)\\
V & -\beta & \lambda & \cdots & 0 & 0 & (u_1 - u_2)\\
0 & V & -\beta & \cdots & 0 & 0 & (u_2 - u_3)\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & -\beta & \lambda & (u_{m-2} - u_{m-1})\\
0 & 0 & 0 & \cdots & V & -\beta & (u_{m-1} - u_m)\\
\lambda & 0 & 0 & \cdots & 0 & V & -(1 - u_m)
\end{pmatrix},$$

where $\beta = \lambda + V$.

Let us show that the conditions of Theorem 3.3 hold. First note that the set X is a convex compact subset of $\mathbb{R}^{m+1}$. Next note that the map $\exp(tJ)$ is Markov. We get

$$\alpha(\lambda) = -\min_i \min_{x \in X} J_{ii}(x, \lambda) = \max(\lambda + r,\; 1 + r/m).$$
Further note that $J(x, \lambda(t)) + \alpha(\lambda(t))I \ge B$, where

$$B = \begin{pmatrix}
0 & \bar\lambda & 0 & \cdots & 0 & 0 & 0\\
0 & 0 & \bar\lambda & \cdots & 0 & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 0 & \bar\lambda & 0\\
0 & 0 & 0 & \cdots & 0 & 0 & 0\\
\bar\lambda & 0 & 0 & \cdots & 0 & 0 & \bar\lambda
\end{pmatrix}$$

and $\bar\lambda$ is given by (2.1). The matrix $(I + B)^m$ has the following structure:

$$(I + B)^m = \begin{pmatrix}
* & * & * & \cdots & * & * & 0\\
0 & * & * & \cdots & * & * & 0\\
0 & 0 & * & \cdots & * & * & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & * & * & 0\\
0 & 0 & 0 & \cdots & 0 & * & 0\\
* & * & * & \cdots & * & * & *
\end{pmatrix},$$

where each $*$ denotes a positive number. It follows that $k((I + B)^m) > 0$.

Since $\sum_{i=1}^m u_i + V$ is a first integral of system (1.3), the solution $u(t, \tau, g)$ moves along the plane $\sum_{i=1}^m u_i + V = r$.
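The positivity structure of $(I + B)^m$ and the bound $k((I + B)^m) > 0$ are easy to confirm numerically. A minimal sketch (ours, not part of the original paper; $\bar\lambda = 1$ and m = 4 are illustrative choices):

```python
import numpy as np
from itertools import combinations

def build_B(m, lam_bar=1.0):
    """The matrix B of Section 4: lam_bar on the superdiagonal of rows 1..m-1,
    an all-zero row m, and lam_bar in positions (m+1, 1) and (m+1, m+1)."""
    B = np.zeros((m + 1, m + 1))
    for i in range(m - 1):        # rows 1..m-1 (0-based 0..m-2)
        B[i, i + 1] = lam_bar
    B[m, 0] = lam_bar
    B[m, m] = lam_bar
    return B

def ergodicity_coefficient(A):
    """k(A) = ||A|| - ||A||_L with the l1 operator norm, as in (3.1)."""
    norm_A = np.abs(A).sum(axis=0).max()
    n = A.shape[1]
    norm_L = max(np.abs(A[:, i] - A[:, j]).sum() / 2.0
                 for i, j in combinations(range(n), 2))
    return norm_A - norm_L

if __name__ == "__main__":
    m = 4
    M = np.linalg.matrix_power(np.eye(m + 1) + build_B(m), m)
    print(M)                          # positive upper triangle, positive last row,
    print(ergodicity_coefficient(M))  # zero last column above the corner; k > 0
```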
Lemma 4.1. The set $\mathrm{Int}\,X$ is invariant with respect to (1.3).

Proof. Assume the converse. Then $u(t, \tau, g) = (u_1(t), \ldots, u_m(t), V(t))$ satisfies the inequalities

$$1 = u_0 > u_1 > u_2 > \ldots > u_m > 0, \qquad V > 0 \eqno(4.1)$$

for $g \in \mathrm{Int}\,X$ and $t \in [\tau, t_0)$, but at the moment $t_0$ some of the inequalities in (4.1) turn into equalities.

If $V(t_0) = 0$, then $\frac{dV}{dt}\big|_{t = t_0 - 0} = \lambda(t_0 - 0)\,u_1(t_0) \ge \lambda(t_0 - 0)\,r/m > 0$ (indeed, $u_1(t_0) \ge r/m$, since $\sum_i u_i(t_0) = r$ and $u_1$ is the largest of the $u_i$), and we arrive at a contradiction with $V(t_0) - V(t) < 0$ for $t < t_0$. Therefore $V(t_0) > 0$.

Using the virtual variable $u_{m+1} \equiv 0$, we rewrite $f_m$ in (1.3') as

$$f_m(u, \lambda) = \lambda(u_{m+1} - u_m) + V(u_{m-1} - u_m).$$

Since $1 = u_0 > u_{m+1} = 0$, only the following two cases are possible: 1) there exists i such that $u_{i-1}(t_0) = u_i(t_0) > u_{i+1}(t_0)$; 2) there exists j such that $u_{j-1}(t_0) > u_j(t_0) = u_{j+1}(t_0)$.

Consider case 1). Here $\frac{du_{i-1}}{dt}\big|_{t = t_0 - 0} = V(t_0)(u_{i-2}(t_0) - u_{i-1}(t_0)) \ge 0$ and $\frac{du_i}{dt}\big|_{t = t_0 - 0} = \lambda(t_0 - 0)(u_{i+1}(t_0) - u_i(t_0)) < 0$. Hence $\frac{d(u_{i-1} - u_i)}{dt}\big|_{t = t_0 - 0} > 0$, and we arrive at a contradiction with $[u_{i-1}(t_0) - u_i(t_0)] - [u_{i-1}(t) - u_i(t)] < 0$ for $t < t_0$. Case 2) is analogous. This concludes the proof. The same idea is used in [2, Lemma 1].
We claim that X is invariant with respect to (1.3). Indeed, this follows from Lemma 4.1 and the continuity of the map $u(t, \tau, \cdot)$.

Finally, note that the application of Theorem 3.3 yields the conclusions of Theorem 2.1.

Proof of Theorem 2.2. Without loss of generality it can be assumed that $\lambda = \mathrm{const}$, so that $T_N(t, \tau) = T_N(t - \tau)$. The theorem can then be proved by the standard methods used in [2].

The set $C(X)$ of all continuous functions on X is a Banach space with respect to the norm $\|f\| = \max_{x \in X} |f|$. Let $T(t)$ be the semigroup on $C(X)$ such that

$$T(t)f(g) = f(u(t, 0, g)), \eqno(4.2)$$

where $f \in C(X)$. The semigroups $T(t)$ and $T_N(t)$ are strongly continuous. Let A (respectively, $A_N$) be the generator of the semigroup $T(t)$ (respectively, $T_N(t)$). By $D(A)$ denote the domain of A. From (4.2) it follows that $f \in D(A)$ if $f \in C(X)$ and

$$\partial f/\partial u_1, \ldots, \partial f/\partial u_m,\; \partial f/\partial V,\; \partial^2 f/\partial u_1^2, \ldots, \partial^2 f/\partial u_m^2,\; \partial^2 f/\partial V^2 \in C(X).$$

By D denote the set of all functions f satisfying these conditions. It is easy to prove that the set D is a core of the operator A. It can be proved (see [2]) that for all $f \in D$

$$\lim_{N \to \infty} \sup_{x \in X_N} |A_N f(x) - Af(x)| = 0.$$

Using the convergence theorems for Markov processes (see, e.g., [4, Chapter 1, Theorem 6.1]), we get (2.2).
Proof of Theorem 2.3. The theorem follows from Theorem 2.2 and, e.g., [4, Chapter 4, Theorem 2.11].
Proof of Theorem 2.4. By assertion (d) of Theorem 2.1 there exists a unique distribution F over X that is invariant with respect to the dynamical system (1.3); therefore $F = \delta_{g^*}$. Theorem 2.2 implies that the invariant distributions $\mu_N$ converge to $\delta_{g^*}$. The same idea was used in [2, Theorem 5].
5 Remark on the model from [2]

The model considered in [2] is a queuing system $S_N$ with N identical infinite-buffer FCFS single servers, fed by a Poisson arrival flow of rate $N\lambda(t)$, with i.i.d. exponential service times of mean 1. Upon its arrival each task chooses two servers at random (each ordered pair with probability $1/N^2$) and then selects, among the chosen ones, the server with the lowest number of tasks in the buffer (including the task in service). If more than one chosen server has the lowest queue size, the task selects one of them at random. $S_N$ is a system of type $M|M|N|\infty$. We consider a system $S_N^m$ of type $M|M|N|m$: if the task selects a server with m tasks in the queue, then the task leaves the system.

The mean-field equations for $S_N^m$ have the following form:

$$\dot u_k = (u_{k+1} - u_k) + \lambda(u_{k-1}^2 - u_k^2), \quad k = 1, \ldots, m-1, \qquad \dot u_m = -u_m + \lambda(u_{m-1}^2 - u_m^2), \eqno(5.1)$$

where $1 = u_0 \ge u_1 \ge \ldots \ge u_m \ge 0$. By definition, put

$$\dot V = u_1 + \lambda u_m^2 - \lambda, \qquad V \ge 0, \qquad V(0) = m - \sum_{i=1}^m u_i(0). \eqno(5.1')$$
Let $x \in \mathbb{R}^{m+1}$ be the vector

$$(x_1, \ldots, x_{m+1}) = (u_1, \ldots, u_m, V).$$

The following system corresponds to the system (5.1), (5.1'):

$$\dot x = f(x, \lambda). \eqno(5.2)$$

Here, by definition, put

$$X = \Big\{x \in \mathbb{R}^{m+1} \;\Big|\; 1 \ge x_1 \ge \ldots \ge x_m \ge 0,\; x_{m+1} + \sum_{i=1}^m x_i = m,\; x_{m+1} \ge 0\Big\}.$$
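The contraction behaviour of system (5.2) is easy to observe numerically. The sketch below (ours, not part of the original paper; m = 3, λ = 0.8, and the time horizon are illustrative choices) integrates (5.1)-(5.1') from two initial states in X; both trajectories approach the same stationary point, as Theorem 5.1 below asserts, and $\sum_i u_i + V = m$ is conserved.

```python
import numpy as np

def vdk_rhs(x, lam):
    """Right-hand side of (5.1), (5.1'); x = (u_1, ..., u_m, V), u_0 = 1."""
    m = len(x) - 1
    u = x[:m]
    dx = np.empty_like(x)
    for k in range(m):
        u_prev = 1.0 if k == 0 else u[k - 1]
        service = (u[k + 1] - u[k]) if k < m - 1 else -u[k]
        dx[k] = service + lam * (u_prev ** 2 - u[k] ** 2)
    dx[m] = u[0] + lam * u[m - 1] ** 2 - lam     # the V-equation (5.1')
    return dx

def rk4(x0, lam, t_end, dt=0.01):
    """Classical 4th-order Runge-Kutta with a constant step."""
    x = np.array(x0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        k1 = vdk_rhs(x, lam)
        k2 = vdk_rhs(x + 0.5 * dt * k1, lam)
        k3 = vdk_rhs(x + 0.5 * dt * k2, lam)
        k4 = vdk_rhs(x + dt * k3, lam)
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x

if __name__ == "__main__":
    # two initial states in X with m = 3, so u_1 + u_2 + u_3 + V = 3
    x1 = rk4([0.9, 0.6, 0.3, 1.2], lam=0.8, t_end=30.0)
    x2 = rk4([0.7, 0.5, 0.2, 1.6], lam=0.8, t_end=30.0)
    print(np.abs(x1 - x2).sum())   # far smaller than the initial distance 0.8
```

Note that V does not enter the u-equations, so the contraction of the u-coordinates drives the convergence of V as well.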
From [2, Lemma 1] it follows that the set X is invariant with respect to system (5.2). By direct calculation we get

$$J(x, \lambda) = \begin{pmatrix}
-\beta_1 & 1 & 0 & \cdots & 0 & 0 & 0\\
2\lambda u_1 & -\beta_2 & 1 & \cdots & 0 & 0 & 0\\
0 & 2\lambda u_2 & -\beta_3 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & -\beta_{m-1} & 1 & 0\\
0 & 0 & 0 & \cdots & 2\lambda u_{m-1} & -\beta_m & 0\\
1 & 0 & 0 & \cdots & 0 & 2\lambda u_m & 0
\end{pmatrix},$$

where $\beta_i = 1 + 2\lambda u_i$. From (3.3) it follows that $\alpha(\lambda) = 1 + 2\lambda$. Clearly, $J(x, \lambda) + \alpha(\lambda)I \ge B$, where

$$B = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 & 0 & 0\\
0 & 0 & 1 & \cdots & 0 & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 0 & 1 & 0\\
0 & 0 & 0 & \cdots & 0 & 0 & 0\\
1 & 0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}.$$
The matrix $(I + B)^m$ has the following structure:

$$(I + B)^m = \begin{pmatrix}
* & * & * & \cdots & * & * & 0\\
0 & * & * & \cdots & * & * & 0\\
0 & 0 & * & \cdots & * & * & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & * & * & 0\\
0 & 0 & 0 & \cdots & 0 & * & 0\\
* & * & * & \cdots & * & * & *
\end{pmatrix},$$

where each $*$ denotes a positive number. It follows that $k((I + B)^m) > 0$.
Using Theorem 3.3, we obtain

Theorem 5.1. For any $\lambda > 0$ there exists a unique exponentially stable stationary solution of system (5.2).

Consider the system of differential equations

$$\dot x = f(x, \lambda(t)), \eqno(5.3)$$

where $f(x, \lambda)$ is defined as in (5.2) and the piecewise continuous function $\lambda(t)$ is T-periodic. Similarly, using Theorem 3.3, we get

Theorem 5.2. Suppose $\inf \lambda(t) > 0$ and $\sup \lambda(t) < \infty$; then there exists a unique exponentially stable T-periodic solution of system (5.3).

Let us remark that although the method of paper [2] yields a proof of the global stability of system (5.2), it cannot prove the convergence of the solutions of system (5.3) to a T-periodic orbit.
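Theorem 5.2 can also be checked numerically. In the sketch below (ours, not part of the original paper; the rate $\lambda(t) = 0.8 + 0.4\sin(2\pi t/T)$ with T = 5 and m = 3 are illustrative choices), the state sampled once per period stabilises, i.e. the trajectory settles onto a T-periodic orbit.

```python
import numpy as np

def vdk_rhs(x, lam):
    """Right-hand side of (5.1), (5.1'); x = (u_1, ..., u_m, V), u_0 = 1."""
    m = len(x) - 1
    u = x[:m]
    dx = np.empty_like(x)
    for k in range(m):
        u_prev = 1.0 if k == 0 else u[k - 1]
        service = (u[k + 1] - u[k]) if k < m - 1 else -u[k]
        dx[k] = service + lam * (u_prev ** 2 - u[k] ** 2)
    dx[m] = u[0] + lam * u[m - 1] ** 2 - lam
    return dx

def integrate(x0, lam_of_t, t_end, dt=0.01):
    """RK4 for the non-autonomous system (5.3); lam_of_t is a callable."""
    x, t = np.array(x0, dtype=float), 0.0
    for _ in range(int(round(t_end / dt))):
        k1 = vdk_rhs(x, lam_of_t(t))
        k2 = vdk_rhs(x + 0.5 * dt * k1, lam_of_t(t + 0.5 * dt))
        k3 = vdk_rhs(x + 0.5 * dt * k2, lam_of_t(t + 0.5 * dt))
        k4 = vdk_rhs(x + dt * k3, lam_of_t(t + dt))
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return x

if __name__ == "__main__":
    T = 5.0
    lam = lambda t: 0.8 + 0.4 * np.sin(2 * np.pi * t / T)   # T-periodic rate
    g = [0.9, 0.6, 0.3, 1.2]                                # m = 3, total mass 3
    xa = integrate(g, lam, 12 * T)    # state after 12 periods
    xb = integrate(g, lam, 13 * T)    # state after 13 periods
    print(np.abs(xa - xb).sum())      # nearly zero: the limit orbit is T-periodic
```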
Acknowledgment
The authors are grateful to Professor L.G. Afanassieva for support and stimulating discussions. The first author was partially supported by ISSEP grant s98-2042. The work of the second author was supported by the Russian RFFI grant N 960100377.
References
[1] L.G. Afanassieva, G. Fayolle, S.Yu. Popov. Models for transportation networks. Journal of Mathematical Sciences, 84:1092-1103, 1997.

[2] N.D. Vvedenskaya, R.L. Dobrushin, F.I. Karpelevich. A queuing system with selection of the shortest of two queues: an asymptotical approach. Problems of Information Transmission, 32:15-27, 1996.

[3] N.D. Vvedenskaya, Yu.M. Suhov. Dobrushin's mean-field approximation for a queue with dynamical routing. Markov Processes and Related Fields, 3:493-526, 1997.

[4] S.N. Ethier, T.G. Kurtz. Markov Processes: Characterization and Convergence. John Wiley and Sons, New York, 1986.