[Search-engine cache note] This document was taken from a search engine cache. Address of the original document: http://www.intsys.msu.ru/staff/mironov/theory_of_processes.pdf
Last modified: Thu Sep 23 17:22:40 2010
Indexed: Mon Oct 1 22:45:07 2012
Theory of Processes
A.M.Mironov


Contents

1 Introduction 7
  1.1 A subject of theory of processes 7
  1.2 Verification of processes 9
  1.3 Specification of processes 10

2 The concept of a process 12
  2.1 Representation of behavior of dynamic systems in the form of processes 12
  2.2 Informal concept of a process and examples of processes 13
    2.2.1 Informal concept of a process 13
    2.2.2 An example of a process 14
    2.2.3 Another example of a process 15
  2.3 Actions 16
  2.4 Definition of a process 19
  2.5 A concept of a trace 20
  2.6 Reachable and unreachable states 21
  2.7 Replacement of states 21

3 Operations on processes 23
  3.1 Prefix action 23
  3.2 Empty process 24
  3.3 Alternative composition 25
  3.4 Parallel composition 31
  3.5 Restriction 46
  3.6 Renaming 48
  3.7 Properties of operations on processes 49

4 Equivalences of processes 56
  4.1 A concept of an equivalence of processes 56
  4.2 Trace equivalence of processes 57
  4.3 Strong equivalence 60
  4.4 Criteria of strong equivalence 63
    4.4.1 A logical criterion of strong equivalence 63
    4.4.2 A criterion of strong equivalence, based on the notion of a bisimulation 65
  4.5 Algebraic properties of strong equivalence 66
  4.6 Recognition of strong equivalence 74
    4.6.1 Relation µ(P1, P2) 74
    4.6.2 A polynomial algorithm for recognizing of strong equivalence 76
  4.7 Minimization of processes 80
    4.7.1 Properties of relations of the form µ(P, P) 80
    4.7.2 Minimal processes with respect to ∼ 83
    4.7.3 An algorithm for minimizing of finite processes 85
  4.8 Observational equivalence 87
    4.8.1 Definition of observational equivalence 87
    4.8.2 Logical criterion of observational equivalence 91
    4.8.3 A criterion of observational equivalence based on the concept of an observational BS 93
    4.8.4 Algebraic properties of observational equivalence 95
    4.8.5 Recognition of observational equivalence and minimization of processes with respect to ≈ 96
    4.8.6 Other criteria of equivalence of processes 97
  4.9 Observational congruence 98
    4.9.1 A motivation of the concept of observational congruence 98
    4.9.2 Definition of a concept of observational congruence 101
    4.9.3 Logical criterion of observational congruence 102
    4.9.4 Criterion of observational congruence based on the concept of observational BS 103
    4.9.5 Algebraic properties of observational congruence 104
    4.9.6 Recognition of observational congruence 115
    4.9.7 Minimization of processes with respect to observational congruence 115

5 Recursive definitions of processes 116
  5.1 Process expressions 116
  5.2 A notion of a recursive definition of processes 117
  5.3 Embedding of processes 118
  5.4 A limit of a sequence of embedded processes 120
  5.5 Processes defined by process expressions 122
  5.6 Equivalence of RDs 124
  5.7 Transitions on PE 125
  5.8 A method of a proof of equivalence of processes with use of RDs 127
  5.9 Problems related to RDs 128

6 Examples of a proof of properties of processes 129
  6.1 Flow graphs 129
  6.2 Jobshop 130
  6.3 Dispatcher 134
  6.4 Scheduler 139
  6.5 Semaphore 149

7 Processes with a message passing 152
  7.1 Actions with a message passing 152
  7.2 Auxiliary concepts 154
    7.2.1 Types, variables, values and constants 154
    7.2.2 Functional symbols 154
    7.2.3 Expressions 156
  7.3 A concept of a process with a message passing 157
    7.3.1 A set of variables of a process 158
    7.3.2 An initial condition 158
    7.3.3 Operators 159
    7.3.4 Definition of a process 160
    7.3.5 An execution of a process 161
  7.4 Representation of processes by flowcharts 163
    7.4.1 The notion of a flowchart 163
    7.4.2 An execution of a flowchart 167
    7.4.3 Construction of a process defined by a flowchart 169
  7.5 An example of a process with a message passing 171
    7.5.1 The concept of a buffer 172
    7.5.2 Representation of a buffer by a flowchart 173
    7.5.3 Representation of a buffer as a process 174
  7.6 Operations on processes with a message passing 176
    7.6.1 Prefix action 176
    7.6.2 Alternative composition 177
    7.6.3 Parallel composition 177
    7.6.4 Restriction and renaming 178
  7.7 Equivalence of processes 178
    7.7.1 The concept of a concretization of a process 178
    7.7.2 Definition of equivalences of processes 180
  7.8 Processes with composite operators 181
    7.8.1 A motivation of the concept of a process with composite operators 181
    7.8.2 A concept of a composite operator 182
    7.8.3 A concept of a process with COs 182
    7.8.4 An execution of a process with COs 183
    7.8.5 Operations on processes with COs 184
    7.8.6 Transformation of processes with a message passing to processes with COs 185
    7.8.7 Sequential composition of COs 185
    7.8.8 Reduction of processes with COs 187
    7.8.9 An example of a reduction 189
    7.8.10 A concretization of processes with COs 194
    7.8.11 Equivalences on processes with COs 195
    7.8.12 A method of a proof of observational equivalence of processes with COs 196
    7.8.13 An example of a proof of observational equivalence of processes with COs 201
    7.8.14 Additional remarks 203
    7.8.15 Another example of a proof of observational equivalence of processes with COs 208
  7.9 Recursive definition of processes 213

8 Examples of processes with a message passing 216
  8.1 Separation of sets 216
    8.1.1 The problem of separation of sets 216
    8.1.2 Distributed algorithm of separation of sets 216
    8.1.3 The processes Small and Large 218
    8.1.4 An analysis of the algorithm of separation of sets 219
  8.2 Calculation of a square 224
  8.3 Petri nets 228

9 Communication protocols 230
  9.1 The concept of a protocol 230
  9.2 Frames 230
    9.2.1 The concept of a frame 230
    9.2.2 Methods for correcting of distortions in frames 231
    9.2.3 Methods for detection of distortions in frames 234
  9.3 Protocols of one-way transmission 239
    9.3.1 A simplest protocol of one-way transmission 239
    9.3.2 One-way alternating bit protocol 247
  9.4 Two-way alternating bit protocol 252
  9.5 Two-way sliding window protocols 255
    9.5.1 The sliding window protocol using go back n 255
    9.5.2 The sliding window protocol using selective repeat 260

10 History and overview of the current state of the art 266
  10.1 Robin Milner 266
  10.2 A Calculus of Communicating Systems (CCS) 267
  10.3 Theory of communicating sequential processes (CSP) 268
  10.4 Algebra of communicating processes (ACP) 268
  10.5 Process Algebras 269
  10.6 Mobile Processes 271
  10.7 Hybrid Systems 272
  10.8 Other mathematical theories and software tools, associated with a modeling and an analysis of processes 272
  10.9 Business Processes 273


Foreword

This book is based on the author's lectures on the theory of processes for students of the Faculty of Mathematics and Mechanics and the Faculty of Computational Mathematics and Cybernetics of Moscow State University.
The book gives a detailed exposition of basic concepts and results of the theory of processes. The presentation of theoretical concepts and results is accompanied by illustrations of their application to solving various problems of verification of processes. Some of these examples are taken from the books [89] and [92].
Along with well-known results, the book presents the author's results related to verification of processes with message passing, together with examples of application of these results.



Chapter 1

Introduction
1.1  A subject of theory of processes

Theory of processes is a branch of the mathematical theory of systems which studies mathematical models of behavior of dynamic systems, called processes. Informally, a process is a model of a behavior which consists of performing actions. Such actions may be, for example,
· reception or transmission of some objects, or
· transformation of these objects.
The main advantages of theory of processes as a mathematical apparatus designed for modeling and analysis of dynamic systems are as follows.
1. The apparatus of theory of processes is well suited for formal description and analysis of behavior of distributed dynamic systems, i.e. systems which consist of several interacting components with the following properties:
· all these components work in parallel, and
· interaction of these components occurs by sending signals or messages from one component to another.
The most important example of a distributed dynamic system is a computer system. In this system


(a) one class of components is determined by the set of computer programs that are executed in this system,
(b) another class of components is associated with the hardware platform on the base of which the computer programs are executed,
(c) the third class of components represents a set of information resources (databases, knowledge bases, electronic libraries, etc.) which are used for the operation of this system,
(d) also one can take into account a class of components associated with the human factor.
2. Methods of theory of processes allow to analyse with acceptable complexity models with very large and even infinite sets of states. This is possible due to the methodology of symbolic transformation of expressions which are symbolic representations of processes.

The most important examples of models with an infinite set of states are models of computer programs with variables whose domains have very large size. In many cases, models of such programs can be analyzed more easily if domains of some variables in these models are represented as infinite sets. For example, the domain of variables of the type double is a finite set of real numbers, but this set is very large, and in many cases it is purposeful to replace this finite domain by the infinite domain of all real numbers.
In some cases a representation of an analyzed program as a model with an infinite set of states greatly simplifies reasoning about this program. An analysis of a model of this program with a finite (but very large) set of states, with use of methods based on explicit or symbolic representation of a set of states, can have very high computational complexity, and in some cases a replacement of
· the problem of analysing the original finite model
by
· the problem of analysing the corresponding infinite model by methods which are based on symbolic transformations of expressions describing this model
can provide a substantial gain in computational complexity.
3. Methods of theory of processes are well suited for investigation of hierarchical systems, i.e. systems that have a multilevel structure.


Each component of such systems is considered as a subsystem, which, in turn, may consist of several subcomponents. Each of these subcomponents can interact
· with other subcomponents, and
· with higher-level systems.
The main sources of problems and of applications of results of the theory of processes are distributed computer systems. The theory of processes can also be used for modeling and analysis of behavior of systems of a different nature, the most important examples of which are organizational systems. These systems include
· enterprise performance management systems,
· state organizations,
· systems of organization of commercial processes (for example, management systems of commercial transactions, auctions, etc.)
The processes relating to the behavior of such systems are called business processes.

1.2  Verification of processes

The most important class of problems which the theory of processes is intended to solve is related to the problem of verification of processes. The problem of verification of a process consists in constructing a formal proof that an analyzed process has the required properties.
For many processes this problem is extremely important. For instance, the safe operation of such systems as
· control systems of nuclear power stations,
· medical devices with computer control,
· board control systems of aircrafts and spacecrafts,
· control systems of secret databases,
· systems of e-business


is impossible without a satisfactory solution of the problem of verification of correctness and security properties of such systems. A violation of these properties in such systems may lead to significant damage to the economy and to human security.
The exact formulation of the problem of verification consists of the following parts.
1. Construction of a process P, which is a mathematical model of behavior of the analyzed system.
2. Representation of the inspected properties in the form of a mathematical object S (called a specification).
3. Construction of a mathematical proof of the statement that the process P satisfies the specification S.

1.3  Specification of processes

A specification of a process is a description of properties of this process in the form of some mathematical object. An example of a specification is the requirement of reliability of data transmission through an unreliable medium; such a specification does not prescribe how exactly this reliability should be ensured.
For example, the following objects can be used as a specification.
1. A logical formula which expresses a requirement for an analysed process. For example, such a requirement may be a condition that if the process has received some request, then the process will give a response to this request after a specified time.
2. A representation of an analyzed process on a higher level of abstraction. This type of specifications can be used in multi-level designing of processes: for every level of designing of a process, an implementation of the process at this level can be considered as a specification for an implementation of this process at the next level of designing.



3. A reference process, about which it is assumed that this process has a given property. In this case, the problem of verification consists in a construction of a proof of equivalence of a reference process and an analysed process.
In a construction of specifications one should be guided by the following principles.
1. A property of a process can be expressed in different specification languages (SLs), and
· in one SL it can be expressed in a simple form, while
· in another SL it can be expressed only in a complex form.
For example, a specification that describes a relationship between input and output values for a program that computes the decomposition of an integer into prime factors has
· a complex form in the language of predicate logic, but
· a simple form, if this specification is expressed in the form of a standard program.
Therefore, for representation of properties of processes in the form of specifications, it is important to choose the most appropriate SL, which allows one to write the specification in the most clear and simple form.
2. If a property of a process was initially expressed in a natural language, then in the translation of this property to a corresponding formal specification it is important to ensure consistency between
· the natural-language description of this property, and
· its formal specification,
because in case of non-compliance with this condition the results of verification will not make sense.



Chapter 2

The concept of a process
2.1  Representation of behavior of dynamic systems in the form of processes

One of the possible methods of mathematical modeling of a behavior of dynamic systems is to present a behavior of these systems in the form of processes. A process usually does not take into account all details of a behavior of an analyzed system. A behavior can be represented by different processes, reflecting
· different degrees of abstraction in the model of this behavior, and
· different levels of detailization of actions executable by a system.
If a purpose of constructing a process for representation of a behavior of a system is to check properties of this behavior, then a choice of the level of detailization of the system's actions must be dependent on the analyzed properties. The construction of a process for representation of a behavior of an analyzed system should take into account the following principles.

1. A description of the process should not be excessively detailed, because excessive complexity of this description can cause significant computational problems in a formal analysis of this process.
2. A description of the process should not be overly simplistic; it should
· reflect those aspects of a behavior of the simulated system that are relevant to the analyzed properties, and


· preserve all those properties of the behavior of this system that are interesting for the analysis,
because if this condition does not hold, then an analysis of such a process will not make sense.

2.2  Informal concept of a process and examples of processes

Before formulating a precise definition of a process, we give an informal explanation of the concept of a process and consider the simplest examples of processes.

2.2.1  Informal concept of a process

As stated above, we understand a process as a model of a behavior of a dynamic system on some level of abstraction. A process can be thought of as a graph P whose components have the following sense.
· Nodes of the graph P are called states and represent situations (or classes of situations) in which a simulated system can be at different times of its functioning. One of the states is selected; it is called an initial state of the process P.
· Edges of the graph P have labels. These labels represent actions which may be executed by the simulated system.
· An execution of the process P is described by a walk along the edges of the graph P from one state to another. The execution starts from the initial state. A label of each edge represents an action of the process, executed during the transition from the state at the beginning of the edge to the state at its end.



2.2.2  An example of a process

As a first example of a process, consider a process representing the simplest model of behavior of a vending machine. We shall assume that this machine has
· a coin acceptor,
· a button, and
· a tray for output of goods.
When a customer wants to buy a good, he
· drops a coin into the coin acceptor, and
· presses the button,
and then the good appears in the tray.
Assume that our machine sells chocolates for 1 coin each. We describe the actions of this machine.
· On the initiative of the customer, the following actions may occur in the machine:
  - an input of a coin into the coin acceptor, and
  - a pressing of the button.
· In response, the machine can perform a reaction: an output of a chocolate onto the tray.
Let us denote the actions by short names:
· an input of a coin we denote by in coin,
· a pressing of the button by pr but, and
· an output of a chocolate by out choc.



Then the process of our vending machine has the following form:
[Diagram: the process has states s0 (initial, drawn with a double circle), s1 and s2, and the transitions

    s0 --in coin--> s1,   s1 --pr but--> s2,   s2 --out choc--> s0.]

This diagram explains how the vending machine works:
· at first, the machine is in the state s0; in this state the machine expects an input of a coin into the coin acceptor (the fact that the state s0 is initial is shown in the diagram by a double circle around the identifier of this state),
· when a coin appears, the machine goes to the state s1 and waits for a pressing of the button,
· after a pressing of the button the machine
  - goes to the state s2,
  - outputs a chocolate, and
  - returns to the state s0.

2.2.3  Another example of a process

Consider a more complex example of a vending machine, which differs from the previous one in that it sells two types of goods: tea and coffee; the cost of tea is 1 ruble, and the cost of coffee is 2 rubles. The machine has two buttons: one for tea, and another for coffee.
Buyers can pay with coins in denominations of 1 ruble and 2 rubles. These coins will be denoted by the symbols coin 1 and coin 2, respectively. If a customer dropped into the coin acceptor a coin coin 1, then he can only buy a tea. If he dropped a coin coin 2, then he can buy a coffee or two teas. Also it is possible to buy a coffee by dropping into the coin acceptor a couple of coins coin 1.
A process of such a vending machine has the following form:
[Diagram: the process of this vending machine; its states are s0 (initial, drawn with a double circle) and s1, ..., s5, and its transitions are labeled by the actions in coin 1, in coin 2, pr but tea, pr but cof, out tea and out cof.]

For a formal definition of a process we must clarify a concept of an action. This clarification is presented in section 2.3.

2.3  Actions

To define a process P which is a behavior model of a dynamic system, one must specify a set Act(P) of actions which can be performed by the process P. We shall assume that the actions of all processes are elements of a certain universal set Act of all possible actions that can be performed by any process, i.e. for every process P

    Act(P) ⊆ Act

A choice of the set Act(P) of actions of the process P depends on a purpose of the modeling. In different situations, for a representation of a model of an analyzed system in the form of a process, different sets of actions may be chosen.
We shall assume that the set Act of actions is subdivided into the following 3 classes.
1. Input actions, which are denoted by symbols of the form ?α. The action ?α is interpreted as an input of an object with the name α.
2. Output actions, which are denoted by symbols of the form !α. The action !α is interpreted as an output of an object with the name α.
3. Internal (or invisible) actions, which are denoted by the symbol τ.
An action of the process P is said to be internal if this action is not related to an interaction of this process with its environment, i.e. with processes which are external with respect to the process P and with which it can interact.
For example, an internal action can be due to an interaction of components of P. In fact, internal actions may be different, but we denote all of them by the same symbol τ. This reflects our desire not to distinguish between internal actions, because they are not observable outside the process P.
Let Names be a set of all names of all objects which can be used in input or output actions. The set Names is assumed to be infinite. The set Act of all actions which can be executed by processes is a disjoint union of the form

    Act = {?α | α ∈ Names} ∪ {!α | α ∈ Names} ∪ {τ}     (2.1)

Objects which can be used in input or output actions may have a different nature (both material and non-material). For example, they may be
· material resources,
· people,
· money,
· information,
· energy,
· etc.
In addition, the concepts of an input and an output can have a virtual character, i.e. the words input and output may be used only as a metaphor, while in reality no input or output of any real object occurs.
Informally, we consider a non-internal action of a process P as
· an input action, if this action was caused by a process from an environment of P, and
· an output action, if it was caused by P.
For each name α ∈ Names the actions ?α and !α are said to be complementary.
We shall use the following notation.
1. For every action a ∈ Act \ {τ} the symbol ā denotes the action which is complementary to a, i.e., by definition, the action complementary to ?α is !α, and the action complementary to !α is ?α.
2. For every action a ∈ Act \ {τ} the string name(a) denotes the name specified in the action a, i.e., by definition,

    name(?α) = name(!α) = α

3. For each subset L ⊆ Act \ {τ}, by definition,
· L̄ = {ā | a ∈ L}
· names(L) = {name(a) | a ∈ L}
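These conventions can be illustrated by a small code sketch. The following Python fragment is our illustration, not part of the theory: the encoding of an input action ?α as the string "?α", of an output action !α as "!α", and of the internal action τ as "tau" is our assumption, as are all identifiers.

```python
# Actions are modeled as strings: "?x" is an input of an object named x,
# "!x" is an output of an object named x, and "tau" is the internal action.
TAU = "tau"

def complement(a):
    """Return the action complementary to a (defined for a != tau)."""
    assert a != TAU, "the internal action has no complement"
    return ("!" + a[1:]) if a.startswith("?") else ("?" + a[1:])

def name(a):
    """Return the object name used in an input or output action."""
    assert a != TAU
    return a[1:]

def complement_set(L):
    # The set {complement(a) | a in L}, for L a subset of Act \ {tau}
    return {complement(a) for a in L}

def names(L):
    # The set {name(a) | a in L}
    return {name(a) for a in L}
```

For instance, complement("?coin") yields "!coin", and names({"?coin", "!choc"}) yields {"coin", "choc"}.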


2.4  Definition of a process

A process is a triple P of the form

    P = (S, s0, R)     (2.2)

whose components have the following meanings.
· S is a set whose elements are called states of the process P.
· s0 ∈ S is a selected state, called an initial state of the process P.
· R is a subset of the form

    R ⊆ S × Act × S

Elements of R are called transitions. If a transition from R has the form (s1, a, s2), then
  - we say that this transition is a transition from the state s1 to the state s2 with an execution of the action a,
  - the states s1 and s2 are called a start and an end of this transition, respectively, and the action a is called a label of this transition, and
  - sometimes, in order to improve visibility, we will denote this transition by the diagram

    s1 --a--> s2     (2.3)

An execution of a process P = (S, s0, R) is a generation of a sequence of transitions of the form

    s0 --a0--> s1 --a1--> s2 --a2--> ...

with an execution of the actions a0, a1, a2, ..., which are labels of these transitions. At every step i ≥ 0 of this execution
· the process P is in some state si (the state s0 at the start of the execution is the initial state of P),
· if there is at least one transition in R starting at si, then the process P
  - non-deterministically chooses a transition from R starting at si which is labeled by an action ai that can be executed at the current time (if there are no such transitions, then the process suspends until at least one such transition becomes possible),
  - performs the action ai, and then
  - goes to the state si+1, which is the end of the selected transition,
· if R does not contain transitions starting at si, then the process completes its work.
The symbol Act(P) denotes the set of all actions in Act \ {τ} which can be executed by the process P, i.e., by definition,

    Act(P) = {a ∈ Act \ {τ} | (s1 --a--> s2) ∈ R for some s1, s2 ∈ S}

The process (2.2) is said to be finite if its components S and R are finite sets. A finite process can be represented graphically as a diagram, in which
· each state is represented by a circle, and an identifier of this state can be written in this circle,
· each transition is represented by an arrow connecting the start of this transition with its end, and a label of this transition is written on this arrow,
· an initial state is indicated in some way (for example, instead of the usual circle, a double circle is drawn).
Examples of such diagrams are contained in sections 2.2.2 and 2.2.3.
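The definition of a process and its execution can be sketched in Python (our illustration; the set-of-triples representation of R and all identifiers are ours). The example process below is the vending machine of section 2.2.2.

```python
import random

# A process is a triple (S, s0, R): states, initial state, transitions.
# A transition is a triple (start, action, end).
S = {"s0", "s1", "s2"}
s0 = "s0"
R = {("s0", "in_coin", "s1"),
     ("s1", "pr_but", "s2"),
     ("s2", "out_choc", "s0")}

def execute(S, s0, R, steps):
    """Run the process for at most `steps` transitions, choosing
    nondeterministically among the transitions that start in the current
    state; stop early if no transition starts there (work completes)."""
    state, performed = s0, []
    for _ in range(steps):
        enabled = [(a, t) for (s, a, t) in R if s == state]
        if not enabled:
            break
        a, state = random.choice(enabled)   # nondeterministic choice
        performed.append(a)
    return performed

def act(R, tau="tau"):
    # Act(P): all non-internal actions labelling some transition of P
    return {a for (_, a, _) in R if a != tau}
```

Since each state of this example has exactly one outgoing transition, every execution cycles through in_coin, pr_but, out_choc.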

2.5  A concept of a trace

Let P = (S, s0, R) be a process.
A trace of the process P is a finite or infinite sequence a1, a2, ... of elements of Act such that there is a sequence s0, s1, s2, ... of states of the process P with the following properties:
· s0 coincides with the initial state of P, and
· for each i ≥ 1 the set R contains the transition

    s_{i-1} --a_i--> s_i

The set of all traces of the process P will be denoted by Tr(P).
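For a finite process, all traces up to a given length can be enumerated mechanically. The following Python sketch is our illustration, assuming a process is given by its initial state and a set of transition triples (start, action, end).

```python
def traces_up_to(s0, R, n):
    """Return the set of all traces of length at most n of the process
    with initial state s0 and transition set R. A trace is represented
    as the tuple of labels along some sequence of transitions starting
    in the initial state."""
    result = {()}                      # the empty trace is always a trace
    frontier = {((), s0)}              # pairs (trace so far, current state)
    for _ in range(n):
        new_frontier = set()
        for trace, state in frontier:
            for (s, a, t) in R:
                if s == state:
                    new_frontier.add((trace + (a,), t))
        result |= {trace for trace, _ in new_frontier}
        frontier = new_frontier
    return result
```

For the vending machine of section 2.2.2, the traces of length at most 3 are the prefixes of (in coin, pr but, out choc).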

2.6  Reachable and unreachable states

Let P be a process of the form (2.2).
A state s of the process P is said to be reachable if s = s0 or there is a sequence of transitions of P having the form

    s0 --a1--> s1,   s1 --a2--> s2,   ...,   s_{n-1} --a_n--> s_n

in which n ≥ 1 and s_n = s. A state is said to be unreachable if it is not reachable.
It is easy to see that after removing
· all unreachable states from S, and
· all transitions from R which contain these unreachable states,
the resulting process P′ (which is referred to as a reachable part of the process P) will represent exactly the same behavior which is represented by the process P. For this reason, we consider such processes P and P′ as equal.
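The reachable part of a finite process can be computed by a breadth-first search over transitions. This Python sketch is our illustration, assuming R is represented as a set of triples (start, action, end).

```python
from collections import deque

def reachable_part(S, s0, R):
    """Compute the reachable part of the process (S, s0, R): the set of
    reachable states and the set of transitions both of whose states
    are reachable. The result represents exactly the same behavior."""
    reachable = {s0}
    queue = deque([s0])
    while queue:
        s = queue.popleft()
        for (u, a, v) in R:
            if u == s and v not in reachable:
                reachable.add(v)
                queue.append(v)
    R2 = {(u, a, v) for (u, a, v) in R
          if u in reachable and v in reachable}
    return reachable, R2
```

A state such as "dead" below, which has an outgoing transition but no incoming path from the initial state, is discarded together with its transitions.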

2.7
Let

Replacement of states

· P be a process of the form (2.2), · s be a state from S · s be an arbitrary element, which does not belong to the set S .

21


Denote by P' the process obtained from P by replacing s with s' in the sets S and R, i.e. every transition in R of the form

s --a--> s1    or    s1 --a--> s

is replaced by a transition

s' --a--> s1    or    s1 --a--> s'

respectively. As in the previous section, it is easy to see that P and P' represent the same behavior, and for this reason we can consider such processes P and P' as equal.

It is possible to replace not only one state, but an arbitrary subset of states of the process P. Such a replacement can be represented as an assignment of a bijection of the form

f : S → S'    (2.4)

and the result of such a replacement is by definition a process P' of the form

P' = (S', (s')^0, R')    (2.5)

where

· (s')^0 = f(s^0), and

· for each pair s1, s2 ∈ S and each a ∈ Act

(s1 --a--> s2) ∈ R  ⟺  (f(s1) --a--> f(s2)) ∈ R'.

Since the processes P and P' represent the same behavior, we can treat them as equal. In the literature on the theory of processes such processes P and P' are sometimes said to be isomorphic. A bijection (2.4) with the above properties is called an isomorphism between P and P'. The process P' is said to be an isomorphic copy of the process P.
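An isomorphic copy in the sense of (2.4)-(2.5) can be produced by applying a bijection to every state. A hedged Python sketch; the encoding of a process as a triple (S, s0, R) and the dict-based bijection are our assumptions, not the text's.

```python
def rename_states(process, f):
    """Apply a bijection f (a dict on states) to a process, producing an
    isomorphic copy as in definition (2.5)."""
    S, s0, R = process
    assert len({f[s] for s in S}) == len(S), "f must be injective on S"
    S2 = {f[s] for s in S}
    R2 = {(f[u], a, f[v]) for (u, a, v) in R}
    return (S2, f[s0], R2)

P = ({0, 1}, 0, {(0, "coin?", 1)})
Q = rename_states(P, {0: "idle", 1: "paid"})
print(Q[1])   # idle
```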



Chapter 3. Operations on processes
In this chapter we define several algebraic operations on the set of processes.

3.1 Prefix action

The first such operation is called a prefix action; it is a unary operation denoted by "a.", where a is an arbitrary element of Act. Let P = (S, s^0, R) be a process and a ∈ Act. The operation a. applied to the process P results in the process a.P, which has the following components:

· the set of states of a.P is obtained from S by adding a new state s' ∉ S,

· the initial state of a.P is the added state s',

· the set of transitions of a.P is obtained from R by adding a new transition of the form s' --a--> s^0.

We illustrate the effect of this operation on the example of the vending machine presented in section 2.2.2. Denote by Pvm the process which represents the behavior of this automaton.



Extend the set of actions of the vending machine by a new input action enable?, which means enabling of this machine. The process enable?.Pvm represents the behavior of a new vending machine which in its initial state cannot

· accept coins,

· perceive pressing of the button, or

· output chocolates.

The only thing it can do is to be enabled. After that, its behavior is no different from that of the original machine. A graph representation of enable?.Pvm consists of the transitions

s' --enable?--> s0,  s0 --coin?--> s1,  s1 --button?--> s2,  s2 --chocolate!--> s0

where s' is the new initial state.
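The prefix operation can be sketched directly in the triple encoding used above. The sketch below is ours, not the text's; it assumes a fresh-state supply that never collides with user-chosen states.

```python
import itertools

_fresh = itertools.count()   # supply of states assumed disjoint from user states

def prefix(a, process):
    """The prefix operation a.P: add a fresh initial state s_new and the
    transition s_new --a--> s0."""
    S, s0, R = process
    s_new = ("fresh", next(_fresh))
    return (S | {s_new}, s_new, R | {(s_new, a, s0)})

# The vending machine of section 2.2.2:
P_vm = ({0, 1, 2}, 0,
        {(0, "coin?", 1), (1, "button?", 2), (2, "chocolate!", 0)})

S, s0, R = prefix("enable?", P_vm)
print(len(S), len(R))            # 4 4
print((s0, "enable?", 0) in R)   # True
```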

3.2 Empty process

Among all processes there is a simplest one. This process has only one state and no transitions. To denote such a process we use a constant (i.e. a 0-ary operation) 0. Returning to the examples with vending machines, it can be said that the process 0 represents the behavior of a broken vending machine, that is, a machine which cannot execute any action.



By applying the operation of prefix action to the process 0 it is possible to define the behavior of more complex machines. Consider, for example, the following process:

P = coin?.button?.chocolate!.0

A graph representation of this process is the chain

s0 --coin?--> s1 --button?--> s2 --chocolate!--> s3

This process defines the behavior of a vending machine which serves exactly one customer, and after this breaks.

3.3 Alternative composition

The next operation on processes is a binary operation called an alternative composition. This operation is used when, having a pair of processes P1 and P2, we must construct a process P which will operate

· either as the process P1,

· or as the process P2,

and the choice of the process according to which P will operate can be determined

· either by P itself,

· or by the environment in which P operates.

For example, if P1 and P2 have the form

P1 = α?.P1'    P2 = β?.P2'    (3.1)

and at the initial time the environment of P

· can give P the object α, but

· cannot give P the object β,


then P will choose the behavior which is the only one possible in this situation, i.e. it will operate according to the process P1. Note that in this case P chooses a process whose first action can be executed at the current time. After choosing P1 and executing the action α?, the process P is obliged to continue its work according to this choice, i.e. it will operate like P1'. It is possible that after execution of the action α?

· P will not be able to execute any action, working in accordance with P1',

· though at this time P would be able to execute an action, working in accordance with P2.

But at this time P cannot change its choice (i.e. cannot choose P2 instead of P1). P can only wait until it becomes possible to work in accordance with P1'.

If at the initial time the environment can give P both α and β, then P chooses the process whereby it will work

· non-deterministically (i.e. arbitrarily), or

· subject to some additional factors.

The exact definition of the operation of alternative composition is as follows. Let P1 and P2 be processes of the form

Pi = (Si, s_i^0, Ri)  (i = 1, 2)

and let the sets of states S1 and S2 have no common elements. An alternative composition of the processes P1 and P2 is a process

P1 + P2 = (S, s^0, R)

whose components are defined as follows:

· S is obtained by adding to the union S1 ∪ S2 a new state s^0, which is the initial state of P1 + P2,

· R contains all transitions from R1 and R2, and

· for each transition s_i^0 --a--> s in Ri (i = 1, 2), R contains the transition s^0 --a--> s.

If S1 and S2 have common elements, then to define the process P1 + P2 one first needs to replace those states in S2 that also occur in S1 by new states, and to modify R2 and s_2^0 accordingly.

Consider, for example, a vending machine which sells two types of drinks: cola and fanta, where

· if a customer puts in a coin coin1, then the machine issues a glass of cola, and

· if a customer puts in a coin coin2, then the machine issues a glass of fanta,

and the machine breaks immediately after the sale of one glass of a drink. The behavior of this automaton is described by the following process:

P_drink = coin1?.cola!.0 + coin2?.fanta!.0    (3.2)

Consider a graph representation of process (3.2).



Graph representations of the terms of the sum (3.2) are the chains

s10 --coin1?--> s11 --cola!--> s12
s20 --coin2?--> s21 --fanta!--> s22

According to the definition of an alternative composition, a graph representation of process (3.2) is obtained by adding to the previous diagram a new initial state s^0 and the transitions

s^0 --coin1?--> s11    s^0 --coin2?--> s21

which copy the initial transitions of the two terms. The transitions from the old initial states s10 and s20 remain in the diagram, so these two states become unreachable.


Since the states s10 and s20 are unreachable, it is possible to delete them together with the transitions associated with them, resulting in the diagram

s^0 --coin1?--> s11 --cola!--> s12
s^0 --coin2?--> s21 --fanta!--> s22
which is the desired graph representation of process (3.2).

Consider another example. We describe an exchange machine which accepts banknotes of denomination 100 dollars. The machine issues

· either 2 banknotes of 50 dollars,

· or 10 banknotes of 10 dollars,

and the choice of the method of exchange is carried out regardless of the wishes of the customer. After one session of exchange the machine breaks.

P_exchange = 1_on_100?.(2_on_50!.0 + 10_on_10!.0)

These two examples show that the operation of alternative composition can be used to describe at least two fundamentally different situations.

1. First, it can express a dependence of the behavior of a system on the behavior of its environment.


For instance, in the case of the vending machine P_drink the behavior of the machine is determined by an action of the purchaser, namely by the denomination of the coin which the purchaser has put into the machine. In this case the process representing the behavior of the simulated vending machine is deterministic, i.e. its behavior is uniquely determined by its input actions.

2. Second, the example of the machine P_exchange shows that the machine may respond differently to the same input action. This is an example of nondeterminism, i.e. of uncertainty in the behavior of a system.

Uncertainty in the behavior of systems can occur for at least two reasons.

(a) First, the behavior of a system may depend on random factors. Such factors can be, for example,

· failures in hardware,
· conflicts in a computer network,
· absence of banknotes of the required value in an ATM,
· or anything else.

(b) Second, a model is always some abstraction or simplification of a real system, and some of the factors influencing the behavior of this system may be eliminated from consideration. In particular, the example of P_exchange shows that the real reason for choosing a variant of behavior of the machine may not be taken into account in the process which models the behavior of this machine.

One can schematically summarize the above variants of using alternative composition as follows: an alternative composition expresses

· either a dependence on the input data,

· or nondeterminism, caused by random factors or by unknown factors.
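The definition of alternative composition given above can be sketched in the triple encoding as follows. This Python sketch is ours; operand states are tagged with 1 and 2, which also implements the remark about replacing common states by new ones.

```python
def alt(p1, p2):
    """Alternative composition P1 + P2.  Operand states are tagged with
    1 and 2 so that the state sets are disjoint; a new initial state
    copies the initial transitions of both operands."""
    (S1, s01, R1), (S2, s02, R2) = p1, p2
    s0 = "+"                                     # the new initial state
    S = {(1, s) for s in S1} | {(2, s) for s in S2} | {s0}
    R = ({((1, u), a, (1, v)) for (u, a, v) in R1}
         | {((2, u), a, (2, v)) for (u, a, v) in R2})
    init = {(1, s01), (2, s02)}
    extra = {(s0, a, t) for (u, a, t) in R if u in init}
    return (S, s0, R | extra)

# P_drink = coin1?.cola!.0 + coin2?.fanta!.0 as in (3.2):
p1 = ({0, 1, 2}, 0, {(0, "coin1?", 1), (1, "cola!", 2)})
p2 = ({0, 1, 2}, 0, {(0, "coin2?", 1), (1, "fanta!", 2)})
S, s0, R = alt(p1, p2)
print(len(S), len(R))                           # 7 6
print(sorted(a for (u, a, t) in R if u == s0))  # ['coin1?', 'coin2?']
```

The two copied transitions from s0 are exactly the two unreachable-state deletions discussed for the drink machine: after applying reachable_part-style cleanup, the tagged initial states (1, 0) and (2, 0) would disappear.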

3.4 Parallel composition

The operation of parallel composition is used for building models of the behavior of dynamic systems composed of several communicating components. Before giving a formal definition of this operation, we discuss the concept of parallel working of a pair of systems Sys1 and Sys2, which we consider as components of a system Sys, i.e.

Sys = {Sys1, Sys2}    (3.3)

Let the processes P1 and P2 represent the behaviors of the systems Sys1 and Sys2 respectively. When the system Sys_i (i = 1, 2) works as a part of the system Sys, its behavior is described by the same process Pi. Denote by {P1, P2} a process describing the behavior of (3.3). The purpose of this section is to find an explicit description of {P1, P2} (i.e. to define the sets of its states and transitions). Here, to simplify the exposition, we identify the concepts "a process P" and "a system whose behavior is described by a process P".

As noted above, an execution of an arbitrary process can be interpreted as a traversal of the graph corresponding to this process, with an execution of the actions that are the labels of the traversed edges. We shall assume that in the passage of each edge s --a--> s'


· the transition from s to s' occurs instantaneously, and

· the execution of the action a occurs precisely at the time of this transition.

In fact, an execution of each action occurs within a certain period of time, but we shall assume that for each traversed edge s --a--> s'

· before the completion of the execution of the action a the process P is in the state s, and

· after the completion of the execution of a the process P instantly passes into the state s'.

Since executions of various actions have different durations, we shall assume that the process P stays in each state for an indefinite period of time during its execution. Thus, an execution of the process P consists of an alternation of the following two activities:

· waiting for an indefinite period of time in one of the states, and

· an instantaneous transition from one state to another.

Waiting in one of the states can occur

· not only because some action is being executed at this time,

· but also because the process P cannot perform any action at this time.

For example, if

· P = α?.P', and

· at the initial time there is no process which can give P an object with the name α,

then P waits until some process gives it an object with the name α.

As we know, for each process

· its actions are either input, or output, or internal, and


· each input or output action is a result of a communication of this process with another process.

Each input or output action of the process Pi (i = 1, 2)

· either is a result of a communication of Pi with a process outside of the set {P1, P2},

· or is a result of a communication of Pi with the process Pj, where j ∈ {1, 2} \ {i}.

From the point of view of the process {P1, P2}, actions of the second type are internal actions of this process, because they

· are not a result of a communication of the process {P1, P2} with its environment, and

· are the result of a communication between the components of this process.

Thus, each step of the process {P1, P2}

(a) either is a result of a communication of one of the processes Pi (i = 1, 2) with a process outside of {P1, P2},

(b) or is an internal action of P1 or P2,

(c) or is an internal action which is a result of a communication of P1 and P2, and this communication has the following form:

  - one of these processes, say Pi, passes to the other process Pj (j ∈ {1, 2} \ {i}) some object, and
  - the process Pj at the same time takes this object from the process Pi

(this kind of communication is called a synchronous communication, or handshaking).

Each possible variant of a behavior of the process Pi (i = 1, 2) can be associated with a thread, denoted by the symbol θi. A thread is a vertical line on which there are drawn points with labels, where


· the labels of points represent actions executed by the process Pi, and

· the labelled points are arranged in chronological order, i.e.

  - the first point is labelled by the first action of the process Pi,
  - the second point (which is located under the first point) is labelled by the second action of the process Pi,
  - etc.

For each labelled point p on a thread we denote by act(p) the label of this point. Assume that there is drawn on a plane a couple of parallel threads

θ1  θ2    (3.4)

where θi (i = 1, 2) represents a possible variant of the behavior of the process Pi in the process {P1, P2}.

Consider those labelled points on the threads from (3.4) which correspond to actions of the type (c), i.e. to communications of the processes P1 and P2. Let p be one of such points, and let it be, for example, on the thread θ1. According to the definition of a communication, at the same time at which the action act(p) is executed, the process P2 executes a complementary action, i.e. there is a point p' on the thread θ2 such that

· act(p') is the action complementary to act(p), and

· the actions act(p) and act(p') are executed at the same time.

Note that

· on the thread θ2 there may be several points labelled by the complement of act(p), but exactly one of these points corresponds to the action which is executed jointly with the action corresponding to the point p, and

· on the thread θ1 there may be several points with the label act(p), but exactly one of these points corresponds to the action which is executed jointly with the action corresponding to the point p'.

Transform our diagram of threads (3.4) as follows: for each pair of points p, p' with the above properties


· join the points p and p' by an arrow,

  - the start of which is the one of these points which has a label of the form α!, and
  - the end of which is the other of these points,

· draw the label α on this arrow, and

· replace the labels of the points p and p' by τ.

The arrow from p to p' is called a synchronization arrow. Such arrows are usually drawn horizontally. After such changes for all pairs of points labelled by actions of the type (c), we obtain a diagram which is called a Message Sequence Chart (MSC). This diagram represents one of the possible variants of execution of the process {P1, P2}. We shall denote the set of all MSCs, each of which corresponds to some variant of execution of the process {P1, P2}, by

Beh{P1, P2}

Consider the following example of a process of the form {P1, P2}:

· P1 is a model of a vending machine, whose behavior is given by

P1 = coin?.chocolate!.0    (3.5)

(i.e. the machine gets a coin, gives a chocolate, and then breaks)

· P2 is a model of a customer, whose behavior is given by

P2 = coin!.chocolate?.0    (3.6)

(i.e. the customer drops a coin, receives a chocolate, and then ceases to function as a customer).



The threads of these processes are two vertical lines: the thread of P1 with points labelled coin? and chocolate!, and the thread of P2 with points labelled coin! and chocolate?.

If all actions on these threads are actions of the type (c), then this diagram can be transformed into an MSC with two synchronization arrows: an arrow labelled coin from the point coin! on the thread of P2 to the point coin? on the thread of P1, and an arrow labelled chocolate from the point chocolate! on the thread of P1 to the point chocolate? on the thread of P2.
However, the following variant of execution of the process {P1, P2} is also possible:

· the first actions of P1 and P2 are of the type (c), i.e. the customer drops a coin and the machine accepts the coin,

· the second action of the automaton P1 is a communication with a process that is external with respect to {P1, P2} (for example, a thief walked up to the machine and took the chocolate before the customer P2 was able to take it).

In this situation the customer cannot execute his second action as an internal action of {P1, P2}. According to the description of the process P2, in this case two variants of the behavior of the customer are possible.


1. The customer remains in a state of endless waiting. The corresponding MSC consists of the synchronization arrow labelled coin and an unmatched point chocolate! on the thread of the machine.

2. The customer successfully completes his work. This is the case if some process external to {P1, P2} gives a chocolate to the customer. The corresponding MSC consists of the synchronization arrow labelled coin and two unmatched points: chocolate! on the thread of the machine and chocolate? on the thread of the customer.

Now consider the general question: how can a process of the form {P1, P2} be defined explicitly, i.e. in terms of states and transitions? At first glance this question is incorrect, because {P1, P2} must be a model of a parallel execution of the processes P1 and P2, in which

· a simultaneous execution of actions by both processes P1, P2 may be possible,


· and, therefore, the process {P1, P2} could execute such "actions" as pairs of actions from the set Act, which cannot themselves belong to the set Act (by assumption).

Note, however, that absolute simultaneity holds only for those pairs of actions that generate an internal action of the process {P1, P2} of the type (c). For all other pairs of actions of the processes P1 and P2, even if they occurred simultaneously (from the point of view of an external observer), we can assume without loss of generality that one of them happened a little earlier or a little later than the other. Thus, we can assume that the process {P1, P2} executes sequentially, i.e. under any variant of an execution of the process {P1, P2} the executed actions form some linearly ordered sequence

tr = (act1, act2, ...)    (3.7)

in which the actions are ordered by the time of their execution: first act1 was executed, then act2, etc.

Because each possible variant of an execution of the process {P1, P2} can be represented by an MSC, we can assume that sequence (3.7) can be obtained by some linearization of this MSC (i.e. by "pulling" it into a chain). For a definition of a linearization of an MSC we introduce some auxiliary concepts and notations.

Let C be an MSC. Then

· Points(C) denotes the set of all points belonging to the MSC C,

· for each point p ∈ Points(C), act(p) denotes the action ascribed to the point p,

· for each pair of points p, p' ∈ Points(C) the formula p ≺ p' means that one of the following conditions holds:

  - p and p' are on the same thread, and p' is lower than p, or
  - there is a synchronization arrow from p to p',

· for each pair of points p, p' ∈ Points(C) the formula p ⪯ p' means that either p = p', or there is a sequence of points p1, ..., pk such that

  - p = p1, p' = pk,
  - for each i = 1, ..., k - 1: pi ≺ pi+1.
The relation ⪯ on the points of an MSC can be regarded as a relation of chronological order, i.e. the formula p ⪯ p' can be interpreted as stating that

· the points p and p' are the same or are connected by a synchronization arrow (i.e. the actions in p and p' coincide in time), or

· the action in p' occurred later than the action in p.

The exact definition of a linearization of an MSC has the following form. Let

· C be an MSC,

· tr be a sequence of actions of the form (3.7), and

· Ind(tr) be the set of indices of elements of the sequence tr, i.e. Ind(tr) = {1, 2, ...} (this set can be finite or infinite).

The sequence tr is called a linearization of the MSC C if there is a surjective mapping

lin : Points(C) → Ind(tr)

satisfying the following conditions.

1. For each pair p, p' ∈ Points(C):

p ⪯ p'  ⟹  lin(p) ≤ lin(p')
2. For each pair p, p' ∈ Points(C) the equality lin(p) = lin(p') holds if and only if

· p = p', or

· there is a synchronization arrow from p to p'.

3. For each p ∈ Points(C):

act(p) = act_{lin(p)}

i.e. the mapping lin

· preserves the chronological order,

· identifies those points of the MSC C which correspond to one action of {P1, P2}, and

· does not identify any other points.

Denote by Lin(C) the set of all linearizations of the MSC C. Now the problem of an explicit description of the process {P1, P2} can be formulated as follows: construct a process P satisfying the condition

Tr(P) = ∪_{C ∈ Beh{P1,P2}} Lin(C)    (3.8)

i.e. in the process P all linearizations of every possible joint behavior of the processes P1 and P2 should be represented. Condition (3.8) is justified by the following consideration: because we do not know

· how the clocks in the processes P1 and P2 are related, and

· what is the length of a stay in each state in which these processes fall,

we must take into account every possible order of execution of actions which does not contradict the relation of chronological order.

We begin the construction of a process P satisfying condition (3.8). Let the processes P1 and P2 have the form

Pi = (Si, s_i^0, Ri)  (i = 1, 2)


Consider any linearization tr of an arbitrary MSC from Beh{P1, P2}:

tr = (a1, a2, ...)

Draw a line, which will be interpreted as a scale of time. Select on this line points p1, p2, ... labelled by the actions a1, a2, ... respectively, such that these actions are located on the line in the same order in which they are listed in tr. Let the symbols I0, I1, I2, ... denote the following sections of this line:

· I0 is the set of all points of the line before the point p1, i.e. I0 = ]-∞, p1[,

· for each i ≥ 1 the section Ii consists of the points between pi and pi+1: Ii = ]pi, pi+1[.

Each of these sections Ii can be interpreted as an interval of time during which the process P does not perform any action, i.e. at the times between pi and pi+1 the processes P1 and P2 are in fixed states (s1)i and (s2)i respectively. Denote by si the pair ((s1)i, (s2)i). This pair can be interpreted as the state of the process P in which it stays at each time from the interval Ii.

By the definition of the sequence tr, we have one of two situations.

1. The action ai has the type (a) or (b), i.e. ai was executed by one of the processes included in P. There are two cases.

(a) The action ai was executed by the process P1. In this case we have the following relation between the states si and si+1:

· (s1)i --ai--> (s1)i+1 ∈ R1,
· (s2)i+1 = (s2)i.

(b) The action ai was executed by the process P2. In this case we have the following relation between the states si and si+1:

· (s2)i --ai--> (s2)i+1 ∈ R2,
· (s1)i+1 = (s1)i.

2. The action ai is of the type (c). In this case we have the following relation between the states si and si+1:

· (s1)i --a--> (s1)i+1 ∈ R1,
· (s2)i --ā--> (s2)i+1 ∈ R2

for some a ∈ Act \ {τ}, where ā denotes the action complementary to a.

The above properties of the sequence tr can be reformulated as follows: tr is a trace of the process

(S, s^0, R)    (3.9)

whose components are defined as follows:

· S = S1 × S2 = {(s1, s2) | s1 ∈ S1, s2 ∈ S2},

· s^0 = (s_1^0, s_2^0),

· for

  - each transition s1 --a--> s1' from R1, and
  - each state s ∈ S2,

  R contains the transition (s1, s) --a--> (s1', s),

· for

  - each transition s2 --a--> s2' from R2, and
  - each state s ∈ S1,

  R contains the transition (s, s2) --a--> (s, s2'),

· for each pair of transitions with complementary labels

  s1 --a--> s1' ∈ R1
  s2 --ā--> s2' ∈ R2

  R contains the transition (s1, s2) --τ--> (s1', s2').

It is easy to show the converse: each trace of process (3.9) is a linearization of some MSC C from the set Beh{P1, P2}. Thus, an explicit representation of the process P = {P1, P2} can be defined as process (3.9). This process is called a parallel composition of the processes P1 and P2, and is denoted by

P1 | P2
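Definition (3.9) translates directly into the triple encoding. The Python sketch below is ours; it assumes the convention that α? and α! are the complementary forms of a name α and that the internal action is written "tau".

```python
def complement(a):
    """coin? and coin! are complementary; tau has no complement."""
    if a.endswith("?"):
        return a[:-1] + "!"
    if a.endswith("!"):
        return a[:-1] + "?"
    return None

def par(p1, p2):
    """Parallel composition P1 | P2 as in definition (3.9): interleaving
    of both processes, plus a tau-transition for each pair of
    complementary labels (handshaking)."""
    (S1, s01, R1), (S2, s02, R2) = p1, p2
    S = {(u, v) for u in S1 for v in S2}
    R = {((u, s), a, (v, s)) for (u, a, v) in R1 for s in S2}
    R |= {((s, u), a, (s, v)) for (u, a, v) in R2 for s in S1}
    R |= {((u1, u2), "tau", (v1, v2))
          for (u1, a, v1) in R1 for (u2, b, v2) in R2
          if a != "tau" and b == complement(a)}
    return (S, (s01, s02), R)

# Vending machine (3.5) and customer (3.6):
p1 = ({0, 1, 2}, 0, {(0, "coin?", 1), (1, "chocolate!", 2)})
p2 = ({0, 1, 2}, 0, {(0, "coin!", 1), (1, "chocolate?", 2)})
S, s0, R = par(p1, p2)
print(len(S))                          # 9
print(((0, 0), "tau", (1, 1)) in R)    # True
```

The state count 9 = 3 × 3 illustrates the remark below about the product of the sizes of the component state sets.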

We give an example of the process P1 | P2 in the case where the processes P1 and P2 represent the behaviors of a vending machine and a customer (see (3.5) and (3.6)). Graph representations of these processes are the chains

s10 --coin?--> s11 --chocolate!--> s12
s20 --coin!--> s21 --chocolate?--> s22

The graph representation of the process P1 | P2 has the 9 states (s1i, s2j) (i, j ∈ {0, 1, 2}) and the following transitions:

· (s10, s2j) --coin?--> (s11, s2j) and (s11, s2j) --chocolate!--> (s12, s2j) for each j ∈ {0, 1, 2},

· (s1i, s20) --coin!--> (s1i, s21) and (s1i, s21) --chocolate?--> (s1i, s22) for each i ∈ {0, 1, 2},

· (s10, s20) --τ--> (s11, s21) and (s11, s21) --τ--> (s12, s22).

Note that the size of the set of states of P1 | P2 is equal to the product of the sizes of the sets of states of P1 and P2. Thus, the size of a description of the process P1 | P2 may substantially exceed the total size of the descriptions of its components P1 and P2. This may make it impossible to analyze this process in its explicit form, because of its high complexity. Therefore, in practical problems of analysis of processes of the form P1 | P2, instead of an explicit construction of P1 | P2, one constructs a process in which each MSC from Beh{P1, P2}

· is not represented by all possible linearizations, but

· is represented by at least one linearization.

The complexity of such a process can be significantly less than the complexity of the process P1 | P2. A construction of a process of this kind makes sense, for example, if the analyzed property φ of the process P1 | P2 has the following quality: for arbitrary C ∈ Beh{P1, P2},

· if φ holds for one of the linearizations of C,


· then φ holds for all linearizations of C.

Typically, a process in which each MSC from Beh{P1, P2} is represented by at least one linearization is constructed as a certain subprocess of the process P1 | P2, i.e. it is obtained from P1 | P2 by removing some states and the associated transitions. Therefore, such processes are said to be reduced. The problem of constructing reduced processes is called partial order reduction. This problem has been intensively studied by many leading experts in the field of verification.

Consider, for example, a reduced process for the above process P1 | P2, consisting of a vending machine and a customer.
# "

(s10 , s20 )

! d d d d d

coin ! E(s , s ) 10 21






coin?
d d d c

? (s11 , s21 ) chocolate E(s11 , s22 )
d d d d


d

chocolate !
d d d c

(s12 , s22 )




In conclusion, we note that the problem of analysis of processes consisting of several communicating components most often arises in situations where such components are computer programs and hardware devices of a computer system. A communication between programs in such a system is implemented by mediators, i.e. by certain processes which can communicate synchronously with the programs. Communications between programs are usually implemented in the following two ways.

1. Communication through shared memory.


In this case the mediators are memory cells accessed by both programs. A communication in this case can be implemented as follows: one program writes information into these cells, and the other program reads the contents of the cells.

2. Communication by sending messages.

In this case a mediator is a channel which can be used by the programs for the following actions:

· sending a message to the channel, and

· receiving a message from the channel.

The channel may be implemented as a buffer storing several messages. Messages in the channel can be organized on the principle of a queue (i.e. messages leave the channel in the same order in which they came).

3.5 Restriction

Let

· P = (S, s^0, R) be a process, and

· L be a subset of the set Names.

A restriction of P with respect to L is the process

P \ L = (S, s^0, R')

which is obtained from P by removing those transitions that have labels with names from L, i.e.

R' = { (s --a--> s') ∈ R | a = τ, or name(a) ∉ L }

As a rule, the operation of restriction is used together with the operation of parallel composition, for the representation of processes which

· consist of several communicating components, and
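In the triple encoding the restriction operation is a simple filter over transitions. The following Python sketch is ours; it assumes the convention that an action string ends in ? or !, and that name(coin?) = name(coin!) = coin, with "tau" as the internal action.

```python
def name(a):
    """The name of an action: coin? and coin! both have the name coin."""
    return a.rstrip("?!")

def restrict(process, L):
    """Restriction P \\ L: remove every transition whose label is a
    non-tau action with a name in L."""
    S, s0, R = process
    R2 = {(u, a, v) for (u, a, v) in R if a == "tau" or name(a) not in L}
    return (S, s0, R2)

# A small process with one internal and two visible transitions:
P = ({0, 1, 2}, 0,
     {(0, "tau", 1), (0, "coin?", 1), (1, "chocolate!", 2)})
S, s0, R2 = restrict(P, {"coin", "chocolate"})
print(R2)   # {(0, 'tau', 1)}
```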


· a communication between these components must satisfy certain restrictions.

For example, let the processes P1 and P2 represent the behavior of a vending machine and a customer respectively, as discussed in the previous section. We would like to describe a process which models such a parallel execution of the processes P1 and P2, in which these processes can execute the actions associated with buying and selling a chocolate only jointly. The desired process can be obtained by applying to the process P1 | P2 the operation of restriction with respect to the set of names of all actions related to buying and selling a chocolate. This process is described by the expression

P = (P1 | P2) \ {coin, chocolate}    (3.10)

The graph representation of process (3.10) consists of the 9 states of P1 | P2 and only its two τ-transitions, since all transitions whose labels have the names coin or chocolate are removed. After removing unreachable states we get a process with the graph representation

(s10, s20) --τ--> (s11, s21) --τ--> (s12, s22)

Consider another example. Change the definitions of the vending machine and the customer: let each of them also send a signal indicating successful completion of its work. For example, these processes may have the following form:

P1 = coin?.chocolate!.clank!.0
P2 = coin!.chocolate?.hurrah!.0

In this case the graph representation of process (3.10), after removal of unreachable states, is the diamond

(s10, s20) --τ--> (s11, s21) --τ--> (s12, s22)
(s12, s22) --clank!--> (s13, s22) --hurrah!--> (s13, s23)
(s12, s22) --hurrah!--> (s12, s23) --clank!--> (s13, s23)

This process allows execution of only those non-internal actions that are not related to buying and selling a chocolate. Note that in this case

· in process (3.10) nondeterminism is present, although

· in its components P1 and P2 nondeterminism is absent.

The cause of the nondeterminism in (3.10) is our incomplete knowledge about the simulated system: since we do not have precise knowledge about the durations of the actions clank! and hurrah!, the model of the system must allow any order of execution of these actions.

3.6 Renaming

The last operation that we consider is a unary operation called a renaming. To define this operation, it is necessary to define a mapping of the form

f : Names → Names    (3.11)

The effect of the operation of renaming on a process P is a change of the labels of the transitions of P:

· any label of the form α? is replaced by f(α)?, and

· any label of the form α! is replaced by f(α)!.

The resulting process is denoted by P[f]. We shall refer to any mapping of the form (3.11) also as a renaming. If a renaming f acts non-identically only on the names α1, ..., αn and maps them to the names β1, ..., βn respectively, then the process P[f] can also be denoted by

P[β1/α1, ..., βn/αn]

The operation of renaming can be used, for example, in the following situation: this operation allows one to use several copies of a process P as different components in the construction of a more complex process P'. Renaming serves for the prevention of collisions between the names of actions used in different occurrences of P in P'.
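In the triple encoding, renaming maps the name part of every label while preserving the ?/! suffix and leaving τ untouched. The Python sketch below is ours; it assumes labels are strings ending in ? or !, except the internal action "tau", and represents f as a dict that is the identity on missing keys.

```python
def rename(process, f):
    """Renaming P[f]: a label alpha? becomes f(alpha)?, alpha! becomes
    f(alpha)!; tau is unchanged.  f is a dict on names, identity elsewhere."""
    S, s0, R = process

    def apply(a):
        if a == "tau":
            return a
        base, suffix = a[:-1], a[-1]     # split "coin?" into "coin" and "?"
        return f.get(base, base) + suffix

    return (S, s0, {(u, apply(a), v) for (u, a, v) in R})

P = ({0, 1}, 0, {(0, "coin?", 1), (1, "tau", 0)})
Q = rename(P, {"coin": "token"})
print(Q[2] == {(0, "token?", 1), (1, "tau", 0)})   # True
```

With this encoding, P[token/coin] in the notation above is written rename(P, {"coin": "token"}).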

3.7 Properties of operations on processes

In this section we give some elementary properties of the operations on processes defined above. All these properties have the form of equalities. For the first two properties we give proofs; the other properties are listed without comments, since they are evident.

Recall (see section 2.7) that we consider two processes as equal if

· they are isomorphic, or

· one of these processes can be obtained from the other by removing some unreachable states and the transitions which contain unreachable states.


1. The operation + is associative, i.e. for any processes P1, P2 and P3 the following equality holds:

(P1 + P2) + P3 = P1 + (P2 + P3)    (3.12)

Indeed, let the processes Pi (i = 1, 2, 3) have the form

Pi = (Si, s_i^0, Ri)  (i = 1, 2, 3)    (3.13)

and let their sets of states S1, S2 and S3 be pairwise disjoint. Then both sides of equality (3.12) are equal to the process P = (S, s^0, R), whose components are defined as follows:

· S = S1 ∪ S2 ∪ S3 ∪ {s^0}, where s^0 is a new state (which does not belong to S1, S2 and S3),

· R contains all transitions from R1, R2 and R3,

· for each transition from Ri (i = 1, 2, 3) of the form s_i^0 --a--> s, R contains the transition s^0 --a--> s.
The property of associativity of the operation + allows to use expressions of the form P1 + . . . + Pn (3.14) because for any parenthesization of the expression (3.14) we shall get one and the same process. A process, which is a value of expression (3.14) can be described explicitly as follows. Let the processes Pi (i = 1, . . . , n) have the form Pi = (Si , s0 , Ri ) (i = 1, . . . , n) i (3.15)

with the sets of states S1 , . . . , Sn are pairwise disjoint. Then a process, which is a value of the expression (3.14), has the form P = (S, s0 , R) where the components S, s0 , R are defined as follows: 50


· S = S1 . . . Sn {s0 }, where s0 is a new state (which does not belong to S1 . . . , Sn ) · R contains all transitions from R1 , . . . , R
a n

def

· for each transition from Ri (i = 1, . . . , n) of the form s R contains the transition s
0 i

E
a

s s

0

E
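The explicit description of P1 + . . . + Pn given above can be sketched in code (an illustration under an assumed (states, s0, transitions) encoding; tagging each state with the index of its process is our way of making the state sets disjoint):

```python
# Alternative composition P1 + ... + Pn: make the state sets disjoint by
# tagging, add a fresh initial state s0, keep all old transitions, and for
# every transition leaving an old initial state add a copy leaving s0.

def alt_sum(*processes):
    s0 = "s0"                                     # the new initial state
    states = {s0}
    transitions = set()
    for i, (Si, si0, Ri) in enumerate(processes):
        states |= {(i, s) for s in Si}            # disjoint copy of Si
        for (s, a, t) in Ri:
            transitions.add(((i, s), a, (i, t)))
            if s == si0:                          # s_i^0 --a--> t yields s0 --a--> t
                transitions.add((s0, a, (i, t)))
    return (states, s0, transitions)

P1 = ({0, 1}, 0, {(0, "a!", 1)})
P2 = ({0, 1}, 0, {(0, "b!", 1)})
print(alt_sum(P1, P2))
```

As in the text, the old initial states remain in the state set; they may become unreachable, which is harmless under the notion of equality recalled in section 3.7.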

2. The operation | is associative, i.e. for any processes P1, P2 and P3 the following equality holds:

(P1 | P2) | P3 = P1 | (P2 | P3)    (3.16)

Indeed, let the processes Pi (i = 1, 2, 3) have the form (3.13). Then both sides of (3.16) are equal to the process P = (S, s^0, R) whose components are defined as follows:

· S = S1 × S2 × S3 = {(s1, s2, s3) | s1 ∈ S1, s2 ∈ S2, s3 ∈ S3}

· s^0 = (s_1^0, s_2^0, s_3^0)

· for each transition s1 --a--> s'1 from R1 and each pair of states s2 ∈ S2, s3 ∈ S3, R contains the transition (s1, s2, s3) --a--> (s'1, s2, s3)

· for each transition s2 --a--> s'2 from R2 and each pair of states s1 ∈ S1, s3 ∈ S3, R contains the transition (s1, s2, s3) --a--> (s1, s'2, s3)

· for each transition s3 --a--> s'3 from R3 and each pair of states s1 ∈ S1, s2 ∈ S2, R contains the transition (s1, s2, s3) --a--> (s1, s2, s'3)

· for each pair of transitions with complementary labels

  s1 --a--> s'1 ∈ R1 and s2 --ā--> s'2 ∈ R2

and each state s3 ∈ S3, R contains the transition (s1, s2, s3) --τ--> (s'1, s'2, s3)

· for each pair of transitions with complementary labels

  s1 --a--> s'1 ∈ R1 and s3 --ā--> s'3 ∈ R3

and each state s2 ∈ S2, R contains the transition (s1, s2, s3) --τ--> (s'1, s2, s'3)

· for each pair of transitions with complementary labels

  s2 --a--> s'2 ∈ R2 and s3 --ā--> s'3 ∈ R3

and each state s1 ∈ S1, R contains the transition (s1, s2, s3) --τ--> (s1, s'2, s'3).

The associativity of the operation | allows us to use expressions of the form

P1 | . . . | Pn    (3.17)

because every parenthesization of the expression (3.17) yields one and the same process. A process which is a value of expression (3.17) can be described explicitly as follows. Let the processes Pi (i = 1, . . . , n) have the form (3.15). Then a process which is a value of the expression (3.17) has the form P = (S, s^0, R), where the components S, s^0, R are defined as follows:

· S = S1 × . . . × Sn = {(s1, . . . , sn) | s1 ∈ S1, . . . , sn ∈ Sn}

· s^0 = (s_1^0, . . . , s_n^0)

· for
  – each i ∈ {1, . . . , n},
  – each transition si --a--> s'i from Ri, and
  – each list of states s1, . . . , s_{i-1}, s_{i+1}, . . . , sn, where ∀ j ∈ {1, . . . , n} sj ∈ Sj,

R contains the transition

(s1, . . . , sn) --a--> (s1, . . . , s_{i-1}, s'i, s_{i+1}, . . . , sn)

· for
  – each pair of indices i, j ∈ {1, . . . , n}, where i < j,
  – each pair of transitions with complementary labels of the form

    si --a--> s'i ∈ Ri and sj --ā--> s'j ∈ Rj

  – and each list of states s1, . . . , s_{i-1}, s_{i+1}, . . . , s_{j-1}, s_{j+1}, . . . , sn, where ∀ k ∈ {1, . . . , n} sk ∈ Sk,

R contains the transition

(s1, . . . , sn) --τ--> (s1, . . . , s_{i-1}, s'i, s_{i+1}, . . . , s_{j-1}, s'j, s_{j+1}, . . . , sn)
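For two processes the construction above can be sketched as follows (an illustration with the same assumed (states, s0, transitions) encoding; complementary actions are modelled as "a?" and "a!", and a synchronization step is labelled "tau"):

```python
# Binary parallel composition P1 | P2: states are pairs, each component
# moves independently, and a pair of transitions with complementary labels
# yields a synchronization step labelled "tau".

def complementary(a, b):
    return (a != "tau" and b != "tau"
            and a[:-1] == b[:-1] and a[-1] != b[-1])   # same name, "?" vs "!"

def parallel(P1, P2):
    (S1, s10, R1), (S2, s20, R2) = P1, P2
    states = {(x, y) for x in S1 for y in S2}
    transitions = set()
    for (s, a, t) in R1:                               # P1 moves alone
        transitions |= {((s, y), a, (t, y)) for y in S2}
    for (s, a, t) in R2:                               # P2 moves alone
        transitions |= {((x, s), a, (x, t)) for x in S1}
    for (s1, a, t1) in R1:                             # synchronization
        for (s2, b, t2) in R2:
            if complementary(a, b):
                transitions.add(((s1, s2), "tau", (t1, t2)))
    return (states, (s10, s20), transitions)

A = ({0, 1}, 0, {(0, "a!", 1)})
B = ({0, 1}, 0, {(0, "a?", 1)})
print(sorted(t for t in parallel(A, B)[2] if t[1] == "tau"))
```

The n-ary case of the text is the iterated application of this binary construction, which is legitimate precisely because of the associativity proved above.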

3. The operation + is commutative, i.e. for any processes P1 and P2 the following equality holds:

P1 + P2 = P2 + P1

4. The operation | is commutative, i.e. for any processes P1 and P2 the following equality holds:

P1 | P2 = P2 | P1

5. 0 is a neutral element with respect to the operation | :

P | 0 = P

The operation + has a similar property, but in that property the concept of strong equivalence of processes (defined below) is used instead of equality of processes. This property, as well as the idempotency of the operation +, is proved in section 4.5 (theorem 4).

6. 0 \ L = 0

7. 0[f] = 0

8. P \ L = P, if L ∩ names(Act(P)) = ∅ (recall that Act(P) denotes the set of actions a ∈ Act \ {τ} such that P contains a transition with the label a)

9. (a.P) \ L = 0, if a ≠ τ and name(a) ∈ L, and (a.P) \ L = a.(P \ L) otherwise

10. (P1 + P2) \ L = (P1 \ L) + (P2 \ L)

11. (P1 | P2) \ L = (P1 \ L) | (P2 \ L), if L ∩ names(Act(P1)) ∩ names(Act(P2)) = ∅

12. (P \ L1) \ L2 = P \ (L1 ∪ L2)

13. P[f] \ L = (P \ f^{-1}(L))[f]

14. P[id] = P, where id is the identity function

15. P[f] = P[g], if the restrictions of the functions f and g to the set names(Act(P)) are equal

16. (a.P)[f] = f(a).(P[f])

17. (P1 + P2)[f] = P1[f] + P2[f]

18. (P1 | P2)[f] = P1[f] | P2[f], if the restriction of f to the set names(Act(P1) ∪ Act(P2)) is an injective mapping

19. (P \ L)[f] = P[f] \ f(L), if the mapping f is injective

20. P[f][g] = P[g ∘ f]


Chapter 4

Equivalences of processes

4.1 A concept of an equivalence of processes

The same behavior can be represented by different processes. For example, consider two processes:

· the first process has a single state, with a transition labelled a from this state to itself, and
· the second process has an infinite chain of states s0 --a--> s1 --a--> s2 --a--> . . .

The first process has only one state, and the second has an infinite set of states, but these processes represent the same behavior, which consists of a perpetual execution of the action a. One of the important problems of the theory of processes is to find an appropriate definition of equivalence of processes, such that processes are equivalent according to this definition if and only if they represent a similar behavior. In this chapter we present several definitions of equivalence of processes. In every particular situation the choice of an appropriate variant of the concept of equivalence of processes should be determined by the particular understanding (i.e. the one related to this situation) of similarity of behavior of processes. In sections 4.2 and 4.3 we introduce the concepts of trace equivalence and strong equivalence of processes. These concepts are used in situations where all actions executed by the processes have equal status.

In sections 4.8 and 4.9 we consider other variants of the concept of equivalence of processes, namely observational equivalence and observational congruence. These concepts are used in situations when we consider the invisible action τ as negligible, i.e. when we assume that two traces are equivalent if one of them can be obtained from the other by insertions and/or deletions of τ. With each possible definition of equivalence of processes two natural problems are related.

1. Recognition, for two given processes, whether they are equivalent.

2. Construction, for a given process P, of a process P' which is the least complicated (for example, has a minimum number of states) among all processes that are equivalent to P.

4.2

Trace equivalence of processes

As mentioned above, we would like to consider two processes as equivalent if they describe the same behavior. So, if we consider a behavior of a process as a generation of a trace, then one of the necessary conditions of equivalence of processes P1 and P2 is the coincidence of the sets of their traces:

Tr(P1) = Tr(P2)    (4.1)

In some situations condition (4.1) can be used as a definition of equivalence of P1 and P2. However, the following example shows that this condition does not reflect one important aspect of an execution of processes.

[Diagram (4.2): the left process has a single a-transition leading to a state from which both b and c can be executed; the right process has two a-transitions, after the left of which only b can be executed, and after the right of which only c can be executed.]    (4.2)


Sets of traces of these processes are equal:

Tr(P1) = Tr(P2) = {ε, a, ab, ac}

(where ε is the empty sequence). However, these processes have the following essential difference:

· in the left process, after the execution of the first action (a) there is a possibility to choose the next action (b or c), while
· in the right process, after the execution of the first action there is no such possibility:
  – if the first transition occurred along the left edge, then the second action can only be the action b, and
  – if the first transition occurred along the right edge, then the second action can only be the action c,

i.e. the second action was predetermined before the execution of the first action. If we do not wish to consider these processes as equivalent, then condition (4.1) must be strengthened in some way. One version of such a strengthening is described below. In order to formulate it, we define the notion of a trace from a state of a process. Each variant of an execution of a process P = (S, s^0, R) we interpret as a generation of a sequence of transitions

s_0 --a1--> s_1 --a2--> s_2 --a3--> . . .    (4.3)

starting from the initial state s^0 (i.e. s_0 = s^0). We can consider a generation of a sequence (4.3) not only from the initial state s^0, but from an arbitrary state s ∈ S, i.e. consider a sequence of the form (4.3) in which s_0 = s. The sequence (a1, a2, . . .) of labels of these transitions we shall call a trace starting at s. The set of all such traces we denote by Tr_s(P).

Let P1 and P2 be processes of the form

Pi = (Si, s_i^0, Ri)  (i = 1, 2)

Consider a finite sequence of transitions of P1 of the form

s_1^0 = s_0 --a1--> s_1 --a2--> . . . --an--> s_n  (n ≥ 0)    (4.4)

(the case n = 0 corresponds to the empty sequence of transitions (4.4), in which s_n = s_1^0). The sequence (4.4) can be considered as an initial phase of an execution of the process P1, and every trace from Tr_{s_n}(P1) can be considered as a continuation of this phase. The processes P1 and P2 are said to be trace equivalent if

· for each initial phase (4.4) of an execution of the process P1 there is an initial phase of an execution of the process P2

s_2^0 = s'_0 --a1--> s'_1 --a2--> . . . --an--> s'_n    (4.5)

with the following properties:
  – (4.5) has the same trace a1 . . . an as (4.4), and
  – at the end of (4.5) there is the same choice of further execution as at the end of (4.4), i.e.

    Tr_{s_n}(P1) = Tr_{s'_n}(P2)    (4.6)

· and the symmetric condition holds: for each sequence of transitions of P2 of the form (4.5) there is a sequence of transitions of P1 of the form (4.4) such that (4.6) holds.

These conditions have the following disadvantage: they involve

· unbounded sets of sequences of transitions of the form (4.4) and (4.5), and
· unbounded sets of traces in (4.6).

Therefore, checking these conditions seems to be difficult even when the processes P1 and P2 are finite. There is a problem of finding necessary and sufficient conditions of trace equivalence that can be algorithmically checked for given processes P1 and P2 in the case when these processes are finite. Sometimes one considers an equivalence between processes which is obtained from trace equivalence by replacing condition (4.6) with the weaker condition

Act(s_n) = Act(s'_n)

where for each state s, Act(s) denotes the set of all actions a ∈ Act such that there is a transition starting at s with the label a.
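For finite processes the sets Tr_s(P) can be explored mechanically, at least up to a bounded trace length. The following sketch (not from the text) does this for the pair of processes of example (4.2): their trace sets coincide, while the trace sets from the states reached after the action a differ:

```python
# Compute the set of traces of length at most k starting at state s,
# for a process encoded as (states, s0, transitions).

def traces(P, s, k):
    """All traces of length <= k starting at state s."""
    _, _, R = P
    result = {()}                                  # the empty trace
    if k > 0:
        for (src, a, t) in R:
            if src == s:
                result |= {(a,) + tr for tr in traces(P, t, k - 1)}
    return result

# Left process of (4.2): a, then a choice between b and c.
P1 = ({0, 1, 2, 3}, 0, {(0, "a", 1), (1, "b", 2), (1, "c", 3)})
# Right process of (4.2): two a-edges, each committing to b or to c.
P2 = ({0, 1, 2, 3, 4}, 0, {(0, "a", 1), (0, "a", 2), (1, "b", 3), (2, "c", 4)})

print(traces(P1, 0, 2) == traces(P2, 0, 2))        # True
print(traces(P1, 1, 1))                            # the traces after a, left process
```

The length bound k makes the computation terminate even for cyclic processes; it does not decide trace equivalence in general, but it exhibits the difference in this example.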


4.3

Strong equivalence

Another variant of the concept of equivalence of processes is strong equivalence. To define the concept of strong equivalence, we introduce auxiliary notations. After the process

P = (S, s^0, R)    (4.7)

has executed its first action and turned to a new state s_1, its behavior will be indistinguishable from the behavior of the process

P' = (S, s_1, R)    (4.8)

having the same components as P, except the initial state. We shall consider the diagram

P --a--> P'    (4.9)

as an abridged notation for the statement that

· P and P' are processes of the form (4.7) and (4.8) respectively, and
· R contains the transition s^0 --a--> s_1.

(4.9) can be interpreted as a statement that the process P can

· execute the action a, and then
· behave like the process P'.

The concept of strong equivalence is based on the following understanding of equivalence of processes: if we consider processes P1 and P2 as equivalent, then the following condition must be satisfied:

· if one of these processes Pi can
  – execute some action a ∈ Act,
  – and then behave like some process P'_i,
· then the other process Pj (j ∈ {1, 2} \ {i}) also must be able to
  – execute the same action a,
  – and then behave like some process P'_j which is equivalent to P'_i.

Thus, the desired equivalence must be a binary relation µ on the set of all processes with the following properties.

(1) If (P1, P2) ∈ µ, and

P1 --a--> P'_1    (4.10)

for some process P'_1, then there is a process P'_2 such that

P2 --a--> P'_2    (4.11)

and

(P'_1, P'_2) ∈ µ    (4.12)

(2) The symmetric property: if (P1, P2) ∈ µ, and (4.11) holds for some process P'_2, then there is a process P'_1 such that (4.10) and (4.12) hold.

Denote by the symbol M the set of all binary relations which possess the above properties. The set M is nonempty: it contains, for example, the diagonal relation, which consists of all pairs of the form (P, P), where P is an arbitrary process. The question naturally arises: which of the relations from M can be used for a definition of strong equivalence? We suggest the most simple answer to that question: we will consider P1 and P2 as strongly equivalent if and only if there exists at least one relation µ ∈ M which contains the pair (P1, P2). Thus, we define the desired relation of strong equivalence on the set of all processes as the union of all relations from M. This relation is denoted by ∼. It is not difficult to prove that

· ∼ ∈ M, and
· ∼ is an equivalence relation, because
  – reflexivity of ∼ follows from the fact that the diagonal relation belongs to M,
  – symmetry of ∼ follows from the fact that if µ ∈ M, then µ^{-1} ∈ M,
  – transitivity of ∼ follows from the fact that if µ1 ∈ M and µ2 ∈ M, then µ1 ∘ µ2 ∈ M.


If processes P1 and P2 are strongly equivalent, then this fact is denoted by

P1 ∼ P2

It is easy to prove that if processes P1 and P2 are strongly equivalent, then they are trace equivalent. To illustrate the concept of strong equivalence we give a couple of examples.

1. The processes

a.(b.0 + c.0) and a.b.0 + a.c.0    (4.13)

(these are the processes depicted in (4.2)) are not strongly equivalent, because they are not trace equivalent.

2. The processes

a.b.0 + a.b.0 and a.b.0

are strongly equivalent.


4.4

Criteria of strong equivalence

4.4.1 A logical criterion of strong equivalence

Let Fm be a set of formulas defined as follows.

· The symbols ⊤ and ⊥ are formulas from Fm.
· If φ ∈ Fm, then ¬φ ∈ Fm.
· If φ ∈ Fm and ψ ∈ Fm, then φ ∧ ψ ∈ Fm.
· If φ ∈ Fm and a ∈ Act, then ⟨a⟩φ ∈ Fm.

Let P be a process and φ ∈ Fm. A value of the formula φ on the process P is an element P(φ) of the set {0, 1} defined as follows.

· P(⊤) = 1, P(⊥) = 0
· P(¬φ) = 1 − P(φ)
· P(φ ∧ ψ) = P(φ) · P(ψ)
· P(⟨a⟩φ) = 1, if there is a process P' such that P --a--> P' and P'(φ) = 1, and P(⟨a⟩φ) = 0 otherwise.

A theory of the process P is the subset Th(P) ⊆ Fm defined as follows:

Th(P) = {φ ∈ Fm | P(φ) = 1}

Theorem 1. Let P1 and P2 be finite processes. Then

P1 ∼ P2  ⟺  Th(P1) = Th(P2)

Proof. Let P1 ∼ P2. The statement that for each φ ∈ Fm the equality P1(φ) = P2(φ) holds can be proven by induction on the structure of φ. We now prove the implication "⟸". Suppose that

Th(P1) = Th(P2)    (4.14)


Let µ be a binary relation on the set of all processes defined as follows:

µ = {(P'_1, P'_2) | Th(P'_1) = Th(P'_2)}

We prove that µ satisfies the definition of strong equivalence. Suppose this does not hold, that is, for example, for some a ∈ Act

(a) there is a process P'_1 such that

P1 --a--> P'_1

(b) but there is no process P'_2 such that

P2 --a--> P'_2    (4.15)

and Th(P'_1) = Th(P'_2).

Condition (b) can be satisfied in two situations:

1. There is no process P'_2 such that (4.15) holds.

2. There exists a process P'_2 such that (4.15) holds, but for each such process P'_2, Th(P'_1) ≠ Th(P'_2).

We show that in both these situations there is a formula φ ∈ Fm such that

P1(φ) = 1,  P2(φ) = 0

which would contradict assumption (4.14).

1. If the first situation holds, then we can take as φ the formula ⟨a⟩⊤.

2. Assume that the second situation holds. Let

P2,1, . . . , P2,n

be a list of all processes P'_2 satisfying (4.15). By assumption, for each i = 1, . . . , n the inequality Th(P'_1) ≠ Th(P2,i) holds, i.e. for each i = 1, . . . , n there is a formula φ_i such that

P'_1(φ_i) = 1,  P2,i(φ_i) = 0

In this situation we can take as φ the formula ⟨a⟩(φ_1 ∧ . . . ∧ φ_n).


For example, let P1 and P2 be the processes (4.13). As stated above, these processes are not strongly equivalent. The following formula can be taken as a justification of the statement that P1 ≁ P2:

φ = ⟨a⟩(⟨b⟩⊤ ∧ ⟨c⟩⊤)

It is easy to prove that P1(φ) = 1 and P2(φ) = 0. There is a problem of finding, for two given processes P1 and P2, a list of formulas φ_1, . . . , φ_n of smallest size such that P1 ∼ P2 if and only if

∀ i = 1, . . . , n  P1(φ_i) = P2(φ_i)
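The computation of the values P(φ) can be sketched directly from the definition (an illustration, not part of the text; formulas are encoded as nested tuples, with "T" and "F" for the two constant formulas and ("dia", a, p) for the modal formula asserting the existence of an a-transition after which p holds):

```python
# Evaluate a formula at a state s of a finite process (states, s0, transitions).

def value(P, s, phi):
    """P(phi) computed at state s of the process P."""
    _, _, R = P
    if phi == "T":
        return 1
    if phi == "F":
        return 0
    if phi[0] == "not":
        return 1 - value(P, s, phi[1])
    if phi[0] == "and":
        return value(P, s, phi[1]) * value(P, s, phi[2])
    if phi[0] == "dia":                            # "there is an a-step to phi"
        _, a, sub = phi
        return int(any(value(P, t, sub) == 1
                       for (src, b, t) in R if src == s and b == a))

# The processes of (4.13) and the distinguishing formula.
P1 = ({0, 1, 2, 3}, 0, {(0, "a", 1), (1, "b", 2), (1, "c", 3)})
P2 = ({0, 1, 2, 3, 4}, 0, {(0, "a", 1), (0, "a", 2), (1, "b", 3), (2, "c", 4)})
phi = ("dia", "a", ("and", ("dia", "b", "T"), ("dia", "c", "T")))

print(value(P1, 0, phi), value(P2, 0, phi))        # 1 0
```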

4.4.2

A criterion of strong equivalence, based on the notion of a bisimulation

Theorem 2. Let P1 and P2 be a couple of processes of the form

Pi = (Si, s_i^0, Ri)  (i = 1, 2)

Then P1 ∼ P2 if and only if there is a relation µ ⊆ S1 × S2 satisfying the following conditions.

0. (s_1^0, s_2^0) ∈ µ.

1. For each pair (s1, s2) ∈ µ and each transition from R1 of the form

s1 --a--> s'1

there is a transition from R2 of the form

s2 --a--> s'2

such that (s'1, s'2) ∈ µ.

2. For each pair (s1, s2) ∈ µ and each transition from R2 of the form

s2 --a--> s'2

there is a transition from R1 of the form

s1 --a--> s'1

such that (s'1, s'2) ∈ µ.

A relation µ satisfying these conditions is called a bisimulation (BS) between P1 and P2.
4.5

Algebraic properties of strong equivalence

Theorem 3. Strong equivalence is a congruence, i.e. if P1 ∼ P2, then

· for each a ∈ Act, a.P1 ∼ a.P2
· for each process P, P1 + P ∼ P2 + P
· for each process P, P1 | P ∼ P2 | P
· for each L ⊆ Names, P1 \ L ∼ P2 \ L
· for each renaming f, P1[f] ∼ P2[f]

Proof. As was stated in section 4.4.2, the statement P1 ∼ P2 is equivalent to the statement that there is a BS µ between P1 and P2. Using this µ, we construct a BS justifying each of the foregoing relationships.

· Let s_(1)^0 and s_(2)^0 be the initial states of the processes a.P1 and a.P2 respectively. Then the relation

{(s_(1)^0, s_(2)^0)} ∪ µ

is a BS between a.P1 and a.P2.

· Let
  – s_(1)^0 and s_(2)^0 be the initial states of P1 + P and P2 + P respectively, and
  – S be the set of states of the process P.

Then
  – the relation

    {(s_(1)^0, s_(2)^0)} ∪ µ ∪ Id_S

    is a BS between P1 + P and P2 + P, and
  – the relation

    {((s1, s), (s2, s)) | (s1, s2) ∈ µ, s ∈ S}

    is a BS between P1 | P and P2 | P.

· The relation µ is a BS
  – between P1 \ L and P2 \ L, and
  – between P1[f] and P2[f].

Theorem 4. Each process P = (S, s^0, R) has the following properties.

1. P + 0 ∼ P
2. P + P ∼ P

Proof.

1. Let s_0^0 be the initial state of the process P + 0. Then the relation

{(s_0^0, s^0)} ∪ Id_S

is a BS between P + 0 and P.


2. By definition of the operation "+", the processes in the left side of the statement P + P ∼ P should be considered as two disjoint isomorphic copies of P of the form

P_(i) = (S_(i), s_(i)^0, R_(i))  (i = 1, 2)

where S_(i) = {s_(i) | s ∈ S}. Let s_0^0 be the initial state of the process P + P. Then the relation

{(s_0^0, s^0)} ∪ {(s_(i), s) | s ∈ S, i = 1, 2}

is a BS between P + P and P.

Below, for

· each process P = (S, s^0, R), and
· each state s ∈ S,

we denote by P(s) the process (S, s, R), which is obtained from P by a replacement of the initial state.

Theorem 5. Let P = (S, s^0, R) be a process, and let the set of all its transitions starting from s^0 have the form

{s^0 --a_i--> s_i | i = 1, . . . , n}

Then

P ∼ a_1.P_1 + . . . + a_n.P_n    (4.16)

where for each i = 1, . . . , n

P_i = P(s_i) = (S, s_i, R)

Proof. (4.16) holds because there is a BS between the left and right sides of (4.16). For a construction of this BS we replace all the processes P_i in the right side of (4.16) by their disjoint copies, i.e. we can consider that for each i = 1, . . . , n

· the process P_i has the form P_i = (S_(i), (s_i)_(i), R_(i)), where all the sets S_(1), . . . , S_(n) are disjoint, and
· the corresponding bijection between S and S_(i) maps each state s ∈ S to a state denoted by the symbol s_(i).

Thus, we can assume that each summand a_i.P_i in the right side of (4.16) consists of a new initial state with a single transition, labelled a_i, into the state (s_i)_(i) of the copy P_i, and the sets of states of these summands are pairwise disjoint. According to the definition of the operation +, the right side of (4.16) consists of a new initial state s_0^0 with, for each i = 1, . . . , n, a transition labelled a_i into the state (s_i)_(i) of the copy P_i.

The BS between the left and right sides of (4.16) can be defined, for example, as the relation

{(s^0, s_0^0)} ∪ {(s, s_(i)) | s ∈ S, i = 1, . . . , n}

Theorem 6 (expansion theorem). Let P be a process of the form

P = P1 | . . . | Pn    (4.17)


where for each i ∈ {1, . . . , n} the process Pi has the form

Pi = a_{i1}.P_{i1} + . . . + a_{i n_i}.P_{i n_i}    (4.18)

Then P is strongly equivalent to the sum of

1. all processes of the form

a_{ij}.(P1 | . . . | P_{i-1} | P_{ij} | P_{i+1} | . . . | Pn)    (4.19)

2. and all processes of the form

τ.(P1 | . . . | P_{i-1} | P_{ik} | P_{i+1} | . . . | P_{j-1} | P_{jl} | P_{j+1} | . . . | Pn)    (4.20)

where 1 ≤ i < j ≤ n, a_{ik} ≠ τ, a_{jl} ≠ τ, and the actions a_{ik} and a_{jl} are complementary.

Proof. By theorem 5, P is strongly equivalent to a sum, each summand of which corresponds to a transition starting from the initial state s^0 of the process P: for each transition of P of the form

s^0 --a--> s

this sum contains the summand a.P(s). According to (4.18), for each i = 1, . . . , n the process Pi consists of its initial state s_i^0 together with, for each j = 1, . . . , n_i, a transition labelled a_{ij} leading from s_i^0 to the initial state s_{ij}^0 of the process P_{ij}.

Let

· S_i be the set of states of the process P_i, and
· S_{ij} (where j = 1, . . . , n_i) be the set of states of the process P_{ij}.

We can assume that S_i is a disjoint union of the form

S_i = {s_i^0} ∪ S_{i1} ∪ . . . ∪ S_{i n_i}    (4.21)

According to the description of a process of the form (4.17) presented in item 2 of section 3.7, we can assume that the components of P have the following form.

· The set of states of the process P has the form

S_1 × . . . × S_n    (4.22)

· The initial state s^0 of P is the list (s_1^0, . . . , s_n^0).

· The transitions of P starting from its initial state are as follows.

  – Transitions of the form

    s^0 --a_{ij}--> (s_1^0, . . . , s_{i-1}^0, s_{ij}^0, s_{i+1}^0, . . . , s_n^0)    (4.23)

  – Transitions of the form

    s^0 --τ--> (s_1^0, . . . , s_{i-1}^0, s_{ik}^0, s_{i+1}^0, . . . , s_{j-1}^0, s_{jl}^0, s_{j+1}^0, . . . , s_n^0)    (4.24)

    where 1 ≤ i < j ≤ n, a_{ik} ≠ τ, a_{jl} ≠ τ, and the actions a_{ik} and a_{jl} are complementary.

Thus, there is a one-to-one correspondence between

· the set of transitions of the process P starting from s^0, and


· the set of summands of the form (4.19) and (4.20).

For the proof of theorem 6 it is enough to prove that

· for each i = 1, . . . , n and each j = 1, . . . , n_i the following equivalence holds:

P((s_1^0, . . . , s_{i-1}^0, s_{ij}^0, s_{i+1}^0, . . . , s_n^0)) ∼ P1 | . . . | P_{i-1} | P_{ij} | P_{i+1} | . . . | Pn    (4.25)

· for
  – any i, j such that 1 ≤ i < j ≤ n, and
  – any k = 1, . . . , n_i and l = 1, . . . , n_j,

the following equivalence holds:

P((s_1^0, . . . , s_{i-1}^0, s_{ik}^0, s_{i+1}^0, . . . , s_{j-1}^0, s_{jl}^0, s_{j+1}^0, . . . , s_n^0)) ∼ P1 | . . . | P_{i-1} | P_{ik} | P_{i+1} | . . . | P_{j-1} | P_{jl} | P_{j+1} | . . . | Pn    (4.26)

We shall prove only (4.25) ((4.26) can be proven similarly). The set of states of the process

P1 | . . . | P_{i-1} | P_{ij} | P_{i+1} | . . . | Pn    (4.27)

has the form

S_1 × . . . × S_{i-1} × S_{ij} × S_{i+1} × . . . × S_n    (4.28)

(4.21) implies that S_{ij} ⊆ S_i, i.e. set (4.28) is a subset of set (4.22) of states of the process

P((s_1^0, . . . , s_{i-1}^0, s_{ij}^0, s_{i+1}^0, . . . , s_n^0))    (4.29)

We define the desired BS µ between processes (4.27) and (4.29) as the diagonal relation

µ = {(s, s) | s ∈ (4.28)}

Obviously,

· the pair of initial states of processes (4.27) and (4.29) belongs to µ,


· each transition of the process (4.27) is also a transition of the process (4.29), and

· if the start of some transition of the process (4.29) belongs to the subset (4.28), then the end of this transition also belongs to the subset (4.28) (to substantiate this claim we note that for each transition of P_i, if its start belongs to S_{ij}, then its end also belongs to S_{ij}).

Thus, µ is a BS, and this proves the claim (4.25).

The following theorem is a strengthening of theorem 6. To formulate it, we will use the following notation. If f : Names → Names is a renaming, then the symbol f denotes also a mapping of the form f : Act → Act defined as follows.

· ∀ α ∈ Names  f(α!) = f(α)!,  f(α?) = f(α)?
· f(τ) = τ

Theorem 7. Let P be a process of the form

P = (P1[f_1] | . . . | Pn[f_n]) \ L

where for each i ∈ {1, . . . , n}

Pi ∼ a_{i1}.P_{i1} + . . . + a_{i n_i}.P_{i n_i}

Then P is strongly equivalent to the sum of

1. all processes of the form

f_i(a_{ij}).((P1[f_1] | . . . | P_{i-1}[f_{i-1}] | P_{ij}[f_i] | P_{i+1}[f_{i+1}] | . . . | Pn[f_n]) \ L)

where a_{ij} = τ or name(f_i(a_{ij})) ∉ L, and

2. all processes of the form

τ.((P1[f_1] | . . . | P_{i-1}[f_{i-1}] | P_{ik}[f_i] | P_{i+1}[f_{i+1}] | . . . | P_{j-1}[f_{j-1}] | P_{jl}[f_j] | P_{j+1}[f_{j+1}] | . . . | Pn[f_n]) \ L)

where 1 ≤ i < j ≤ n, a_{ik} ≠ τ, a_{jl} ≠ τ, and the actions f_i(a_{ik}) and f_j(a_{jl}) are complementary.

Proof. This theorem follows directly from

· the previous theorem,
· theorem 3,
· properties 6, 9, 10, 16 and 17 from section 3.7, and
· the first assertion from theorem 4.

4.6

Recognition of strong equivalence

4.6.1 The relation µ(P1, P2)

Let P1, P2 be a couple of processes of the form

Pi = (Si, s_i^0, Ri)  (i = 1, 2)

Define an operator Φ on the set of all relations from S1 to S2, which maps each relation µ ⊆ S1 × S2 to the relation Φ(µ) ⊆ S1 × S2 defined as follows: Φ(µ) consists of all pairs (s1, s2) ∈ S1 × S2 such that for each a ∈ Act

· for each s'1 ∈ S1 with (s1 --a--> s'1) ∈ R1 there is s'2 ∈ S2 such that (s2 --a--> s'2) ∈ R2 and (s'1, s'2) ∈ µ, and
· for each s'2 ∈ S2 with (s2 --a--> s'2) ∈ R2 there is s'1 ∈ S1 such that (s1 --a--> s'1) ∈ R1 and (s'1, s'2) ∈ µ.


It is easy to prove that, for each µ ⊆ S1 × S2,

· µ satisfies conditions 1 and 2 from the definition of a BS if and only if µ ⊆ Φ(µ), and
· consequently, µ is a BS between P1 and P2 if and only if (s_1^0, s_2^0) ∈ µ and µ ⊆ Φ(µ).

It is easy to prove that the operator Φ is monotone, i.e. if µ1 ⊆ µ2, then Φ(µ1) ⊆ Φ(µ2).

Let µ_max be the union of all relations from the set

{µ ⊆ S1 × S2 | µ ⊆ Φ(µ)}    (4.30)

Note that the relation µ_max belongs to the set (4.30): from

· the inclusion µ ⊆ Φ(µ) for every µ ∈ (4.30), and
· the monotonicity of Φ

it follows that for each µ ∈ (4.30)

µ ⊆ Φ(µ) ⊆ Φ(µ_max)

so

µ_max = ∪ {µ | µ ∈ (4.30)} ⊆ Φ(µ_max)

i.e. µ_max ∈ (4.30).

Note that the following equality holds:

Φ(µ_max) = µ_max

because

· the inclusion µ_max ⊆ Φ(µ_max), and
· the monotonicity of Φ

imply the inclusion

Φ(µ_max) ⊆ Φ(Φ(µ_max))

i.e. Φ(µ_max) ∈ (4.30), whence, by virtue of the maximality of µ_max, we get the inclusion Φ(µ_max) ⊆ µ_max.

Thus, the relation µ_max is

· the greatest element of the partially ordered set (4.30) (where the partial order is the relation of inclusion), and
· the greatest fixed point of the operator Φ.

We shall denote this relation by

µ(P1, P2)    (4.31)

From theorem 2 it follows that

P1 ∼ P2  ⟺  (s_1^0, s_2^0) ∈ µ(P1, P2)

From the definition of the relation µ(P1, P2) it follows that this relation consists of all pairs (s1, s2) ∈ S1 × S2 such that

P1(s1) ∼ P2(s2)

The relation µ(P1, P2) can be considered as a similarity measure between P1 and P2.

4.6.2

A polynomial algorithm for recognizing strong equivalence

Let P1 and P2 be processes of the form

Pi = (Si, s_i^0, Ri)  (i = 1, 2)

If the sets S1 and S2 are finite, then the problem of checking the statement

P1 ∼ P2    (4.32)

obviously is algorithmically solvable: for example, one can iterate over all relations µ ⊆ S1 × S2 and for each of them verify conditions 0, 1 and 2 from the definition of a BS. The algorithm finishes its work when

· a relation µ ⊆ S1 × S2 is found which satisfies conditions 0, 1 and 2 from the definition of a BS; in this case the algorithm gives the answer P1 ∼ P2, or

· all relations µ ⊆ S1 × S2 have been checked, and none of them satisfies conditions 0, 1 and 2 from the definition of a BS; in this case the algorithm gives the answer P1 ≁ P2.

If P1 ≁ P2, then the above algorithm will give the answer only after checking all relations from S1 to S2, the number of which is

2^{|S1| · |S2|}

(where for every finite set S we denote by |S| the number of elements of S), i.e. this algorithm has exponential complexity. The problem of checking P1 ∼ P2 can be solved by a more efficient algorithm, which has polynomial complexity. To construct such an algorithm, we consider the following sequence of relations from S1 to S2:

{µ_i | i ≥ 1}    (4.33)

where µ_1 = S1 × S2, and for each i ≥ 1, µ_{i+1} = Φ(µ_i).

From

· the inclusion µ_1 ⊇ µ_2, and
· the monotonicity of the operator Φ

it follows that

µ_2 = Φ(µ_1) ⊇ Φ(µ_2) = µ_3
µ_3 = Φ(µ_2) ⊇ Φ(µ_3) = µ_4

etc. Thus, the sequence (4.33) is monotone:

µ_1 ⊇ µ_2 ⊇ . . .

Since all members of the sequence (4.33) are subsets of the finite set S1 × S2, this sequence cannot decrease infinitely; it stabilizes at some member, i.e. there is an index i ≥ 1 such that

µ_i = µ_{i+1} = µ_{i+2} = . . .

We prove that the relation µ_i (where i is the above index) coincides with the relation µ(P1, P2).

· Since Φ(µ_i) = µ_{i+1} = µ_i, i.e. µ_i is a fixed point of the operator Φ, then

µ_i ⊆ µ(P1, P2)    (4.34)

since µ(P1, P2) is the largest fixed point of the operator Φ.

· For each j ≥ 1 the inclusion

µ(P1, P2) ⊆ µ_j    (4.35)

holds, because

  – inclusion (4.35) holds for j = 1, and
  – if inclusion (4.35) holds for some j, then by the monotonicity of the operator Φ the following relations hold:

    µ(P1, P2) = Φ(µ(P1, P2)) ⊆ Φ(µ_j) = µ_{j+1}

    i.e. inclusion (4.35) holds for j + 1.

In particular, (4.35) holds for j = i. The equality

µ_i = µ(P1, P2)    (4.36)

follows from (4.34) and (4.35) for j = i. Thus, the problem of checking the statement P1 ∼ P2 can be solved by

· finding the first member µ_i of the sequence (4.33) which satisfies the condition µ_i = µ_{i+1}, and
· checking the condition

(s_1^0, s_2^0) ∈ µ_i    (4.37)

The algorithm gives the answer P1 ∼ P2 if and only if (4.37) holds.

For the calculation of the terms of the sequence (4.33) the following algorithm can be used. This algorithm computes the relation Φ(µ) (as the final value of the variable µ') for a given relation µ ⊆ S1 × S2.

    µ' := ∅
    loop for each (s1, s2) ∈ µ
        include := true
        loop for each transition s1 --a--> s'1 from R1
            found := false
            loop for each transition s2 --a--> s'2 from R2
                found := found ∨ ((s'1, s'2) ∈ µ)
            end of loop
            include := include ∧ found
        end of loop
        loop for each transition s2 --a--> s'2 from R2
            found := false
            loop for each transition s1 --a--> s'1 from R1
                found := found ∨ ((s'1, s'2) ∈ µ)
            end of loop
            include := include ∧ found
        end of loop
        if include then µ' := µ' ∪ {(s1, s2)}
    end of loop

Note that this algorithm is correct only when Φ(µ) ⊆ µ (which occurs in the case when the algorithm is used to calculate the terms of the sequence (4.33)). In the general situation the outer loop must have the form

    loop for each (s1, s2) ∈ S1 × S2

Let us estimate the complexity of the algorithm. Let A be the number

max(|Act(P1)|, |Act(P2)|) + 1

· The outer loop does no more than |S1| · |S2| iterations.
· The two loops contained in the outer loop make at most |S1| · |S2| · A iterations.

Therefore, the complexity of this algorithm can be evaluated as

O(|S1|² · |S2|² · A)

Since for the calculation of the member µ_i of the sequence (4.33) at which (4.33) stabilizes we must calculate no more than |S1| · |S2| members of this sequence, the desired relation µ_i = µ(P1, P2) can be calculated in time

O(|S1|³ · |S2|³ · A)
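The whole polynomial procedure, i.e. iterating the operator from the full relation S1 × S2 until the sequence (4.33) stabilizes and then testing the pair of initial states, can be sketched as follows (an illustration under an assumed (states, s0, transitions) encoding; the function names are ours):

```python
# One refinement step: keep a pair (s1, s2) only if every transition of
# each state can be matched by the other state within the current relation.

def refine(mu, R1, R2):
    return {(s1, s2) for (s1, s2) in mu
            if all(any(u == s2 and b == a and (t1, t2) in mu
                       for (u, b, t2) in R2)
                   for (v, a, t1) in R1 if v == s1)
            and all(any(v == s1 and a == b and (t1, t2) in mu
                        for (v, a, t1) in R1)
                    for (u, b, t2) in R2 if u == s2)}

def strong_equiv(P1, P2):
    (S1, s10, R1), (S2, s20, R2) = P1, P2
    mu = {(x, y) for x in S1 for y in S2}          # mu_1 = S1 x S2
    while True:
        nxt = refine(mu, R1, R2)
        if nxt == mu:                              # stabilization reached
            return (s10, s20) in mu
        mu = nxt

# a.(b.0 + c.0) versus a.b.0 + a.c.0: not strongly equivalent.
P1 = ({0, 1, 2, 3}, 0, {(0, "a", 1), (1, "b", 2), (1, "c", 3)})
P2 = ({0, 1, 2, 3, 4}, 0, {(0, "a", 1), (0, "a", 2), (1, "b", 3), (2, "c", 4)})
print(strong_equiv(P1, P2))                        # False
print(strong_equiv(P1, P1))                        # True
```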

4.7

Minimization of processes

4.7.1 Properties of relations of the form µ(P, P)

Theorem 8. For each process P = (S, s^0, R) the relation µ(P, P) is an equivalence.

Proof.

1. Reflexivity of the relation µ(P, P) follows from the fact that the diagonal relation

Id_S = {(s, s) | s ∈ S}

satisfies conditions 1 and 2 from the definition of a BS, i.e. Id_S ∈ (4.30).

2. Symmetry of the relation µ(P, P) follows from the fact that if a relation µ satisfies conditions 1 and 2 from the definition of a BS, then the inverse relation µ^{-1} also satisfies these conditions; that is, if µ ∈ (4.30), then µ^{-1} ∈ (4.30).


3. Transitivity of the relation µ(P, P) follows from the fact that the product µ(P, P) ∘ µ(P, P) satisfies conditions 1 and 2 from the definition of a BS, i.e.

µ(P, P) ∘ µ(P, P) ⊆ µ(P, P)

Let P' be the process whose components have the following form.

· Its states are the equivalence classes of the set S of states of P corresponding to the equivalence µ(P, P).
· Its initial state is the class [s^0] which contains the initial state s^0 of P.
· Its set of transitions consists of all transitions of the form

[s_1] --a--> [s_2]

where s_1 --a--> s_2 is an arbitrary transition from R.

The process P' is said to be a factor-process of the process P with respect to the equivalence µ(P, P).

Theorem 9. For each process P the relation

µ = {(s, [s]) | s ∈ S}

is a BS between P and P'.

Proof. We check the properties 0, 1, 2 from the definition of a BS for the relation µ. Property 0 holds by the definition of the initial state of the process P'. Property 1 holds by the definition of the set of transitions of P'. Let us prove property 2. Let P' contain a transition

[s] --a--> [s']

We prove that there is a transition in R of the form

s --a--> s''
such that (s , [s ]) µ, i.e. [s ] = [s ], i.e. (s , s ) µ(P, P ) From the definition of a set of transitions of the process P it follows that R contains a transition of the form s
a 1

E

s

1

(4.38)

where [s1 ] = [s] and [s1 ] = [s ], i.e. (s1 , s) µ(P, P ) and (s1 , s ) µ(P, P ) Since µ(P, P ) is a BS, then from · (4.38) R, and · (s1 , s) µ(P, P ) it follows that R contains a transition of the form s
a

E

s

1

(4.39)

where (s1 , s1 ) µ(P, P ). Since µ(P, P ) is transitive, then from (s1 , s1 ) µ(P, P ) and (s1 , s ) µ(P, P ) it follows that (s1 , s ) µ(P, P ) Thus, as the desired state s it can taken the state s1 . From theorem 9 it follows that for each process P P P


82
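The factor-process construction of this section can be sketched as follows (illustrative names; the equivalence µ(P, P) is passed as a set of pairs of states):

```python
# Sketch: build the factor-process of (S, s0, R) with respect to an
# equivalence mu; states of the quotient are equivalence classes,
# represented here as frozensets of their members.

def factor_process(S, s0, R, mu):
    """Quotient of the process (S, s0, R) by the equivalence mu."""
    # the class [s] of each state s
    cls = {s: frozenset(t for t in S if (s, t) in mu) for s in S}
    states = set(cls.values())
    # [s] --a--> [t] for every transition s --a--> t in R
    transitions = {(cls[s], a, cls[t]) for (s, a, t) in R}
    return states, cls[s0], transitions
```

For example, quotienting a process whose two a-successors are equivalent merges them into a single class, so the two transitions collapse into one.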


4.7.2 Minimal processes with respect to ∼

A process P is said to be minimal with respect to ∼ if

- each of its states is reachable, and
- µ(P, P) = Id_S (where S is the set of states of P).

Below, minimal processes with respect to ∼ are called simply minimal processes.

Theorem 10.
Let the processes P1 and P2 be minimal, and P1 ∼ P2. Then P1 and P2 are isomorphic.

Proof.
Suppose that Pi (i = 1, 2) has the form (Si, s0_i, Ri), and let µ ⊆ S1 × S2 be a BS between P1 and P2.

Since µ^(-1) is also a BS, and a composition of BSs is a BS,

- µ ∘ µ^(-1) is a BS between P1 and P1, and
- µ^(-1) ∘ µ is a BS between P2 and P2,

whence, using the definition of the relations µ(Pi, Pi) and the definition of a minimal process, we get the inclusions

    µ ∘ µ^(-1) ⊆ µ(P1, P1) = Id_S1
    µ^(-1) ∘ µ ⊆ µ(P2, P2) = Id_S2    (4.40)

Let us prove that the relation µ is functional, i.e. for each s ∈ S1 there is a unique element s′ ∈ S2 such that (s, s′) ∈ µ.

- If s = s0_1, then we define s′ = s0_2.
- If s ≠ s0_1 then, since every state of P1 is reachable, there is a path in P1 of the form

      s0_1 --a1--> ... --an--> s

  Since µ is a BS, there is a path in P2 of the form

      s0_2 --a1--> ... --an--> s′

  and (s, s′) ∈ µ.

Thus, in both cases there is an element s′ ∈ S2 such that (s, s′) ∈ µ.

Let us prove the uniqueness of the element s′ with the property (s, s′) ∈ µ. If there is an element s″ ∈ S2 such that (s, s″) ∈ µ, then (s′, s) ∈ µ^(-1), which implies

    (s′, s″) ∈ µ^(-1) ∘ µ = Id_S2

so s′ = s″.

For similar reasons, the relation µ^(-1) is also functional. From conditions (4.40) it is easy to deduce the bijectivity of the mapping which corresponds to the relation µ. By the definition of a BS, this implies that P1 and P2 are isomorphic.

Theorem 11.
Let

- a process P2 be obtained from a process P1 by removing unreachable states, and
- P3 be the factor-process of P2, i.e. P3 = P̃2.

Then the process P3 is minimal, and

    P1 ∼ P2 ∼ P3

Proof.
Since each state of P2 is reachable, it follows from the definition of the transitions of a factor-process that each state of P3 is also reachable.

Now we prove that

    µ(P3, P3) = Id_S3    (4.41)

i.e. we suppose that (s′, s″) ∈ µ(P3, P3), and prove that s′ = s″.

From the definition of a factor-process it follows that there are states s1, s2 ∈ S2 such that

    s′ = [s1]    s″ = [s2]

where [·] denotes an equivalence class with respect to µ(P2, P2). From theorem 9 it follows that

    (s1, s′) ∈ µ(P2, P3)    (s″, s2) ∈ µ(P3, P2)

Since a composition of BSs is also a BS, the composition

    µ(P2, P3) ∘ µ(P3, P3) ∘ µ(P3, P2)    (4.42)

is a BS between P2 and P2, so

    (4.42) ⊆ µ(P2, P2)    (4.43)

Since (s1, s2) ∈ (4.42), then, in view of (4.43), we get

    s′ = [s1] = [s2] = s″

In conclusion, we note that

- the statement P1 ∼ P2 is obvious, and
- the statement P2 ∼ P3 follows from theorem 9.

4.7.3 An algorithm for minimizing finite processes

The algorithm described in section 4.6.2 can be used to solve the problem of minimization of finite processes, which has the following form: for a given finite process P, build a process Q with the smallest number of states which is strongly equivalent to P.

To build the process Q, first a process P′ is constructed, obtained from P by removing unreachable states. The process Q has the form P̃′ (the factor-process of P′).

The set of states of the process P′ can be constructed as follows. Let P have the form

    P = (S, s0, R)

Consider the sequence of subsets of the set S

    S0 ⊆ S1 ⊆ S2 ⊆ ...    (4.44)

defined as follows.

- S0 = {s0}
- For each i ≥ 0 the set S(i+1) is obtained from Si by adding all states s′ ∈ S such that

      ∃ s ∈ Si, ∃ a ∈ Act : (s --a--> s′) ∈ R

Since S is finite, the sequence (4.44) cannot increase infinitely. Let Si be the member of the sequence (4.44) at which this sequence stabilizes. It is obvious that

- all states from Si are reachable, and
- all states from S \ Si are unreachable.

Therefore the set of states of the process P′ is the set Si.

Let S′ be the set of states of the process P′. Note that for the computation of the relation µ(P′, P′) it is necessary to calculate no more than |S′| members of the sequence (4.33), because

- each relation in the sequence (4.33) is an equivalence (since if a binary relation µ on the set of states of a process is an equivalence, then the relation µ′ is also an equivalence), and
- each member of the sequence (4.33) defines a partitioning of the set S′, and for each i ≥ 1, if µ(i+1) ≠ µi, then the partitioning corresponding to µ(i+1) is a refinement of the partitioning corresponding to µi,

and it is easy to show that the number of such refinements is no more than |S′|.

Theorem 12.
The process P̃′ has the smallest number of states among all finite processes that are strongly equivalent to P.

Proof.
Let

- P1 be a finite process such that P1 ∼ P, and
- P1′ be the reachable part of P1.

As was established above,

    P1 ∼ P1′ ∼ P̃1′    (4.45)

Since P ∼ P′ ∼ P̃′ and P ∼ P1, it follows that

    P̃′ ∼ P̃1′

As was proved in theorem 11, the processes P̃′ and P̃1′ are minimal. From this and from (4.45), by virtue of theorem 10, we get that the processes P̃′ and P̃1′ are isomorphic. In particular, they have the same number of states. Since

- the number of states of the process P̃1′ does not exceed the number of states of the process P1′ (since the states of P̃1′ are classes of a partitioning of the set of states of P1′), and
- the number of states of the process P1′ does not exceed the number of states of the process P1 (since the set of states of P1′ is a subset of the set of states of P1),

it follows that the number of states of the process P̃′ does not exceed the number of states of the process P1.
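The computation of the chain S0 ⊆ S1 ⊆ ... described in section 4.7.3 is an ordinary breadth-first traversal; a minimal sketch (illustrative names):

```python
from collections import deque

# The visited set of this traversal is, on termination, exactly the
# stabilised member Si of the chain, i.e. the set of reachable states.

def reachable(s0, R):
    """States reachable from s0; R is a set of (source, action, target)."""
    seen, queue = {s0}, deque([s0])
    while queue:
        s = queue.popleft()
        for (p, a, t) in R:
            if p == s and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen
```

Removing the unreachable states then amounts to restricting the state set and the transition set to the returned set.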

4.8 Observational equivalence

4.8.1 Definition of observational equivalence

Another variant of the concept of equivalence of processes is observational equivalence. This concept is used in those situations where we consider the internal action τ as negligible, and consider two traces as the same if one of them can be obtained from the other by insertions and/or deletions of internal actions τ.

For the definition of the concept of observational equivalence we introduce auxiliary notations. Let P and P′ be processes.

1. The notation

       P ==> P′    (4.46)

   means that

   - either P = P′,
   - or there is a sequence of processes P1, ..., Pn (n ≥ 2) such that
     - P1 = P, Pn = P′,
     - for each i = 1, ..., n - 1: Pi --τ--> P(i+1).

   (4.46) can be interpreted as the statement that the process P may imperceptibly turn into the process P′.

2. For every action a ∈ Act \ {τ} the notation

       P ==a==> P′    (4.47)

   means that there are processes P1 and P2 with the following properties:

       P ==> P1,    P1 --a--> P2,    P2 ==> P′

   (4.47) can be interpreted as the statement that the process P may

   - execute a sequence of actions such that
     - the action a belongs to this sequence, and
     - all other actions in this sequence are internal,

   and then

   - turn into the process P′.

   If (4.47) holds, then we say that the process P may observably execute the action a, and then turn into the process P′.

The concept of observational equivalence is based on the following understanding of equivalence of processes: if we consider processes P1 and P2 as equivalent, then they must satisfy the following conditions.


1. If one of these processes Pi may imperceptibly turn into some process Pi′, then the other process Pj (j ∈ {1, 2} \ {i}) also must be able to imperceptibly turn into some process Pj′ which is equivalent to Pi′.

2. If one of these processes Pi may

   - observably execute some action a ∈ Act \ {τ}, and then
   - turn into a process Pi′,

   then the other process Pj (j ∈ {1, 2} \ {i}) must be able to

   - observably execute the same action a, and then
   - turn into a process Pj′ which is equivalent to Pi′.

Using the notations (4.46) and (4.47), the above informally described concept of observational equivalence can be expressed formally as a binary relation µ on the set of all processes which has the following properties.

(1) If (P1, P2) ∈ µ, and for some process P1′

        P1 ==> P1′    (4.48)

    then there is a process P2′ such that

        P2 ==> P2′    (4.49)

    and

        (P1′, P2′) ∈ µ    (4.50)

(2) The symmetric property: if (P1, P2) ∈ µ, and for some process P2′

        P2 ==> P2′    (4.51)

    then there is a process P1′ such that

        P1 ==> P1′    (4.52)

    and (4.50).

(3) If (P1, P2) ∈ µ, and for some process P1′

        P1 ==a==> P1′    (4.53)

    then there is a process P2′ such that

        P2 ==a==> P2′    (4.54)

    and (4.50).

(4) The symmetric property: if (P1, P2) ∈ µ, and for some process P2′

        P2 ==a==> P2′    (4.55)

    then there is a process P1′ such that

        P1 ==a==> P1′    (4.56)

    and (4.50).

Let M be the set of all binary relations on the set of processes which have the above properties. The set M is not empty: it contains, for example, the diagonal relation, which consists of all pairs (P, P), where P is an arbitrary process.

As in the case of strong equivalence, the natural question arises as to which relation within the set M can be used for a definition of the concept of observational equivalence. Just as in the case of strong equivalence, we offer the following answer to this question: we will consider P1 and P2 as observationally equivalent if and only if there is a relation µ ∈ M that contains the pair (P1, P2), i.e. we define the relation of observational equivalence on the set of all processes as the union of all relations from M. This relation is denoted by the symbol ≈.

It is easy to prove that

- ≈ ∈ M,
- ≈ is an equivalence relation, because
  - reflexivity of ≈ follows from the fact that the diagonal relation belongs to M,
  - symmetry of ≈ follows from the fact that if µ ∈ M, then µ^(-1) ∈ M,
  - transitivity of ≈ follows from the fact that if µ1 ∈ M and µ2 ∈ M, then µ1 ∘ µ2 ∈ M.

If processes P1 and P2 are observationally equivalent, then this fact is denoted by

    P1 ≈ P2

It is easy to prove that if processes P1 and P2 are strongly equivalent, then they are observationally equivalent.
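The weak transition relations ==> and ==a==> introduced above can be computed directly for a finite process; a sketch, with the internal action τ represented (in this sketch only) by the string 'tau':

```python
# Sketch of the weak transition relations of section 4.8.1.

def tau_closure(states, R, tau='tau'):
    """All pairs (s, s') with s ==> s' (zero or more tau-transitions)."""
    reach = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for (p, a, q) in R:
            if a != tau:
                continue
            for s in states:
                if p in reach[s] and q not in reach[s]:
                    reach[s].add(q)
                    changed = True
    return {(s, t) for s in states for t in reach[s]}

def weak_step(states, R, a, tau='tau'):
    """All pairs (s, s') with s ==a==> s' (a != tau):
    s ==> s1, then s1 --a--> s2, then s2 ==> s'."""
    eps = tau_closure(states, R, tau)
    return {(s, t) for (s, s1) in eps
                   for (p, b, s2) in R if p == s1 and b == a
                   for (q, t) in eps if q == s2}
```

These two relations are exactly what the conditions (1)-(4) above quantify over.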

4.8.2 Logical criterion of observational equivalence

A logical criterion of observational equivalence is similar to the analogous criterion from section 4.4.1. This criterion uses the same set Fm of formulas. The notion of a value of a formula on a process differs from the analogous notion in section 4.4.1 only for formulas of the form ⟨a⟩φ:

- the value of the formula ⟨τ⟩φ on the process P is equal to

      P(⟨τ⟩φ) = 1, if there is a process P′ : P ==> P′, P′(φ) = 1
                0, otherwise

- the value of the formula ⟨a⟩φ (where a ≠ τ) on P is equal to

      P(⟨a⟩φ) = 1, if there is a process P′ : P ==a==> P′, P′(φ) = 1
                0, otherwise

For each process P the notation Th≈(P) denotes the set of all formulas which have the value 1 on the process P (with respect to the modified definition of the notion of a value of a formula on a process).

Theorem 13.
Let P1 and P2 be finite processes. Then

    P1 ≈ P2  ⟺  Th≈(P1) = Th≈(P2)

As in the case of ∼, there is the problem of finding, for two given processes P1 and P2, a list of formulas φ1, ..., φn of the smallest size such that P1 ≈ P2 if and only if

    ∀ i = 1, ..., n : P1(φi) = P2(φi)

Using theorem 13, we can easily prove that for each process P

    P ≈ τ.P    (4.57)

Note that

- according to (4.57), the statement 0 ≈ τ.0 holds;
- however, the statement

      0 + a.0 ≈ τ.0 + a.0  (where a ≠ τ)    (4.58)

  does not hold, which is easy to see by considering the graph representations of the left and right sides of (4.58). [The graph pictures are omitted: the left process can only execute a, while the right process may also imperceptibly pass to the state 0, in which a is no longer possible.]

A formula which takes different values on these processes may have, for example, the following form:

    ¬⟨τ⟩¬⟨a⟩⊤

Thus, the relation ≈ is not a congruence, as it does not preserve the operation +.

Another example: if a, b ∈ Act \ {τ} and a ≠ b, then

    a.0 + b.0 ≉ τ.a.0 + τ.b.0

although a.0 ≈ τ.a.0 and b.0 ≈ τ.b.0. [The graph representations of these processes are omitted.]

The fact that these processes are not observationally equivalent is substantiated by the formula

    ⟨τ⟩¬⟨a⟩⊤

4.8.3 A criterion of observational equivalence based on the concept of an observational BS

For the relation ≈ there is an analog of the criterion based on the concept of a BS (theorem 2 in section 4.4.2). For its formulation we introduce auxiliary notations.

Let P = (S, s0, R) be a process, and s, s′ a pair of its states. Then

- the notation

      s ==> s′

  means that

  - either s = s′,
  - or there is a sequence of states s1, ..., sn (n ≥ 2) such that s1 = s, sn = s′, and

        ∀ i = 1, ..., n - 1 : (si --τ--> s(i+1)) ∈ R

- the notation

      s ==a==> s′  (where a ≠ τ)

  means that there are states s1 and s2 such that

      s ==> s1,    s1 --a--> s2,    s2 ==> s′.

Theorem 14.
Let P1 and P2 be processes of the form

    Pi = (Si, s0_i, Ri)    (i = 1, 2)

Then P1 ≈ P2 if and only if there is a relation µ ⊆ S1 × S2 satisfying the following conditions.

0. (s0_1, s0_2) ∈ µ.

1. For each pair (s1, s2) ∈ µ and each transition from R1 of the form

       s1 --τ--> s1′

   there is a state s2′ ∈ S2 such that

       s2 ==> s2′

   and

       (s1′, s2′) ∈ µ    (4.59)

2. For each pair (s1, s2) ∈ µ and each transition from R2 of the form

       s2 --τ--> s2′

   there is a state s1′ ∈ S1 such that

       s1 ==> s1′

   and (4.59).

3. For each pair (s1, s2) ∈ µ and each transition from R1 of the form

       s1 --a--> s1′  (a ≠ τ)

   there is a state s2′ ∈ S2 such that

       s2 ==a==> s2′

   and (4.59).

4. For each pair (s1, s2) ∈ µ and each transition from R2 of the form

       s2 --a--> s2′  (a ≠ τ)

   there is a state s1′ ∈ S1 such that

       s1 ==a==> s1′

   and (4.59).

A relation µ satisfying these conditions is called an observational BS (OBS) between P1 and P2.
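Conditions 0-4 of theorem 14 can be checked mechanically for a given finite relation µ; a sketch (illustrative names; the internal action is assumed, in this sketch only, to be the string 'tau'):

```python
# Sketch: check that mu is an observational BS between two processes,
# each given as a triple (states, initial_state, transitions).

def is_obs(mu, P1, P2, tau='tau'):
    (S1, s01, R1), (S2, s02, R2) = P1, P2

    def eps(S, R):
        # s ==> s': reflexive-transitive closure of the tau-transitions
        reach = {s: {s} for s in S}
        changed = True
        while changed:
            changed = False
            for (p, a, q) in R:
                if a == tau:
                    for s in S:
                        if p in reach[s] and q not in reach[s]:
                            reach[s].add(q)
                            changed = True
        return reach

    e1, e2 = eps(S1, R1), eps(S2, R2)

    def answers(target, s, R, e, a, pair):
        # can s weakly perform a (just s ==> t when a == tau),
        # reaching some t with pair(target, t) in mu?
        if a == tau:
            return any(pair(target, t) in mu for t in e[s])
        return any(pair(target, t) in mu
                   for m in e[s]
                   for (p, b, q) in R if p == m and b == a
                   for t in e[q])

    fwd = lambda x, y: (x, y)   # answers to R1-moves: pair is (s1', s2')
    bwd = lambda x, y: (y, x)   # answers to R2-moves: pair is (s1', s2')

    if (s01, s02) not in mu:                  # condition 0
        return False
    for (s1, s2) in mu:
        for (p, a, q) in R1:                  # conditions 1 and 3
            if p == s1 and not answers(q, s2, R2, e2, a, fwd):
                return False
        for (p, a, q) in R2:                  # conditions 2 and 4
            if p == s2 and not answers(q, s1, R1, e1, a, bwd):
                return False
    return True
```

For instance, the processes a.0 and τ.a.0 are related by such an OBS, in accordance with (4.57).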

4.8.4 Algebraic properties of observational equivalence

Theorem 15.
The relation ≈ of observational equivalence preserves all operations on processes except the operation +, i.e. if P1 ≈ P2, then

- for each a ∈ Act: a.P1 ≈ a.P2
- for each process P: P1 | P ≈ P2 | P
- for each L ⊆ Names: P1 \ L ≈ P2 \ L
- for each renaming f: P1[f] ≈ P2[f]

Proof.
As was established in section 4.8.3, the statement P1 ≈ P2 is equivalent to the following statement: there is an OBS µ between P1 and P2. Using this µ, we construct OBSs justifying each of the foregoing statements.

- Let s0_(1) and s0_(2) be the initial states of the processes a.P1 and a.P2 respectively. Then the relation

      {(s0_(1), s0_(2))} ∪ µ

  is an OBS between a.P1 and a.P2.

- Let S be the set of states of the process P. Then the relation

      {((s1, s), (s2, s)) | (s1, s2) ∈ µ, s ∈ S}

  is an OBS between P1 | P and P2 | P.

- The relation µ is an OBS
  - between P1 \ L and P2 \ L, and
  - between P1[f] and P2[f].

4.8.5 Recognition of observational equivalence and minimization of processes with respect to ≈

The problems of

1. recognizing, for two given finite processes, whether they are observationally equivalent, and
2. constructing, for a given finite process P, a process Q that has the smallest number of states among all processes which are observationally equivalent to P,

can be solved on the basis of a theory that is analogous to the theory contained in sections 4.6 and 4.7. We will not explain this theory in detail, because it is analogous to the theory for the case of ∼.

In this theory, for any pair of processes

    Pi = (Si, s0_i, Ri)    (i = 1, 2)

an operator on relations from S1 to S2 is also defined, which maps each relation µ ⊆ S1 × S2 to a relation µ* such that

    µ* satisfies conditions 1, 2, 3, 4 from the definition of an OBS
    µ* ⊆ µ

In particular,

    µ is an OBS between P1 and P2  ⟺  (s0_1, s0_2) ∈ µ and µ ⊆ µ*

Let µ≈(P1, P2) be the union of all relations from the set

    {µ ⊆ S1 × S2 | µ ⊆ µ*}    (4.60)

The relation µ≈(P1, P2) is the greatest element (with respect to inclusion) of the set (4.60), and has the property

    P1 ≈ P2  ⟺  (s0_1, s0_2) ∈ µ≈(P1, P2)

From the definition of the relation µ≈(P1, P2) it follows that it consists of all pairs (s1, s2) ∈ S1 × S2 such that

    P1(s1) ≈ P2(s2)

The relation µ≈(P1, P2) can be considered as another similarity measure between P1 and P2.

There is a polynomial algorithm for computing the relation µ≈(P1, P2). This algorithm is similar to the corresponding algorithm from section 4.6.2. For the construction of this algorithm the following observation should be taken into account: for checking the condition

    s ==> s′

(where s, s′ are states of a process P) it is enough to analyze sequences of transitions of the form

    s --τ--> s1 --τ--> s2 --τ--> ...

whose length does not exceed the number of states of the process P.
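The polynomial computation of the greatest relation satisfying conditions 1-4 of the OBS definition can be sketched as a refinement over weak transitions (illustrative names; 'tau' stands for the internal action in this sketch only):

```python
# Sketch: compute the greatest relation mu with mu ⊆ mu* by iterated
# refinement, matching every strong move by a weak answer inside mu.

def obs_relation(S1, R1, S2, R2, tau='tau'):
    def eps(S, R):
        # tau-closure: for each s, the set of s' with s ==> s'
        reach = {s: {s} for s in S}
        changed = True
        while changed:
            changed = False
            for (p, a, q) in R:
                if a == tau:
                    for s in S:
                        if p in reach[s] and q not in reach[s]:
                            reach[s].add(q)
                            changed = True
        return reach

    e1, e2 = eps(S1, R1), eps(S2, R2)

    def weak_answers(s, a, R, e):
        # targets of s ==a==> (a != tau), or of s ==> (a == tau)
        if a == tau:
            return set(e[s])
        return {t for m in e[s]
                  for (p, b, q) in R if p == m and b == a
                  for t in e[q]}

    mu = {(s1, s2) for s1 in S1 for s2 in S2}
    while True:
        def ok(s1, s2):
            for (p, a, q) in R1:
                if p == s1 and not any((q, t) in mu
                                       for t in weak_answers(s2, a, R2, e2)):
                    return False
            for (p, a, q) in R2:
                if p == s2 and not any((t, q) in mu
                                       for t in weak_answers(s1, a, R1, e1)):
                    return False
            return True
        nxt = {(s1, s2) for (s1, s2) in mu if ok(s1, s2)}
        if nxt == mu:
            return mu
        mu = nxt
```

The two processes are observationally equivalent exactly when the pair of initial states belongs to the result.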

4.8.6 Other criteria of equivalence of processes

For proving that processes P1 and P2 are strongly equivalent or observationally equivalent, the following criteria can be used. In some cases, the use of these criteria for proving the appropriate equivalence between P1 and P2 is much easier than all other methods.

A binary relation µ on the set of processes is said to be

- a BS (mod ∼), if the relation ∼ µ ∼ satisfies the conditions from the definition of a BS,
- an OBS (mod ∼), if the relation ∼ µ ∼ satisfies the conditions from the definition of an OBS,
- an OBS (mod ≈), if the relation ≈ µ ≈ satisfies the conditions from the definition of an OBS.

It is easy to prove that

- if µ is a BS (mod ∼), then µ ⊆ ∼, and
- if µ is an OBS (mod ∼ or mod ≈), then µ ⊆ ≈.

Thus, to prove P1 ∼ P2 or P1 ≈ P2 it is enough to find a suitable

- BS (mod ∼), or
- OBS (mod ∼ or mod ≈)

respectively, such that (P1, P2) ∈ µ.

4.9 Observational congruence

4.9.1 A motivation of the concept of observational congruence

As stated above, a concept of equivalence of processes can be defined in more than one way. In the previous sections different types of equivalence of processes have already been considered. Each of these equivalences reflects a certain point of view on which types of behavior should be considered as equal.

In addition to these concepts of equivalence of processes, one can define, for example, such concepts of equivalence that

- take into account the duration of an execution of actions, i.e., in particular, one of the conditions of equivalence of processes P1 and P2 can be as follows:
  - if one of these processes Pi may, within some period of time, imperceptibly turn into a process Pi′,
  - then the other process Pj (j ∈ {1, 2} \ {i}) must be able, in approximately the same amount of time, to imperceptibly turn into a process Pj′ which is equivalent to Pi′ (where the concept of "approximately the same amount of time" can be clarified in different ways);

- or take into account the property of fairness, i.e. processes cannot be considered as equivalent if
  - one of them is fair, and
  - the other is not fair,

  where one of the possible definitions of fairness of processes is as follows: a process is said to be fair if there is no infinite sequence of transitions of the form

      s0 --τ--> s1 --τ--> s2 --τ--> ...

  such that the state s0 is reachable, and for each i ≥ 0

      Act(si) \ {τ} ≠ ∅

  Note that observational equivalence does not take into account the property of fairness: there are two processes P1 and P2 such that

  - P1 ≈ P2, but
  - P1 is fair, and P2 is not fair.

  For example:
  - P1 = a.0, where a ≠ τ,
  - P2 = a.0 | Q, where Q is a process that has one state and one transition, with the label τ;

- etc.

In every particular situation, a decision about which concept of equivalence of processes is best used essentially depends on the purposes for which this concept is intended.

In this section we define another kind of equivalence of processes, called observational congruence. This equivalence is denoted by ≈⁺. We define this equivalence based on the following conditions that it must satisfy.

1. Processes that are equivalent with respect to ≈⁺ must be observationally equivalent.

2. Let

   - a process P be constructed as a composition of processes P1, ..., Pn that uses the operations

         a., +, |, \L, [f]    (4.61)

   - and let us replace one of the components of this composition (for example, the process Pi) by another process Pi′ which is equivalent to Pi.

   The process obtained from P by this replacement must be equivalent to the original process P.

It is easy to prove that an equivalence µ on the set of processes satisfies the above conditions if and only if

    µ ⊆ ≈
    µ is a congruence with respect to the operations (4.61)    (4.62)

There are several equivalences which satisfy conditions (4.62). For example,

- the diagonal relation (consisting of pairs of the form (P, P)), and
- strong equivalence (∼)

satisfy these conditions.

Below we prove that among all equivalences satisfying conditions (4.62) there is a greatest one (with respect to inclusion). It is natural to consider this equivalence as the desired equivalence ≈⁺.


4.9.2 Definition of a concept of observational congruence

To define the concept of observational congruence, we introduce an auxiliary notation. Let P and P′ be a couple of processes. The notation

    P ==>⁺ P′

means that there is a sequence of processes P1, ..., Pn (n ≥ 2) such that

- P1 = P, Pn = P′, and
- for each i = 1, ..., n - 1: Pi --τ--> P(i+1).

We shall say that processes P1 and P2 are in the relation of observational congruence, and denote this fact by

    P1 ≈⁺ P2

if the following conditions hold.

(0) P1 ≈ P2.

(1) If a process P1′ is such that

        P1 --τ--> P1′    (4.63)

    then there is a process P2′ such that

        P2 ==>⁺ P2′    (4.64)

    and

        P1′ ≈ P2′    (4.65)

(2) The symmetric condition: if a process P2′ is such that

        P2 --τ--> P2′    (4.66)

    then there is a process P1′ such that

        P1 ==>⁺ P1′    (4.67)

    and (4.65).

It is easy to prove that observational congruence is an equivalence relation.

4.9.3 Logical criterion of observational congruence

A logical criterion of observational congruence of two processes is produced by a slight modification of the logical criterion of observational equivalence from section 4.8.2.

The set of formulas Fm⁺ which is used in this criterion is an extension of the set of formulas Fm from section 4.4.2. Fm⁺ is obtained from Fm by adding a modal connective ⟨τ⟩⁺. The set Fm⁺ is defined as follows.

- Every formula from Fm belongs to Fm⁺.
- For every formula φ ∈ Fm the string ⟨τ⟩⁺φ is a formula from Fm⁺.

For every formula φ ∈ Fm⁺ and every process P, the value of φ on P is denoted by P(φ) and is defined as follows.

- If φ ∈ Fm, then P(φ) is defined as in section 4.8.2.
- If φ = ⟨τ⟩⁺ψ, where ψ ∈ Fm, then

      P(φ) = 1, if there is a process P′ : P ==>⁺ P′, P′(ψ) = 1
             0, otherwise

For each process P we denote by Th⁺(P) the set of all formulas φ ∈ Fm⁺ such that P(φ) = 1.

Theorem 16.
Let P1 and P2 be finite processes. Then

    P1 ≈⁺ P2  ⟺  Th⁺(P1) = Th⁺(P2)

As in the case of ∼ and ≈, there is the problem of finding, for two given processes P1 and P2, a list of formulas φ1, ..., φn ∈ Fm⁺ of the smallest size such that P1 ≈⁺ P2 if and only if

    ∀ i = 1, ..., n : P1(φi) = P2(φi)

4.9.4 Criterion of observational congruence based on the concept of observational BS

We shall use the following notation. Let

- P be a process of the form (S, s0, R), and
- s, s′ be a pair of states from S.

Then the notation

    s ==>⁺ s′

means that there is a sequence of states s1, ..., sn (n ≥ 2) such that s1 = s, sn = s′, and for each i = 1, ..., n - 1

    (si --τ--> s(i+1)) ∈ R

Theorem 17.
Let P1, P2 be a pair of processes of the form

    Pi = (Si, s0_i, Ri)    (i = 1, 2)

The statement P1 ≈⁺ P2 holds if and only if there is a relation µ ⊆ S1 × S2 satisfying the following conditions.

0. µ is an OBS between P1 and P2 (the concept of an OBS is described in section 4.8.3).

1. For each transition from R1 of the form

       s0_1 --τ--> s1

   there is a state s2 ∈ S2 such that

       s0_2 ==>⁺ s2

   and

       (s1, s2) ∈ µ    (4.68)

2. For each transition from R2 of the form

       s0_2 --τ--> s2

   there is a state s1 ∈ S1 such that

       s0_1 ==>⁺ s1

   and (4.68).

Below, the string OBS⁺ is an abbreviated notation for the phrase "an OBS satisfying conditions 1 and 2 of theorem 17".
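The extra root conditions 1-2 of theorem 17 can be checked separately, on top of a given OBS; a sketch (illustrative names; 'tau' again stands for the internal action):

```python
# Sketch: check the root conditions of theorem 17 for processes given as
# triples (states, initial_state, transitions) and a relation mu.

def tau_plus(s0, S, R, tau='tau'):
    """States s with s0 ==>+ s (one or more tau-transitions)."""
    frontier = {q for (p, a, q) in R if p == s0 and a == tau}
    result = set()
    while frontier:
        s = frontier.pop()
        if s in result:
            continue
        result.add(s)
        frontier |= {q for (p, a, q) in R if p == s and a == tau}
    return result

def root_conditions(mu, P1, P2, tau='tau'):
    (S1, s01, R1), (S2, s02, R2) = P1, P2
    t1 = tau_plus(s01, S1, R1, tau)
    t2 = tau_plus(s02, S2, R2, tau)
    # every initial tau-move of one process must be answered by a
    # nonempty tau-sequence of the other, landing inside mu
    ok1 = all(any((q, s2) in mu for s2 in t2)
              for (p, a, q) in R1 if p == s01 and a == tau)
    ok2 = all(any((s1, q) in mu for s1 in t1)
              for (p, a, q) in R2 if p == s02 and a == tau)
    return ok1 and ok2
```

On the pair a.0 and τ.a.0 the check fails (the answering ==>⁺ needs at least one τ-step), which reflects the fact that these processes are observationally equivalent but not observationally congruent.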

4.9.5 Algebraic properties of observational congruence

Theorem 18.
The observational congruence ≈⁺ is a congruence with respect to all operations on processes, i.e. if P1 ≈⁺ P2, then

- for each a ∈ Act: a.P1 ≈⁺ a.P2
- for each process P: P1 + P ≈⁺ P2 + P
- for each process P: P1 | P ≈⁺ P2 | P
- for each L ⊆ Names: P1 \ L ≈⁺ P2 \ L
- for each renaming f: P1[f] ≈⁺ P2[f]

Proof.
As was stated in section 4.9.4, the statement P1 ≈⁺ P2 holds if and only if there is an OBS⁺ µ between P1 and P2. Using this µ, for each of the above statements we justify the statement by the construction of a corresponding OBS⁺.

- Let s0_(1) and s0_(2) be the initial states of the processes a.P1 and a.P2 respectively. Then the relation

      {(s0_(1), s0_(2))} ∪ µ

  is an OBS⁺ between a.P1 and a.P2.

- Let
  - s0_(1) and s0_(2) be the initial states of P1 + P and P2 + P respectively, and
  - S denote the set of states of the process P.

  Then the relation

      {(s0_(1), s0_(2))} ∪ µ ∪ Id_S

  is an OBS⁺ between P1 + P and P2 + P.

- Let S be the set of states of the process P. Then the relation

      {((s1, s), (s2, s)) | (s1, s2) ∈ µ, s ∈ S}

  is an OBS⁺ between P1 | P and P2 | P.

- The relation µ is an OBS⁺
  - between P1 \ L and P2 \ L, and
  - between P1[f] and P2[f].

Theorem 19.


For any processes P1 and P2:

    P1 ≈ P2  ⟺  ( P1 ≈⁺ P2,  or  P1 ≈⁺ τ.P2,  or  τ.P1 ≈⁺ P2 )

Proof.
The implication "⟸" follows from

- the inclusion ≈⁺ ⊆ ≈, and
- the fact that for any process P

      P ≈ τ.P    (4.69)

Let us prove the implication "⟹". Suppose that

    P1 ≈ P2    (4.70)

and it is not true that

    P1 ≈⁺ P2    (4.71)

(4.71) can occur, for example, in the following case:

    there is a process P1′ such that P1 --τ--> P1′    (4.72)

and

    there is no process P2′ ≈ P1′ such that P2 ==>⁺ P2′    (4.73)

We shall prove that in this case

    P1 ≈⁺ τ.P2

According to the definition of observational congruence, we must prove that conditions (0), (1) and (2) from this definition are satisfied.

(0): P1 ≈ τ.P2. This condition follows from (4.70) and (4.69).

(1): if there is a process P1′ such that

    P1 --τ--> P1′    (4.74)

then there is a process P2′ ≈ P1′ such that

    τ.P2 ==>⁺ P2′    (4.75)

From (4.70), (4.74), and the definition of observational equivalence it follows that there is a process P2′ ≈ P1′ such that

    P2 ==> P2′    (4.76)

(4.75) follows from τ.P2 --τ--> P2 and (4.76).

(2): if there is a process P2′ such that

    τ.P2 --τ--> P2′    (4.77)

then there is a process P1′ ≈ P2′ such that

    P1 ==>⁺ P1′

From the definition of the operation of prefix action and from (4.77) we get the equality

    P2′ = P2

Thus, we must prove that

    for some process P1′ ≈ P2 the formula P1 ==>⁺ P1′ holds    (4.78)

Let P1′ be the process referred to in the assumption (4.72). From the assumption (4.70) we get:

    there is a process P2′ ≈ P1′ such that P2 ==> P2′    (4.79)

Comparing (4.79) and (4.73), we get the equality P2′ = P2, i.e. we have proved (4.78).

(4.71) may also be true for the reason that

- there is a process P2′ such that P2 --τ--> P2′, and
- there is no process P1′ ≈ P2′ such that P1 ==>⁺ P1′.

In this case, by similar reasoning, it can be proven that

    τ.P1 ≈⁺ P2

Theorem 20.
The relation ≈⁺ coincides with the relation

    {(P1, P2) | ∀ P : P1 + P ≈ P2 + P}    (4.80)

Proof.
The inclusion ≈⁺ ⊆ (4.80) follows from the fact that

- ≈⁺ is a congruence (i.e., in particular, it preserves the operation +), and
- ≈⁺ ⊆ ≈.

Let us prove the inclusion (4.80) ⊆ ≈⁺. Let (P1, P2) ∈ (4.80). Since for each process P the following statement holds:

    P1 + P ≈ P2 + P    (4.81)

then, setting P = 0 in (4.81), we get

    P1 + 0 ≈ P2 + 0    (4.82)

Since

- for each process P the following statement holds: P + 0 ∼ P,
- and, furthermore, ∼ ⊆ ≈,

then from (4.82) we get

    P1 ≈ P2    (4.83)

If it is not true that P1 ≈⁺ P2, then from (4.83), by theorem 19, we get that

- either P1 ≈⁺ τ.P2,
- or τ.P1 ≈⁺ P2.

Consider, for example, the case

    P1 ≈⁺ τ.P2    (4.84)

(the other case is considered analogously). Since ≈⁺ is a congruence, from (4.84) it follows that for any process P

    P1 + P ≈⁺ τ.P2 + P    (4.85)

From

- (4.81), (4.85), and
- the inclusion ≈⁺ ⊆ ≈,

it follows that for any process P

    P2 + P ≈ τ.P2 + P    (4.86)

Let us prove that

    P2 ≈⁺ τ.P2    (4.87)

(4.87) is equivalent to the following statement:

    there is a process P2′ ≈ P2 such that P2 ==>⁺ P2′    (4.88)

Since the set Names is infinite (by an assumption from section 2.3), there is an action b ∈ Act \ {τ} which does not occur in P2. The statement (4.86) must be true in the case when P has the form b.0, i.e. the following statement must be true:

    P2 + b.0 ≈ τ.P2 + b.0    (4.89)

Since

    τ.P2 + b.0 --τ--> P2

then

- from (4.89), and
- from the definition of the relation ≈,

it follows that there is a process P2′ ≈ P2 such that

    P2 + b.0 ==> P2′    (4.90)

The case P2 + b.0 = P2′ is impossible, because

- the left side of this equality contains the action b, and
- the right side of this equality does not contain the action b.

Consequently, by (4.90), we get the statement

    P2 + b.0 ==>⁺ P2′    (4.91)

From the definition of the operation +, it follows that (4.91) is possible if and only if (4.88) holds. Thus, we have proved that there is a process P2′ ≈ P2 such that (4.88) holds, i.e. we have proved (4.87).

(4.84) and (4.87) imply that P1 ≈⁺ P2.

Theorem 21.
≈⁺ is the greatest congruence contained in ≈, i.e. for each congruence µ on the set of all processes the following implication holds:

    µ ⊆ ≈  ⟹  µ ⊆ ≈⁺

Proof.
Let us prove that if (P1, P2) ∈ µ, then P1 ≈⁺ P2. Let (P1, P2) ∈ µ. Since µ is a congruence, for each process P

    (P1 + P, P2 + P) ∈ µ    (4.92)

If µ ⊆ ≈, then from (4.92) it follows that for each process P

    P1 + P ≈ P2 + P    (4.93)

According to theorem 20, (4.93) implies that P1 ≈⁺ P2.

Theorem 22.


The relations ∼, ≈⁺ and ≈ have the following property:

    ∼ ⊆ ≈⁺ ⊆ ≈    (4.94)

Proof.
The inclusion ≈⁺ ⊆ ≈ holds by the definition of ≈⁺.

The inclusion ∼ ⊆ ≈⁺ follows from

- the inclusion ∼ ⊆ ≈, and
- the fact that if processes P1, P2 are such that P1 ∼ P2, then this pair of processes satisfies the conditions from the definition of the relation ≈⁺.

Note that both inclusions in (4.94) are proper:

- a.τ.0 ≈⁺ a.0, but a.τ.0 ≁ a.0
- τ.0 ≈ 0, but τ.0 ≉⁺ 0

Theorem 23.

1. If P1 ≈ P2, then for each a ∈ Act

       a.P1 ≈⁺ a.P2

   In particular, for each process P

       a.τ.P ≈⁺ a.P    (4.95)

2. For any process P

       P + τ.P ≈⁺ τ.P    (4.96)

3. For any processes P1 and P2, and any a ∈ Act

       a.(P1 + τ.P2) + a.P2 ≈⁺ a.(P1 + τ.P2)    (4.97)

4. For any processes P1 and P2

       P1 + τ.(P1 + P2) ≈⁺ τ.(P1 + P2)    (4.98)

Proof.
For each of the above statements we shall construct an OBS⁺ between its left and right sides.

1. As was stated in theorem 14 (section 4.8.3), the statement P1 ≈ P2 is equivalent to the statement that there is an OBS µ between P1 and P2. Let s0_(1) and s0_(2) be the initial states of the processes a.P1 and a.P2 respectively. Then the relation

       {(s0_(1), s0_(2))} ∪ µ

   is an OBS⁺ between a.P1 and a.P2.

   (4.95) follows from

   - the above statement, and
   - the statement τ.P ≈ P, which holds according to (4.57).

2. Let P have the form P = (S, s0, R), and let S_(1) and S_(2) be duplicates of the set S in the processes P and τ.P respectively, which are contained in the left side of the statement (4.96). Elements of these duplicates will be denoted by s_(1) and s_(2) respectively, where s is an arbitrary element of the set S.

   Let s0_l and s0_r be the initial states of the processes in the left and right sides of (4.96) respectively. Then the relation

       {(s0_l, s0_r)} ∪ {(s_(i), s) | s ∈ S, i = 1, 2}

   is an OBS⁺ between the left and right sides of the statement (4.96).

3. Let Pi = (Si, s0_i, Ri) (i = 1, 2). We can assume that S1 ∩ S2 = ∅. Let

   - s0 be the initial state of the process

         P1 + τ.P2    (4.99)

   - s0′ be the initial state of the process

         a.(P1 + τ.P2)    (4.100)

   Note that (4.100) coincides with the right side of (4.97). The left side of (4.97) is strongly equivalent to the process P′ which is obtained from (4.100) by adding the transition

       s0′ --a--> s0_2

   It is easy to make sure of this by considering the graph representation of the process P′. [The picture is omitted: from its initial state s0′ the process P′ can pass by the action a both to the state s0, from which a τ-transition leads to s0_2, and directly to the state s0_2.]

   It is easy to prove that the process P′ is observationally congruent to the process (4.100). The sets of states of these processes can be considered as duplicates S_(1) and S_(2) of one and the same set S, and an OBS⁺ between P′ and (4.100) has the form

       {(s_(1), s_(2)) | s ∈ S}    (4.101)

   Since

   - according to theorem 22, we have the inclusion ∼ ⊆ ≈⁺, and
   - (4.100) coincides with the right side of (4.97),

   we have proved that the left and right sides of the statement (4.97) are observationally congruent.

4. The reasonings in this case are similar to the reasonings in the previous case. We will not explain them in detail; we only note the following.

   - The left side of the statement (4.98) is strongly equivalent to a process P′. [Its graph representation is omitted; in it,
     - s0_1 and s0_2 are the initial states of the processes P1 and P2, and
     - s0_12 is the initial state of the process P1 + P2.]
   - The right side of the statement (4.98) (which we denote by P″) is obtained from P′ by removing the transitions of the form

         s0 --a--> s

     outgoing from the initial state s0 that correspond to the transitions of the process P1.

   It is easy to prove that P′ ≈⁺ P″. The sets of states of these processes can be considered as duplicates S_(1) and S_(2) of one and the same set S, and an OBS⁺ between P′ and P″ has the form (4.101).


4.9.6  Recognition of observational congruence

To decide, for two given finite processes, whether they are observationally congruent, the following theorem can be used.

Theorem 24. Let P₁ and P₂ be finite processes. The statement P₁ ≈⁺ P₂ holds if and only if
· (s⁰₁, s⁰₂) ∈ µ≈(P₁, P₂), and
· µ≈(P₁, P₂) is an OBS⁺.
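Theorem 24 presupposes that the relation µ≈ on the states of the two processes is available. As an illustration only (not the book's algorithm), the following Python sketch computes the greatest weak bisimulation on a finite transition system by deleting violating pairs until the relation stabilizes; all function names and the string 'tau' for the invisible action are our own conventions. The additional initial-step condition of observational congruence would be checked on top of the computed relation.

```python
from itertools import product

def tau_closure(trans, frontier):
    """States reachable from `frontier` by zero or more tau-transitions."""
    seen, stack = set(frontier), list(frontier)
    while stack:
        x = stack.pop()
        for (p, act, q) in trans:
            if p == x and act == 'tau' and q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def weak_succ(trans, s, a):
    """States reachable from s by tau* a tau* (just tau* if a == 'tau')."""
    pre = tau_closure(trans, {s})
    if a == 'tau':
        return pre
    mid = {q for (p, act, q) in trans if p in pre and act == a}
    return tau_closure(trans, mid)

def weak_bisim(states, trans):
    """Greatest weak bisimulation: start from all pairs, delete violators."""
    rel = set(product(states, states))
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            ok = True
            for (p, a, q) in trans:
                if p == s and not any((q, q2) in rel
                                      for q2 in weak_succ(trans, t, a)):
                    ok = False
                    break
                if p == t and not any((q2, q) in rel
                                      for q2 in weak_succ(trans, s, a)):
                    ok = False
                    break
            if not ok:
                rel.discard((s, t))
                changed = True
    return rel

# a.tau.0 versus a.0: their initial states are weakly bisimilar
trans = [('u0', 'a', 'u1'), ('u1', 'tau', 'u2'),   # a.tau.0
         ('v0', 'a', 'v1')]                        # a.0
rel = weak_bisim(['u0', 'u1', 'u2', 'v0', 'v1'], trans)
print(('u0', 'v0') in rel)   # True
```

This pair-deletion loop is quadratic in the number of pairs per round and is meant only to make the definition concrete; the polynomial algorithm of section 4.6 refines partitions instead.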

4.9.7  Minimization of processes with respect to observational congruence

To solve the problem of minimizing finite processes with respect to observational congruence, the following theorems can be used.

Theorem 25. Let P = (S, s⁰, R) be a process. Define a factor-process P/≈ of the process P with respect to the equivalence µ≈(P, P) as a process with the following components.
· States of P/≈ are equivalence classes of the set S with respect to the equivalence µ≈(P, P).
· An initial state of P/≈ is the class [s⁰].
· Transitions of the process P/≈ have the form

      [s₁] −a→ [s₂]

  where s₁ −a→ s₂ is an arbitrary transition from R.
Then P ≈⁺ P/≈.

Theorem 26. Let P′ be the process which is obtained from the factor-process P/≈ by removing unreachable states. Then P′ has the smallest number of states among all processes that are observationally congruent to P.
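The factor-process of theorem 25 is a plain quotient construction. A minimal Python sketch, with our own encoding of a process as a (states, initial state, transitions) collection; the `classes` map, which stands for the equivalence µ≈(P, P), is supplied by hand in this toy example:

```python
def factor_process(s0, trans, classes):
    """Factor-process of theorem 25: states are equivalence classes,
    the initial state is the class of s0, and [s1] --a--> [s2] for
    every original transition s1 --a--> s2.  `classes` maps each state
    to a hashable identifier of its equivalence class."""
    q_states = set(classes.values())
    q_s0 = classes[s0]
    q_trans = {(classes[s1], a, classes[s2]) for (s1, a, s2) in trans}
    return q_states, q_s0, q_trans

# toy example: the two branches of a.b.0 + a.b.0 are equivalent
trans = [('s0', 'a', 's1'), ('s1', 'b', 's2'),
         ('s0', 'a', 't1'), ('t1', 'b', 't2')]
classes = {'s0': 'C0', 's1': 'C1', 't1': 'C1', 's2': 'C2', 't2': 'C2'}
qs, q0, qt = factor_process('s0', trans, classes)
print(len(qs), q0, sorted(qt))
# 3 C0 [('C0', 'a', 'C1'), ('C1', 'b', 'C2')]
```

Note how the duplicated branch collapses: the quotient has 3 states and 2 transitions, and (after removing unreachable classes, per theorem 26) is the minimal representative.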


Chapter 5

Recursive definitions of processes

In some cases it is more convenient to describe a process by a recursive definition, instead of an explicit description of the sets of its states and transitions. In the present chapter we introduce a method of description of processes by recursive definitions.

5.1  Process expressions

In order to formulate a notion of a recursive description of a process we introduce a notion of a process expression. A set PE of process expressions (PE) is defined inductively, i.e. we define
· elementary PEs, and
· rules for constructing new PEs from existing ones.
Elementary PEs have the following form.

process constants: We assume that there is given a countable set of process constants, and each of them is associated with a certain process, which is called a value of this constant. Each process constant is a PE.
There is a process constant whose value is the empty process 0. This constant is denoted by the same symbol 0.

process names: We assume that there is given a countable set of process names, and each process name is a PE.

Rules for constructing new PEs from existing ones have the following form.

prefix action: For each a ∈ Act and each PE P the string a.P is a PE.

choice: For any pair of PEs P₁, P₂ the string P₁ + P₂ is a PE.

parallel composition: For any pair of PEs P₁, P₂ the string P₁ | P₂ is a PE.

restriction: For each subset L ⊆ Names and each PE P the string P \ L is a PE.

renaming: For each renaming f and each PE P the string P[f] is a PE.
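The inductive definition above transliterates directly into an algebraic datatype: one constructor per formation rule. A Python sketch using dataclasses (all class and field names here are ours, not the book's):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:        # process constant (denotes a fixed process; 0 is one)
    name: str

@dataclass(frozen=True)
class Name:         # process name, to be bound by a recursive definition
    name: str

@dataclass(frozen=True)
class Prefix:       # a.P
    action: str
    body: object

@dataclass(frozen=True)
class Choice:       # P1 + P2
    left: object
    right: object

@dataclass(frozen=True)
class Par:          # P1 | P2
    left: object
    right: object

@dataclass(frozen=True)
class Restrict:     # P \ L
    body: object
    names: frozenset

@dataclass(frozen=True)
class Rename:       # P[f], the renaming given as (old, new) name pairs
    body: object
    f: tuple

# the right-hand side of the one-equation RD  A = in?.out!.A
rhs = Prefix('in?', Prefix('out!', Name('A')))
print(rhs)
```

Freezing the dataclasses makes PEs hashable and comparable by structure, which is convenient when PEs later serve as states of a transition system (section 5.7).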

5.2  A notion of a recursive definition of processes

A recursive definition (RD) of processes is a list of formal equations of the form

    A₁ = P₁
    ...                                                               (5.1)
    Aₙ = Pₙ

where
· A₁, ..., Aₙ are different process names, and
· P₁, ..., Pₙ are PEs satisfying the following condition: for every i = 1, ..., n each process name which has an occurrence in Pᵢ coincides with one of the names A₁, ..., Aₙ.
We shall assume that for each process name A there is a unique RD such that A has an occurrence in this RD.
In section 5.5 we define a correspondence which associates with each PE P some process [[P]]. To define this correspondence, we shall first give
· a notion of an embedding of processes, and
· a notion of a limit of a sequence of embedded processes.

5.3  Embedding of processes

Let P₁ and P₂ be processes of the form

    Pᵢ = (Sᵢ, s⁰ᵢ, Rᵢ)   (i = 1, 2)                                    (5.2)

The process P₁ is said to be embedded in the process P₂ if there is an injective mapping f : S₁ → S₂ such that
· f(s⁰₁) = s⁰₂, and
· for any s′, s′′ ∈ S₁ and any a ∈ Act

      (s′ −a→ s′′) ∈ R₁   implies   (f(s′) −a→ f(s′′)) ∈ R₂

For each pair of processes P₁, P₂ the notation

    P₁ ⊑ P₂

is an abridged notation of the statement that P₁ is embedded in P₂.
If the processes P₁ and P₂ have the form (5.2), and P₁ ⊑ P₂, then we can identify P₁ with its image in P₂, i.e. we can assume that
· S₁ ⊆ S₂
· s⁰₁ = s⁰₂
· R₁ ⊆ R₂.

Theorem 27. Let P₁ ⊑ P₂. Then
· a.P₁ ⊑ a.P₂
· P₁ + P ⊑ P₂ + P
· P₁ | P ⊑ P₂ | P
· P₁ \ L ⊑ P₂ \ L
· P₁[f] ⊑ P₂[f].

Below we consider expressions which are built from
· processes, and
· symbols of operations on processes (a., +, |, \L, [f]).
We call such expressions expressions over processes. For each expression over processes there is defined a process which is a value of this expression. In the following reasoning we shall denote an expression over processes and its value by the same symbol.

Theorem 28. Let
· P be an expression over processes,
· P₁, ..., Pₙ be a list of all processes occurring in P,
· P′₁, ..., P′ₙ be a list of processes such that ∀ i = 1, ..., n  Pᵢ ⊑ P′ᵢ,
· P′ be an expression which is obtained from P by replacing, for each i = 1, ..., n, each occurrence of the process Pᵢ by the corresponding process P′ᵢ.
Then P ⊑ P′.

Proof.
This theorem is proved by induction on the structure of the expression P. We prove that for each subexpression Q of the expression P

    Q ⊑ Q′                                                            (5.3)

where Q′ is the subexpression of the expression P′ which corresponds to the subexpression Q.

base of induction: If Q = Pᵢ, then Q′ = P′ᵢ, and (5.3) holds by assumption.

inductive step: From theorem 27 it follows that for each subexpression Q of the expression P the following implication holds: if for each proper subexpression Q₁ of Q the statement Q₁ ⊑ Q′₁ holds, then (5.3) holds.

Thus, (5.3) holds for each subexpression Q of P. In particular, (5.3) holds for P.

5.4  A limit of a sequence of embedded processes

Let {Pₖ | k ≥ 0} be a sequence of processes such that

    ∀ k ≥ 0   Pₖ ⊑ Pₖ₊₁                                               (5.4)

A sequence {Pₖ | k ≥ 0} satisfying condition (5.4) is called a sequence of embedded processes.
Define a process lim Pₖ (as k → ∞), which is called a limit of the sequence of embedded processes {Pₖ | k ≥ 0}. Let the processes Pₖ (k ≥ 0) have the form

    Pₖ = (Sₖ, s⁰ₖ, Rₖ)

By reason of (5.4), we can assume that ∀ k ≥ 0
· Sₖ ⊆ Sₖ₊₁
· s⁰ₖ = s⁰ₖ₊₁
· Rₖ ⊆ Rₖ₊₁
i.e. the components of the processes Pₖ (k ≥ 0) have the following properties:
· S₀ ⊆ S₁ ⊆ S₂ ⊆ ...
· s⁰₀ = s⁰₁ = s⁰₂ = ...
· R₀ ⊆ R₁ ⊆ R₂ ⊆ ...
The process lim Pₖ has the form

    ( ⋃ₖ≥₀ Sₖ ,  s⁰₀ ,  ⋃ₖ≥₀ Rₖ )

It is easy to prove that for each k ≥ 0

    Pₖ ⊑ lim Pₖ

Theorem 29. Let {Pₖ | k ≥ 0} and {Qₖ | k ≥ 0} be sequences of embedded processes. Then
· lim (a.Pₖ) = a.(lim Pₖ)
· lim (Pₖ + Qₖ) = (lim Pₖ) + (lim Qₖ)
· lim (Pₖ | Qₖ) = (lim Pₖ) | (lim Qₖ)
· lim (Pₖ \ L) = (lim Pₖ) \ L
· lim (Pₖ[f]) = (lim Pₖ)[f]
(all limits are taken as k → ∞).

Let
· P be a PE,
· A₁, ..., Aₙ be a list of all process names occurring in P.
Then for every n-tuple of processes P₁, ..., Pₙ the notation

    P(P₁/A₁, ..., Pₙ/Aₙ)

denotes the expression over processes (as well as its value) obtained from P by replacing, for each i = 1, ..., n, each occurrence of the process name Aᵢ by the corresponding process Pᵢ.

Theorem 30. Let
· P be a PE, and
· A₁, ..., Aₙ be a list of all process names occurring in P.
Then for every list of sequences of embedded processes of the form

    {P₁⁽ᵏ⁾ | k ≥ 0},  ...,  {Pₙ⁽ᵏ⁾ | k ≥ 0}

the following equality holds:

    P((lim P₁⁽ᵏ⁾)/A₁, ..., (lim Pₙ⁽ᵏ⁾)/Aₙ) = lim P(P₁⁽ᵏ⁾/A₁, ..., Pₙ⁽ᵏ⁾/Aₙ)

Proof. This theorem is proved by induction on the structure of the PE P, using theorem 29.

5.5  Processes defined by process expressions

In this section we describe a rule which associates with each PE P a process [[P]], which is defined by this PE.
If P is a process constant, then [[P]] is the value of this constant.
If P has one of the forms

    a.P₁,  P₁ + P₂,  P₁ | P₂,  P₁ \ L,  P₁[f]

then [[P]] is the result of applying the corresponding operation to the process [[P₁]] or to the pair of processes ([[P₁]], [[P₂]]), i.e.

    [[a.P]]     =def  a.[[P]]
    [[P₁ + P₂]] =def  [[P₁]] + [[P₂]]
    [[P₁ | P₂]] =def  [[P₁]] | [[P₂]]
    [[P \ L]]   =def  [[P]] \ L
    [[P[f]]]    =def  [[P]][f]

We now describe a rule that associates processes with process names. Let {Aᵢ = Pᵢ | i = 1, ..., n} be a RD. Define a sequence of lists of processes

    {(P₁⁽ᵏ⁾, ..., Pₙ⁽ᵏ⁾) | k ≥ 0}                                      (5.5)

as follows:
· P₁⁽⁰⁾ =def 0, ..., Pₙ⁽⁰⁾ =def 0
· if the processes P₁⁽ᵏ⁾, ..., Pₙ⁽ᵏ⁾ are already defined, then for each i = 1, ..., n

      Pᵢ⁽ᵏ⁺¹⁾ =def Pᵢ(P₁⁽ᵏ⁾/A₁, ..., Pₙ⁽ᵏ⁾/Aₙ)

We prove that for each k ≥ 0 and each i = 1, ..., n

    Pᵢ⁽ᵏ⁾ ⊑ Pᵢ⁽ᵏ⁺¹⁾                                                    (5.6)

The proof proceeds by induction on k.

base of induction: If k = 0, then by definition Pᵢ⁽⁰⁾ coincides with the process 0, which can be embedded in any process.

inductive step: Suppose that for each i = 1, ..., n  Pᵢ⁽ᵏ⁻¹⁾ ⊑ Pᵢ⁽ᵏ⁾. By definition of the processes from the sequence (5.5), the following equalities hold:

    Pᵢ⁽ᵏ⁾   = Pᵢ(P₁⁽ᵏ⁻¹⁾/A₁, ..., Pₙ⁽ᵏ⁻¹⁾/Aₙ)
    Pᵢ⁽ᵏ⁺¹⁾ = Pᵢ(P₁⁽ᵏ⁾/A₁, ..., Pₙ⁽ᵏ⁾/Aₙ)

The statement Pᵢ⁽ᵏ⁾ ⊑ Pᵢ⁽ᵏ⁺¹⁾ follows from theorem 28.

Define for each i = 1, ..., n the process [[Aᵢ]] as the limit

    [[Aᵢ]] =def lim Pᵢ⁽ᵏ⁾   (k → ∞)

From theorem 30 it follows that for each i = 1, ..., n the following chain of equalities holds:

    Pᵢ([[A₁]]/A₁, ..., [[Aₙ]]/Aₙ)
      = Pᵢ((lim P₁⁽ᵏ⁾)/A₁, ..., (lim Pₙ⁽ᵏ⁾)/Aₙ)
      = lim Pᵢ(P₁⁽ᵏ⁾/A₁, ..., Pₙ⁽ᵏ⁾/Aₙ)
      = lim Pᵢ⁽ᵏ⁺¹⁾ = [[Aᵢ]]

i.e. the list of processes [[A₁]], ..., [[Aₙ]] is a solution of the system of equations which corresponds to the RD

    A₁ = P₁
    ...
    Aₙ = Pₙ

(variables of this system of equations are the process names A₁, ..., Aₙ).
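The approximation sequence (5.5) can be made concrete for the one-equation RD A = a.A. A Python sketch with our own encoding of a process as a triple (states, initial state, transitions); after identifying each iterate with its image in the next one (section 5.3), the k-th iterate is a chain of k a-transitions, and the conditions of section 5.4 hold literally:

```python
def iterate_rd_chain(a, k):
    """k-th approximation P(k) for the one-equation RD  A = a.A:
    P(0) = 0, P(k+1) = a.P(k).  Up to the identification of section 5.3,
    P(k) is the chain 0 -a-> 1 -a-> ... -a-> k with initial state 0."""
    states = set(range(k + 1))
    trans = {(i, a, i + 1) for i in range(k)}
    return states, 0, trans

S2, i2, R2 = iterate_rd_chain('a', 2)
S3, i3, R3 = iterate_rd_chain('a', 3)
# the embedding conditions: states grow, initial state fixed, transitions grow
assert S2 <= S3 and i2 == i3 and R2 <= R3
# the limit [[A]] is the infinite chain with states {0, 1, 2, ...}
print(sorted(R3))   # [(0, 'a', 1), (1, 'a', 2), (2, 'a', 3)]
```

The limit itself is infinite and cannot be materialized; what the code exhibits is exactly the monotone chain S₀ ⊆ S₁ ⊆ ..., R₀ ⊆ R₁ ⊆ ... whose union defines [[A]].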

5.6  Equivalence of RDs

Suppose that there is given a couple of RDs of the form

    A₁ = P₁⁽¹⁾              A₁ = P₁⁽²⁾
    ...           and       ...                                       (5.7)
    Aₙ = Pₙ⁽¹⁾              Aₙ = Pₙ⁽²⁾

For each n-tuple of processes Q₁, ..., Qₙ the string

    Pᵢ⁽ʲ⁾(Q₁, ..., Qₙ)

denotes the following expression over processes (and its value):

    Pᵢ⁽ʲ⁾(Q₁/A₁, ..., Qₙ/Aₙ)   (i = 1, ..., n;  j = 1, 2)

Let µ be an equivalence on the set of all processes. RDs (5.7) are said to be equivalent with respect to µ, if for
· each n-tuple of processes Q₁, ..., Qₙ, and
· each i = 1, ..., n
the following statement holds:

    (Pᵢ⁽¹⁾(Q₁, ..., Qₙ), Pᵢ⁽²⁾(Q₁, ..., Qₙ)) ∈ µ

Theorem 31. Let µ be a congruence on the set of all processes. For every couple of RDs of the form (5.7) which are equivalent with respect to µ, the processes defined by these RDs, i.e. {[[Aᵢ]]⁽¹⁾ | i = 1, ..., n} and {[[Aᵢ]]⁽²⁾ | i = 1, ..., n}, are also equivalent with respect to µ, i.e.

    ∀ i = 1, ..., n   ([[Aᵢ]]⁽¹⁾, [[Aᵢ]]⁽²⁾) ∈ µ


5.7  Transitions on PE

There is another way of defining a correspondence between PEs and processes. This method is related to the concept of transitions on the set PE. Every such transition is a triple of the form (P, a, P′), where P, P′ ∈ PE and a ∈ Act. We shall represent a transition (P, a, P′) by the diagram

    P −a→ P′                                                          (5.8)

We shall define the set of transitions on PE inductively, i.e.
· some transitions will be described explicitly, and
· other transitions will be described in terms of inference rules.
In this section we assume that each process is a value of some process constant.

Explicit transitions are defined as follows.
1. If P is a process constant, then

       P −a→ P′

   where P′ is a process constant such that
   · the values of P and P′ have the form (S, s₀, R) and (S, s₁, R) respectively, and
   · R contains the transition s₀ −a→ s₁.
2. a.P −a→ P, for any a.P ∈ PE.

Inference rules for constructing new transitions on PE from existing ones are defined as follows.
1. If P −a→ P′, then
   · P + Q −a→ P′, and Q + P −a→ P′
   · P | Q −a→ P′ | Q, and Q | P −a→ Q | P′
   · if L ⊆ Names, a ≠ τ, and name(a) ∉ L, then P \ L −a→ P′ \ L
   · for each renaming f, P[f] −f(a)→ P′[f].
2. If a ≠ τ, then from

       P₁ −a→ P′₁   and   P₂ −ā→ P′₂

   it follows that

       P₁ | P₂ −τ→ P′₁ | P′₂

3. For each RD (5.1) and each i ∈ {1, ..., n}:

       if Pᵢ −a→ P′, then Aᵢ −a→ P′                                   (5.9)

For each PE P ∈ PE, the process [[P]] which corresponds to this PE has the form

    (PE, P, R)

where R is the set of all transitions on PE.

Theorem 32. For each RD (5.1) and each i = 1, ..., n the following statement holds:

    [[Aᵢ]] ~ Pᵢ([[A₁]]/A₁, ..., [[Aₙ]]/Aₙ)

(i.e. the list of processes [[A₁]], ..., [[Aₙ]] is a solution, with respect to ~, of the system of equations which corresponds to RD (5.1)).
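The explicit transitions and inference rules above can be packaged as a successor function on process expressions. A Python sketch under our own tuple encoding of PEs; for brevity, process constants and renaming are omitted, and actions are strings ending in '?' or '!' (plus the internal action 'tau'):

```python
def complement(a):
    # 'x?' and 'x!' are complementary; name('x?') = name('x!') = 'x'
    return a[:-1] + ('!' if a.endswith('?') else '?')

def name(a):
    return a[:-1]

def succ(p, defs):
    """All transitions (a, p') of the PE p, following the rules of this
    section.  `defs` maps process names to their defining PEs (rule (5.9))."""
    kind = p[0]
    out = []
    if kind == 'prefix':                        # a.P --a--> P
        _, a, body = p
        out.append((a, body))
    elif kind == 'choice':                      # P+Q inherits moves of both
        _, l, r = p
        out += succ(l, defs) + succ(r, defs)
    elif kind == 'par':
        _, l, r = p
        ls, rs = succ(l, defs), succ(r, defs)
        out += [(a, ('par', l2, r)) for (a, l2) in ls]
        out += [(a, ('par', l, r2)) for (a, r2) in rs]
        # synchronization of complementary visible actions yields tau
        out += [('tau', ('par', l2, r2))
                for (a, l2) in ls for (b, r2) in rs
                if a != 'tau' and b == complement(a)]
    elif kind == 'restrict':                    # P \ L
        _, body, L = p
        out += [(a, ('restrict', b2, L)) for (a, b2) in succ(body, defs)
                if a == 'tau' or name(a) not in L]
    elif kind == 'name':                        # rule (5.9): A moves as body
        out += succ(defs[p[1]], defs)
    return out

defs = {'A': ('prefix', 'in?', ('prefix', 'out!', ('name', 'A')))}
pe = ('restrict', ('par', ('name', 'A'), ('prefix', 'in!', ('nil',))), {'in'})
print(succ(pe, defs))   # a single tau-move: A and in!.0 synchronize on 'in'
```

The process [[P]] of this section is then the transition system whose states are PEs, whose initial state is P, and whose transition relation is generated by iterating `succ`.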


5.8  A method of proof of equivalence of processes with use of RDs

One possible method of proving an equivalence (~, ≈ or ≈⁺) between two processes consists in constructing an appropriate RD such that both of these processes are components, with the same indices, of some solutions of the system of equations related to this RD. The corresponding equivalences are substantiated by theorem 33. To formulate this theorem, we introduce the following auxiliary notion.
Let µ be a binary relation on the set of all processes, and let there be given an RD of the form (5.1). A list of processes defined by the RD is said to be unique up to µ if for each pair of lists of processes

    (Q₁⁽¹⁾, ..., Qₙ⁽¹⁾)   and   (Q₁⁽²⁾, ..., Qₙ⁽²⁾)

which satisfy the condition: ∀ i = 1, ..., n

    ([[Qᵢ⁽¹⁾]], Pᵢ(Q₁⁽¹⁾/A₁, ..., Qₙ⁽¹⁾/Aₙ)) ∈ µ
    ([[Qᵢ⁽²⁾]], Pᵢ(Q₁⁽²⁾/A₁, ..., Qₙ⁽²⁾/Aₙ)) ∈ µ

the following statement holds:

    ∀ i = 1, ..., n   ([[Qᵢ⁽¹⁾]], [[Qᵢ⁽²⁾]]) ∈ µ

Theorem 33. Let there be given a RD of the form (5.1).
1. If each occurrence of each process name Aᵢ in each PE Pⱼ is contained in a subexpression of the form a.Q, then the list of processes defined by this RD is unique up to ~.
2. If
   · each occurrence of each process name Aᵢ in each PE Pⱼ is contained in a subexpression of the form a.Q, where a ≠ τ, and
   · each occurrence of each process name Aᵢ in each PE Pⱼ is contained only in subexpressions of the forms a.Q and Q₁ + Q₂
then the list of processes defined by this RD is unique up to ≈⁺.


5.9  Problems related to RDs

1. Recognition of existence of finite processes that are equivalent (with respect to ~, ≈, ≈⁺) to processes of the form [[A]].
2. Construction of algorithms for finding minimal processes which are equivalent to processes of the form [[A]], in the case when these processes are finite.
3. Recognition of equivalence of processes of the form [[A]] (these processes can be infinite, and the methods of chapter 4 are not appropriate for them).
4. Recognition of equivalence of RDs.
5. Finding necessary and sufficient conditions for uniqueness of a list of processes defined by a RD (up to ~, ≈, ≈⁺).


Chapter 6

Examples of proofs of properties of processes

6.1  Flow graphs

In this section we describe the notion of a flow graph, which is intended to enhance visibility and to facilitate understanding of the relationship between components of complex processes. Each example of a complex process considered in this book will be accompanied by a flow graph which corresponds to this process.
Let P₁, ..., Pₙ be a list of processes. A structural composition of the processes P₁, ..., Pₙ is an expression SC over processes such that
· SC contains only processes from the list P₁, ..., Pₙ, and
· each symbol of an operation occurring in SC is a symbol of one of the following operations:
  – parallel composition,
  – restriction,
  – renaming.
Each structural composition SC can be associated with a diagram which is called a flow graph (FG) of SC. A FG of a structural composition SC is defined by induction on the structure of SC as follows.

1. If SC consists of only a process Pᵢ, then the FG of SC is an oval, inside of which the identifier of this process is written. On the border of this oval circles are drawn, which are called ports. Each port corresponds to some input or output action a ∈ Act(Pᵢ), and
· an identifier of this action is written near the port, as a label of the port,
· if a is an input action, then the port is white,
· if a is an output action, then the port is black.
For every a ∈ Act(Pᵢ) \ {τ} there is a unique port on the oval such that its label is a.
2. If SC = SC₁ | SC₂, then a FG of SC is obtained as a disjoint union of the FGs of SC₁ and SC₂, with labelled arrows drawn on the disjoint union: for
· every black port p₁ on one of these FGs, and
· every white port p₂ on the other of these FGs,
such that the labels of these ports are complementary actions, an arrow is drawn from p₁ to p₂ with the label name(a), where a is the label of p₁.
3. If SC = SC₁ \ L, then a FG of SC is obtained from a FG of SC₁ by removal of the labels of the ports whose names belong to L.
4. If SC = SC₁[f], then a FG of SC is obtained from a FG of SC₁ by a corresponding renaming of the labels of the ports.
If P is a process which is equal to the value of a structural composition SC, then the notation FG(P) denotes a FG of SC.
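Item 2 of the definition can be sketched as a computation of the arrows of a flow graph from the port sets of the components. A small Python sketch (our own encoding of ports as action strings ending in '?' or '!'; the example data anticipates the jobshop of section 6.2):

```python
def flow_graph_edges(components):
    """Arrows of a flow graph: for every output port a! of one component
    and input port a? of another, draw an arrow labelled name(a).
    `components` maps a component identifier to its set of actions."""
    edges = []
    for c1, acts1 in components.items():
        for c2, acts2 in components.items():
            if c1 == c2:
                continue
            for a in acts1:
                if a.endswith('!') and (a[:-1] + '?') in acts2:
                    edges.append((c1, c2, a[:-1]))
    return edges

components = {
    'Jobber1': {'in?', 'out!', 'get_and_work!', 'put!'},
    'Jobber2': {'in?', 'out!', 'get_and_work!', 'put!'},
    'Mallet':  {'get_and_work?', 'put?'},
}
for e in sorted(flow_graph_edges(components)):
    print(e)   # each Jobber gets arrows to Mallet for get_and_work and put
```

Restriction (item 3) would then erase the labels of the ports named in L, leaving only the internal arrows between the components.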

6.2  Jobshop

Consider a model of a jobshop which employs two workers, who share one mallet in their work.
The behavior of each worker in the jobshop is described by the following process Jobber (its graph representation is the cycle):

    Jobber −in?→ Start −get_and_work!→ Using −put!→ Finish −out!→ Jobber

where
· the actions in? and out! are used for interaction of a worker with a client, and denote
  – receiving of a material, and
  – issuance of a finished product
  respectively,
· the actions get_and_work! and put! are used for interaction of a worker with a mallet, and denote
  – taking the mallet and working with it, and
  – returning the mallet
  respectively.
The action get_and_work! consists of several elementary actions. We do not detail them and combine them into one action.
According to the definition of the process Jobber, a worker works as follows:
· at first he accepts a material
· then he takes the mallet and works
· then he puts down the mallet
· then he gives out the finished product
· and all these actions are repeated.
The behavior of the mallet is presented by the following process Mallet:

    Mallet −get_and_work?→ Busy −put?→ Mallet

(note that the object "mallet" and the process "Mallet" are different concepts).
The behavior of the jobshop is described by the process Jobshop:

    Jobshop = (Jobber | Jobber | Mallet) \ L

where L = {get_and_work, put}.
The flow graph of the process Jobshop (figure omitted) consists of the ovals of the two Jobber processes and the Mallet process; each Jobber is connected to the Mallet by arrows labelled get_and_work and put, and the ports in and out of the two Jobbers remain unconnected.

We now introduce the notion of an abstract worker, about whom we know only that he cyclically
· accepts a material and
· gives out finished products
but nothing is known about the details of his work. The behavior of the abstract worker is described by the following process Abs_Jobber:

    Abs_Jobber −in?→ Doing −out!→ Abs_Jobber

The behavior of an abstract jobshop is described by the following process Abs_Jobshop:

    Abs_Jobshop = Abs_Jobber | Abs_Jobber

The process Abs_Jobshop is used as a specification of the jobshop. This process describes the behavior of the jobshop without details of its implementation.
We prove that the process Jobshop meets its specification, i.e.

    Jobshop ≈⁺ Abs_Jobshop                                            (6.1)

The process Abs_Jobshop is a parallel composition of two processes Abs_Jobber. In order to avoid conflicts of notation, we choose different identifiers for the states of these processes. Suppose, for example, that these processes have the form

    Aᵢ −in?→ Dᵢ −out!→ Aᵢ   (i = 1, 2)

The parallel composition of these processes has the four states (A₁,A₂), (A₁,D₂), (D₁,A₂), (D₁,D₂), with an in?-transition and a converse out!-transition between every pair of states that differ in exactly one component (figure omitted).
Applying to this process the procedure of minimization with respect to observational equivalence, we get the process (figure omitted) which is a chain of three states with

    in?-transitions forward and out!-transitions backward              (6.2)

i.e. each in? increases, and each out! decreases, the number of busy workers.
The process Jobshop has 4 · 4 · 2 = 32 states, and we do not present it here because of its bulkiness. After minimization of this process with respect to observational equivalence, we get a process which is isomorphic to process (6.2). This means that the following statement holds:

    Jobshop ≈ Abs_Jobshop                                             (6.3)

Because there are no transitions with the label τ starting from the initial states of the processes Jobshop and Abs_Jobshop, by reason of (6.3) we conclude that (6.1) holds.
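The state count can be checked mechanically. A Python sketch that builds the reachable part of (Jobber | Jobber | Mallet) \ L under our own encoding (get and put abbreviate get_and_work and put); the full product has 4 · 4 · 2 = 32 states, of which only those respecting the mallet's mutual exclusion are reachable:

```python
from itertools import product

# component automata, written as {state: [(action, next_state), ...]}
jobber = {'J': [('in?', 'S')], 'S': [('get!', 'U')],
          'U': [('put!', 'F')], 'F': [('out!', 'J')]}
mallet = {'M': [('get?', 'B')], 'B': [('put?', 'M')]}
HIDDEN = {'get', 'put'}           # the restriction set L

def sys_moves(state):
    """Transitions of (Jobber | Jobber | Mallet) \\ L at a composite state."""
    comps = [jobber, jobber, mallet]
    res = []
    for i, c in enumerate(comps):                 # interleaved visible moves
        for (a, s2) in c.get(state[i], []):
            if a[:-1] not in HIDDEN:
                nxt = list(state); nxt[i] = s2
                res.append((a, tuple(nxt)))
    for i, ci in enumerate(comps):                # synchronizations -> tau
        for j, cj in enumerate(comps):
            if i >= j:
                continue
            for (a, s2) in ci.get(state[i], []):
                for (b, t2) in cj.get(state[j], []):
                    if a[:-1] == b[:-1] and a[-1] != b[-1]:
                        nxt = list(state); nxt[i] = s2; nxt[j] = t2
                        res.append(('tau', tuple(nxt)))
    return res

total = len(list(product(jobber, jobber, mallet)))    # 4 * 4 * 2 = 32
seen, stack = {('J', 'J', 'M')}, [('J', 'J', 'M')]
while stack:
    for (_, t) in sys_moves(stack.pop()):
        if t not in seen:
            seen.add(t)
            stack.append(t)
print(total, len(seen))   # 32 15
```

In particular, no reachable state has both workers in state Using at once, which is exactly the mutual exclusion enforced by the shared mallet.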

6.3  Dispatcher

Suppose that
· there is some company which consists of several groups G₁, ..., Gₙ, and
· there is a special room in the building where the company works, such that any group Gᵢ (i ∈ {1, ..., n}) can use this room to conduct its workshops.
There is a problem of conflict-free use of the room by the groups G₁, ..., Gₙ: while one of the groups conducts a workshop in the room, the other groups must be prohibited from holding their workshops in this room.
This problem can be solved by use of a special process which is called a dispatcher. If a group Gᵢ wants to hold a workshop in the room, then Gᵢ should send the dispatcher a request for the right to use the room for the workshop. If the dispatcher knows that at this time the room is busy, then he does not allow Gᵢ to use the room. When the room becomes free, the dispatcher sends Gᵢ a notice that he allows the group Gᵢ to use the room. After completing the workshop, the group Gᵢ must send the dispatcher a notice that the room is free.
Consider a description of this system in terms of the theory of processes.
The behavior of the dispatcher is described by the process D, whose graph representation consists of the following subgraphs: for each i = 1, ..., n it contains the subgraph

    D −reqᵢ?→ dᵢ₁ −acqᵢ!→ dᵢ₂ −relᵢ?→ D

i.e.

    D ~ Σᵢ₌₁ⁿ reqᵢ?. acqᵢ!. relᵢ?. D

Actions from Act(D) have the following meanings:
· reqᵢ? is a receiving of a request from the group Gᵢ
· acqᵢ! is a sending to Gᵢ of a notice that Gᵢ may use the room
· relᵢ? is a receiving of a message that Gᵢ has released the room.
In the following description of the behavior of each group Gᵢ
· we shall describe only the interaction of Gᵢ
  – with the dispatcher, and
  – with the room,
· and we shall not deal with the other functions of Gᵢ.
We denote
· the beginning of a workshop in the room by the action start!, and
· the completion of the workshop by the action finish!.
The behavior of the group Gᵢ is described by a process Gᵢ which has the following graph representation:

    gᵢ₀ −reqᵢ!→ gᵢ₁ −acqᵢ?→ gᵢ₂ −start!→ gᵢ₃ −finish!→ gᵢ₄ −relᵢ!→ gᵢ₀

i.e. Gᵢ ~ reqᵢ!. acqᵢ?. start!. finish!. relᵢ!. Gᵢ.
The joint behavior of the dispatcher and the groups can be described as the following process Sys:

    Sys = (D | G₁ | ... | Gₙ) \ L

where L = {reqᵢ, acqᵢ, relᵢ | i = 1, ..., n}.
The flow graph of the process Sys for n = 2 (figure omitted) consists of the ovals G₁, D and G₂; each Gᵢ is connected to D by arrows labelled reqᵢ, acqᵢ and relᵢ, and the ports start and finish of the groups remain unconnected.
We now show that the processes which represent the behavior of the dispatcher and the groups indeed provide a conflict-free regime of use of the room. The conflict-free property is that
· after the start of a workshop in the room by any group (i.e. after an execution of the action start! by this group), and
· before the completion of this workshop
no other group may hold a workshop in this room (i.e. execute the action start!) until the first group has completed its workshop (i.e. until it has executed the action finish!).
Define a process Spec as follows:

    spec₀ −start!→ spec₁ −finish!→ spec₀

i.e. Spec ~ start!. finish!. Spec.
The conflict-free property of the regime of use of the room is equivalent to the following statement:

    Sys ≈ Spec                                                        (6.4)

To prove this statement, we transform the process Sys, applying the expansion theorem several times (together with the τ-laws of theorem 23):

    Sys ≈⁺ Σᵢ₌₁ⁿ τ.( (acqᵢ!.relᵢ?.D | G₁ | ... | acqᵢ?.start!.finish!.relᵢ!.Gᵢ | ... | Gₙ) \ L )
        ≈⁺ Σᵢ₌₁ⁿ τ.τ.( (relᵢ?.D | G₁ | ... | start!.finish!.relᵢ!.Gᵢ | ... | Gₙ) \ L )
        ≈⁺ Σᵢ₌₁ⁿ τ.τ.start!.( (relᵢ?.D | G₁ | ... | finish!.relᵢ!.Gᵢ | ... | Gₙ) \ L )
        ≈⁺ Σᵢ₌₁ⁿ τ.τ.start!.finish!.( (relᵢ?.D | G₁ | ... | relᵢ!.Gᵢ | ... | Gₙ) \ L )
        ≈⁺ Σᵢ₌₁ⁿ τ.τ.start!.finish!.τ.( (D | G₁ | ... | Gₙ) \ L )
        =  Σᵢ₌₁ⁿ τ.τ.start!.finish!.τ.Sys

Using the rules P + P ≈⁺ P and τ.τ.P ≈⁺ τ.P we get the statement

    Sys ≈⁺ τ.start!.finish!.Sys

We now consider the equation

    X = τ.start!.finish!.X                                            (6.5)

According to theorem 33 from section 5.8, there is a unique (up to ≈⁺) solution of equation (6.5).
As shown above, the process Sys is a solution of (6.5) up to ≈⁺.
The process τ.Spec is also a solution of (6.5) up to ≈⁺, because

    τ.Spec ≈⁺ τ.start!.finish!.Spec ≈⁺ τ.start!.finish!.(τ.Spec)

Consequently, the following statement holds:

    Sys ≈⁺ τ.Spec

This statement implies (6.4).
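The conflict-free property can also be verified by exhaustive exploration of the composite state space. A Python sketch under our own encoding of D and the groups Gᵢ (group state 3 means "between start! and finish!", i.e. inside the room):

```python
# each component is written as {state: [(action, next_state), ...]}
def group(i):
    return {0: [(f'req{i}!', 1)], 1: [(f'acq{i}?', 2)],
            2: [('start!', 3)], 3: [('finish!', 4)], 4: [(f'rel{i}!', 0)]}

def dispatcher(n):
    d = {'D': []}
    for i in range(1, n + 1):
        d['D'].append((f'req{i}?', ('d', i, 1)))
        d[('d', i, 1)] = [(f'acq{i}!', ('d', i, 2))]
        d[('d', i, 2)] = [(f'rel{i}?', 'D')]
    return d

def check_conflict_free(n=2):
    """Explore all reachable states of Sys = (D | G1 | ... | Gn) \\ L and
    check that no two groups are inside the room simultaneously."""
    comps = [dispatcher(n)] + [group(i) for i in range(1, n + 1)]
    hidden = {f'{p}{i}' for p in ('req', 'acq', 'rel')
              for i in range(1, n + 1)}
    start = tuple(['D'] + [0] * n)
    seen, stack = {start}, [start]
    while stack:
        st = stack.pop()
        assert sum(1 for g in st[1:] if g == 3) <= 1, f'conflict in {st}'
        for i, comp in enumerate(comps):
            for (a, s2) in comp.get(st[i], []):
                if a[:-1] not in hidden:          # visible: start!/finish!
                    nxt = list(st); nxt[i] = s2
                    nxt = tuple(nxt)
                    if nxt not in seen:
                        seen.add(nxt); stack.append(nxt)
                else:                             # hidden: must synchronize
                    for j, other in enumerate(comps):
                        if j == i:
                            continue
                        for (b, t2) in other.get(st[j], []):
                            if b[:-1] == a[:-1] and b[-1] != a[-1]:
                                nxt = list(st); nxt[i] = s2; nxt[j] = t2
                                nxt = tuple(nxt)
                                if nxt not in seen:
                                    seen.add(nxt); stack.append(nxt)
    return len(seen)

print(check_conflict_free(2))   # 9 reachable composite states for n = 2
```

The exploration confirms the mutual exclusion directly, independently of the algebraic derivation above; it does not by itself establish the liveness content of (6.4).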

6.4

Scheduler
P1 , . . . , P (6.6)

Suppose that there are n processes
n

and for each i = 1, . . . , n the set Act(Pi ) contains two special actions: · the action i ?, which can be interpreted as a signal Pi starts its regular session · the action i ?, which can be interpreted as a signal Pi completes its regular session We assume that · all the names 1 , . . . , n , 1 , . . . , are different, and · i = 1, . . . , n each name from names(Act(Pi )) \ {i , i } does not belong to the set (6.9). 139
n

(6.7)

(6.8)

(6.9)


Let L be the set (6.9). For each i = 1, . . . , n the actions from the set Act(Pi ) \ {i ?, i ?} are said to be prop er actions of the process Pi . An arbitrary trace of each process Pi may contain any quantity of the actions i ? and i ? in any order. We would like to create a new process P , in which all the processes P1 , . . ., Pn would work together, and this joint work should obey certain regime. The process P must have the form P = (P1 | . . . | Pn | S ch) \ L where the process S ch · is called a scheduler, and · is designed for an establishing of a required regime of an execution of the processes P1 , . . ., Pn . Non-internal actions, which may be executed by the process S ch, must belong to the set {1 !, . . . , n !, 1 !, . . . , n !} (6.10) By the definition of the process P , for each i = 1, . . . , n · the actions i ? and i ? can be executed by the process Pi (6.6) within the process P only simultaneously with an execution of complementary actions by the process S ch, and · an execution of these actions will be invisible outside the process P . Informally speaking, each process Pi , which is executed within the process P , may start or complete its regular session if and only if the scheduler S ch allows him to do it. A regime, which must be respected by the processes P1 , . . ., Pn , during their execution within the process P , consists of the following two conditions. 1. For each i = 1, . . . , n an arbitrary trace of the process Pi , which is executed within the process P , should have the form i ? . . . i ? . . . i ? . . . i ? . . . 140


(where the dots represent proper actions of the process Pi ), i.e. an execution of the process Pi should be a sequence of sessions of the form i ? . . . i ? . . . where each session · starts with an execution of the action i ? · then several proper actions of Pi are executed, · after a completion of the session the action i ? is executed, and · then Pi can execute some proper actions (for example, these actions can be related to a preparation to the next session). 2. The processes P1 , . . ., Pn are obliged to start their new sessions in rotation, i.e. · at first, only P1 may start its first session · then, P2 may start its first session · ... · then, Pn may start its first session · then, P1 may start its second session · then, P2 may start its second session · etc. Note that we do not require that each process Pi may receive a permission to start its k -th session only after the previous process Pi-1 completes its k -th session. However, we require that each process Pi may receive a permission to start a new session, only if Pi executed the action i ? (which signalizes a completion of a previous session of Pi ). Proper actions of the processes P1 , . . ., Pn can be executed in arbitrary order, and it is allowably an interaction of these processes during their execution within the process P . The described regime can be formally expressed as the following two conditions on an arbitrary trace tr T r(S ch) 141


In these conditions we shall use the following notation: if tr T r(S ch) and M Act then tr | M denotes a sequence of actions, which is derived from tr by a removal of all actions which do not belong to M . Conditions which describe the above regime have the following form: tr T r(S ch), i = 1, . . . , n tr | {i ,i } = (i ! i ! i ! i ! i ! i ! . . .) and tr T r(S ch) tr | {1 ,...,n } = (1 ! . . . n ! 1 ! . . . n ! . . .) (6.11)

(6.12)

These conditions can be expressed as observational equivalence of certain processes. To define these processes, we introduce auxiliary notations. 1. Let a1 . . . an be a sequence of actions from Act. Then the string (a1 . . . an ) denotes a process which has the following graph representation
c # a
1

a

n



a2 E E ! "

...

an-E 1


2. Let P be a process, and {a1 , . . . , ak } Act \ { } be a set of actions. The string hide (P, a1 , . . . , ak ) denotes the process ( P | ( a1 ) | . . . | ( ak ) ) \ names({a1 , . . . , ak }) 142 (6.14) (6.13)


Process (6.14) can be considered as a process, which is obtained from P by a replacement on of all labels of transitions of P , which belong to the set (6.13). Using these notations, · condition (6.11) can be expressed as follows: for each i = 1, . . . n hide S ch, 1 !, . . . , i-1 !, i+1 !, . . . , n ! 1 !, . . . , i-1 !, i+1 !, . . . , n ! (i !. i !) (6.15)

and · condition (6.12) can be expressed as follows: hide (S ch, 1 !, . . . , n !) (1 !. . . . n !) (6.16)

It is easy to see that there are several schedulers that satisfy these conditions. For example, the following schedulers satisfy these conditions: · S ch = (1 ! 1 ! . . . n ! n !) · S ch = (1 ! . . . n ! 1 ! . . . n !) However, these schedulers impose too large restrictions on an execution of the processes P1 , . . . , Pn . We would like to construct such a scheduler that allows a maximal freedom of a joint execution of the processes P1 , . . . , Pn within the process P . This means that if at any time · the process Pi has an intention to execute an action a {i ?, i ?}, and · this intention of the process Pi does not contradict to the regime which is described above then the scheduler should not prohibit Pi to execute this action at the current time, i.e. the action a must be among actions, which the scheduler can execute at the current time. The above informal description of a maximal freedom of an execution of a scheduler can be formally clarified as follows: · each state s of the scheduler be associated with a pair (i, X ), where 143


– i ∈ {1, . . . , n}: i is the number of the process which has the right to start its regular session at the current time
– X ⊆ {1, . . . , n}: X is the set of active processes at the current time (a process is said to be active if it has started its regular session but has not completed it yet)

· an initial state of the scheduler is associated with the pair (1, ∅)

· a set of transitions of the scheduler consists of

– transitions of the form s --αi!--> s′, where s is associated with a pair (i, X) such that i ∉ X, s′ is associated with (next(i), X ∪ {i}), and

next(i) =def i + 1, if i < n, and next(i) =def 1, if i = n

– transitions of the form s --βj!--> s′, where s is associated with (i, X), s′ is associated with (i, X \ {j}), and j ∈ X.

The above description of the properties of the required scheduler can be considered as its definition, i.e. we can define the required scheduler as a process Sch0 with the following components:

· the set of its states is the set of pairs

{(i, X) ∈ {1, . . . , n} × P({1, . . . , n})}

· an initial state and transitions of Sch0 are defined as described above.
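To make the size of this state space concrete, the description above can be transcribed directly; the sketch below (an illustration in Python, with the invented action strings 'ai!' and 'bj!' standing for the start and completion permissions) enumerates the states and transitions of Sch0 and shows that the number of states grows exponentially with n.

```python
from itertools import combinations

def sch0(n):
    """States and transitions of the reference scheduler Sch0 (n processes)."""
    procs = list(range(1, n + 1))
    subsets = [frozenset(c) for r in range(n + 1) for c in combinations(procs, r)]
    states = [(i, X) for i in procs for X in subsets]
    nxt = lambda i: i % n + 1
    trans = []
    for (i, X) in states:
        if i not in X:
            # alpha_i!: process i starts a session; the turn passes to next(i)
            trans.append(((i, X), f'a{i}!', (nxt(i), X | {i})))
        for j in X:
            # beta_j!: an active process j completes its session
            trans.append(((i, X), f'b{j}!', (i, X - {j})))
    return states, trans

states, trans = sch0(3)
print(len(states))  # 3 * 2**3 = 24: the state space grows exponentially with n
```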


The definition of the scheduler Sch0 has a significant deficiency: the size of the set of states of Sch0 depends exponentially on the number of processes (6.6), which does not allow one to quickly modify such a scheduler when the set of processes (6.6) is changed. We can use Sch0 only as a reference with which other schedulers will be compared. To solve the original problem we define another scheduler Sch. We will describe it

· not by an explicit description of its states and transitions, but
· by an expression which describes Sch as a composition of several simple processes.

In the description of the scheduler Sch we shall use new names γ1, . . . , γn. Denote the set of these names by the symbol Γ. The process Sch is defined as follows:

Sch =def ( Start | C1 | . . . | Cn ) \ Γ    (6.17)

where

· Start =def γ1!. 0
· for each i = 1, . . . , n the process Ci is called a cycler and has the form
Ci = γi?. αi!. γnext(i)!. βi!. Ci

i.e. its graph representation is a cycle of four states whose consecutive transitions are labelled γi?, αi!, γnext(i)!, βi!, the last transition returning to the initial state.


A flow graph of Sch in the case n = 4 has the following form: the process Start is connected by the name γ1 with the cycler C1, and the cyclers C1, C2, C3, C4 are connected in a ring, each Ci being connected with Cnext(i) by the name γnext(i); in addition, each cycler Ci has the external names αi and βi.
We give an informal explanation of an execution of the process Sch. The cycler Ci is said to be

· disabled if it is in its initial state, and
· enabled if it is not in its initial state.

The process Start enables the first cycler C1 and then "dies". Each cycler Ci is responsible for an execution of the process Pi. The cycler Ci

· enables the next cycler Cnext(i) after it has given the process Pi a permission to start a regular session, and
· becomes disabled after it has given the process Pi a permission to complete a regular session.
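The token-passing behaviour described above can be illustrated by a small simulation (a sketch; the function and action names are invented, and one particular eager interleaving of the ring is fixed): under this interleaving the permissions αi appear in strict cyclic order.

```python
from collections import deque

def run_ring(n, rounds):
    """Token-passing sketch of Start | C1 | ... | Cn: each cycler, on
    receiving its token gamma_i, emits alpha_i!, passes gamma_next(i),
    then emits beta_i! (one eager interleaving of the ring)."""
    visible = []
    tokens = deque([1])              # Start emits gamma_1! and dies
    while len(visible) < 2 * n * rounds:
        i = tokens.popleft()         # gamma_i? : cycler C_i becomes enabled
        visible.append(f'a{i}')      # alpha_i! : P_i may start a session
        tokens.append(i % n + 1)     # gamma_next(i)! : enable the next cycler
        visible.append(f'b{i}')      # beta_i!  : P_i may complete the session
    return visible

print(run_ring(3, 2))
# ['a1', 'b1', 'a2', 'b2', 'a3', 'b3', 'a1', 'b1', 'a2', 'b2', 'a3', 'b3']
```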


Prove that process (6.17) satisfies condition (6.16) (we omit checking of condition (6.15)). According to the definition of process (6.14), condition (6.16) has the form

( Sch | (β1?)∗ | . . . | (βn?)∗ ) \ B ≈ (α1!. . . . αn!)∗    (6.18)

where B = {β1, . . . , βn}. Let Sch′ be the left side of (6.18). Prove that

Sch′ ≈+ τ. α1!. . . . αn!. Sch′    (6.19)

Hence by the uniqueness property (with respect to ≈+) of a solution of the equation

X = τ. α1!. . . . αn!. X

we get the statement

Sch′ ≈ (α1!. . . . αn!)∗

which implies (6.18). We will convert the left side of the statement (6.19) so as to obtain the right side of this statement. To do this, we will use properties 8, 11 and 12 of operations on processes, which are contained in section 3.7. We recall these properties:

· P \ L = P, if L ∩ names(Act(P)) = ∅
· (P1 | P2) \ L = (P1 \ L) | (P2 \ L), if L ∩ names(Act(P1) ∩ Act(P2)) = ∅
· (P \ L1) \ L2 = P \ (L1 ∪ L2) = (P \ L2) \ L1

Using these properties, it is possible to convert the left side of (6.19) as follows:

Sch′ = ( Sch | (β1?)∗ | . . . | (βn?)∗ ) \ B =
= ( ((Start | C1 | . . . | Cn) \ Γ) | (β1?)∗ | . . . | (βn?)∗ ) \ B =    (6.20)
= ( Start | C1′ | . . . | Cn′ ) \ Γ

where Ci′ = (Ci | (βi?)∗) \ {βi}.


Note that for each i = 1, . . . , n the following statement holds:

Ci′ ≈+ γi?. αi!. γnext(i)!. Ci′    (6.21)

Indeed, by the expansion theorem,

Ci′ = ( (γi?. αi!. γnext(i)!. βi!. Ci) | (βi?)∗ ) \ {βi} ≈+
≈+ γi?. αi!. γnext(i)!. τ. Ci′ ≈+ right side of (6.21)

Using this remark and the expansion theorem, we can continue the chain of equalities (6.20) as follows:

( Start | C1′ | C2′ | . . . | Cn′ ) \ Γ =
= ( γ1!. 0 | γ1?. α1!. γ2!. C1′ | C2′ | . . . | Cn′ ) \ Γ ≈+
≈+ τ. ( 0 | α1!. γ2!. C1′ | C2′ | . . . | Cn′ ) \ Γ ≈+
≈+ τ. α1!. ( γ2!. C1′ | C2′ | . . . | Cn′ ) \ Γ ≈+    (6.22)
≈+ τ. α1!. ( γ2!. C1′ | γ2?. α2!. γ3!. C2′ | . . . | Cn′ ) \ Γ ≈+
≈+ τ. α1!. α2!. ( C1′ | γ3!. C2′ | . . . | Cn′ ) \ Γ ≈+
. . . . . . . .
≈+ τ. α1!. . . . αn!. ( C1′ | . . . | Cn−1′ | γ1!. Cn′ ) \ Γ ≈+
≈+ τ. α1!. . . . αn!. τ. ( α1!. γ2!. C1′ | C2′ | . . . | Cn′ ) \ Γ

The expression in parentheses on the last line of the chain coincides (up to the summand 0) with the expression in parentheses on the third line of the chain; hence the last line is observationally congruent to τ. α1!. . . . αn!. Sch′. We have found that the last expression of the chain (6.22) is observationally congruent to both the left side and the right side of (6.19). Thus, the statement (6.19) is proven. The following problems are left to the reader as exercises.


1. Prove
· condition (6.15), and
· the statement Sch ≈ Sch0.

2. Define and verify a scheduler that manages a set P1, . . . , Pn of processes with priorities, in which each process Pi is associated with a certain priority, represented by a number pi ∈ [0, 1], where

p1 + . . . + pn = 1

The scheduler must implement a regime of a joint execution of the processes P1, . . . , Pn with the following properties:

· for each i = 1, . . . , n the proportion of the number of sessions completed by the process Pi, relative to the total number of sessions completed by all the processes P1, . . . , Pn, must asymptotically approach pi as the time of an execution of the processes P1, . . . , Pn increases without bound

· this scheduler should provide a maximal freedom of an execution of the processes P1, . . . , Pn.
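As a hint for problem 2, one classical discipline that yields the required proportions is "largest deficit first": at each step, grant a session to the process whose completed-session count lags furthest behind its target share pi · t. The sketch below is only an illustration of this discipline (the names are invented), not a process-algebraic definition of the scheduler.

```python
def run_priorities(p, steps):
    """Grant sessions so that process i's share of sessions approaches p[i]."""
    n = len(p)
    done = [0] * n
    for t in range(1, steps + 1):
        # deficit of process i: its target share p[i]*t minus sessions done
        i = max(range(n), key=lambda i: p[i] * t - done[i])
        done[i] += 1
    return [d / steps for d in done]

shares = run_priorities([0.5, 0.3, 0.2], 10000)
print(shares)  # close to [0.5, 0.3, 0.2]
```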

6.5 Semaphore

Let P1, . . . , Pn be a list of processes, and for each i = 1, . . . , n the process Pi has the following form:

Pi = (αi? ai1 . . . aiki βi?)∗

where

· αi? and βi? are special actions representing signals that
– Pi started an execution of a regular session, and
– Pi completed an execution of a regular session
respectively, and

· ai1, . . . , aiki are proper actions of the process Pi.

We would like to create a process P in which all the processes P1, . . . , Pn work together, and this joint work obeys the following regime:

· if at some time of an execution of the process P any process Pi has started its regular session (by an execution of the action αi?)
· then this session must be uninterrupted, i.e. all subsequent actions of the process P must be actions of the process Pi, until Pi completes this session (by an execution of the action βi?).

This requirement can be expressed in terms of traces: each trace of the process P must have the form

αi? ai1 . . . aiki βi? αj? aj1 . . . ajkj βj? . . .

i.e. each trace tr of the process P must be a concatenation of traces

tr1 · tr2 · tr3 . . .

where each trace tri in this concatenation represents a session of some process from the list P1, . . . , Pn. The required process P is defined as follows:

P =def ( P1[f1] | . . . | Pn[fn] | Sem ) \ {µ, ν}

where

· Sem is a special process designed to establish the required regime of an execution of the processes P1, . . . , Pn; this process
– is called a semaphore, and
– has the form Sem = (µ! ν!)∗

· for each i = 1, . . . , n the renaming fi maps αi to µ and βi to ν.


A specification of the process P is represented by the following statement:

P ≈+ τ. a11. . . . a1k1. P + . . . + τ. an1. . . . ankn. P    (6.23)

A proof that the process P meets this specification is performed by means of the expansion theorem:

P = ( P1[f1] | . . . | Pn[fn] | Sem ) \ {µ, ν} =
= ( µ?. a11. . . . a1k1. ν?. P1[f1] | . . . | µ?. an1. . . . ankn. ν?. Pn[fn] | µ!. ν!. Sem ) \ {µ, ν} ≈+
≈+ τ. ( a11. . . . a1k1. ν?. P1[f1] | . . . | µ?. an1. . . . ankn. ν?. Pn[fn] | ν!. Sem ) \ {µ, ν} + . . .
. . . + τ. ( µ?. a11. . . . a1k1. ν?. P1[f1] | . . . | an1. . . . ankn. ν?. Pn[fn] | ν!. Sem ) \ {µ, ν} ≈+
. . .
≈+ τ. a11. . . . a1k1. τ. P + . . . + τ. an1. . . . ankn. τ. P ≈+
≈+ τ. a11. . . . a1k1. P + . . . + τ. an1. . . . ankn. P
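The effect of the semaphore can also be checked by a direct simulation (an illustrative sketch with invented names; the µ- and ν-synchronizations are modelled by a single "holder" token): whatever random choices are made, the produced trace is a concatenation of sessions, in accordance with the regime above.

```python
import random

def run_semaphore(sessions, steps, seed=0):
    """sessions[i] is the list of proper actions of process P_i.
    Returns a trace in which sessions never interleave."""
    rng = random.Random(seed)
    n = len(sessions)
    holder = None        # index of the process currently inside a session
    pos = [0] * n        # position of each process inside its session body
    trace = []
    for _ in range(steps):
        if holder is None:
            # Sem offers mu!; any process may take it and start a session
            holder = rng.randrange(n)
            trace.append(f'alpha{holder}')
        elif pos[holder] < len(sessions[holder]):
            trace.append(sessions[holder][pos[holder]])
            pos[holder] += 1
        else:
            # Sem offers nu!; the running process completes its session
            trace.append(f'beta{holder}')
            pos[holder] = 0
            holder = None
    return trace

print(run_semaphore([['a11', 'a12'], ['a21']], 20))
```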

Finally, pay attention to the following aspect. The prefix "τ." in each summand of the right side of (6.23) means that a choice of a variant of an execution of the process P at the initial time is determined

· not by an environment of the process P, but
· by the process P itself.

If this prefix were absent, then it would mean that a choice of a variant of an execution of the process P at the initial time is determined by an environment of the process P.



Chapter 7 Processes with a message passing
7.1 Actions with a message passing

The concept of a process which was introduced and studied in the previous chapters can be generalized in different ways. One such generalization consists of adding to actions from Act some parameters (or modalities), i.e. one considers processes with actions of the form

(a, p)

where a ∈ Act, and p is a parameter which may have the following meanings:

· a complexity (or a cost) of an execution of the action a
· a priority (or a desirability, or a plausibility) of the action a with respect to other actions
· a time (or an interval of time) at which the action a was executed
· a probability of an execution of the action a
· or anything else.

In this chapter we consider a variant of such a generalization which is related to an addition of messages to actions from Act. These messages are transmitted together with an execution of the actions.


Recall our informal interpretation of the concept of an execution of an action:

· the action α! is executed by sending an object whose name is α, and
· the action α? is executed by receiving an object whose name is α.

We generalize this interpretation as follows. We shall assume that processes can send or receive not only objects, but also pairs of the form

(object, message)

i.e. an action may have the form

α!v or α?v    (7.1)

where α ∈ Names, and v is a message, which can be

· a string of symbols,
· a material resource,
· a bill,
· etc.

An execution of the actions α!v and α?v consists of sending or receiving the object α together with the message v. Recall that such entities as

· a transferred object, and
· receiving and sending of objects

can have a virtual character (for more details see section 2.3). For a formal description of processes that can execute actions of the form (7.1), we generalize the concept of a process.

7.2 Auxiliary concepts

7.2.1 Types, variables, values and constants

We assume that there is given a set Types of types, and each type t ∈ Types is associated with a set Dt of values of the type t. Types can be denoted by identifiers. We shall use the following identifiers:

· the type of integers is denoted by int
· the type of boolean values (0 and 1) is denoted by bool
· the type of messages is denoted by mes
· the type of lists of messages is denoted by list.

Also, we assume that there are given the following sets.

1. The set Var, whose elements are called variables. Every variable x ∈ Var
· is associated with a type t(x) ∈ Types, and
· can take values in the set Dt(x), i.e. at different times the variable x can be associated with various elements of the set Dt(x).

2. The set Con, whose elements are called constants. Every constant c ∈ Con is associated with
· a type t(c) ∈ Types, and
· a value [[c]] ∈ Dt(c), which is said to be an interpretation of the constant c.

7.2.2 Functional symbols

We assume that there is given a set of functional symbols (FSs), and each FS f is associated with

· a functional type t(f), which is a list of the form

(t1, . . . , tn) → t    (7.2)

where t1, . . . , tn, t ∈ Types, and

· a function

[[f]] : Dt1 × . . . × Dtn → Dt

which is called an interpretation of the FS f.

Examples of FSs: +, −, ·, head, tail, [ ], length, where

· the FSs + and − have the functional type (int, int) → int; the functions [[+]] and [[−]] are the corresponding arithmetic operations

· the FS · has the functional type (list, list) → list; the function [[·]] maps each pair of lists (u, v) to their concatenation (which is obtained by writing v on the right of u)

· the FS head has the functional type list → mes; the function [[head]] maps each nonempty list to its first element (a value of [[head]] on an empty list can be arbitrary)

· the FS tail has the functional type list → list; the function [[tail]] maps each nonempty list u to the list which is derived from u by removing its first element (a value of [[tail]] on an empty list can be arbitrary)

· the FS [ ] has the functional type mes → list; the function [[ [ ] ]] maps each message to the list which consists only of this message

· the FS length has the functional type list → int; the function [[length]] maps each list to its length (a length of a list is the number of messages in this list)

7.2.3 Expressions

Expressions consist of variables, constants, and FSs, and are constructed in a standard way. Each expression e has a type t(e) ∈ Types, which is defined by the structure of this expression. Rules of constructing expressions have the following form.

· Each variable or constant is an expression of the type that is associated with this variable or constant.

· If
– f is an FS of the functional type (7.2), and
– e1, . . . , en are expressions of the types t1, . . . , tn respectively
then the string f(e1, . . . , en) is an expression of the type t.

Let e be an expression. If each variable x occurring in e is associated with a value ξ(x), then the expression e can be associated with a value ξ(e) which is defined in a standard way:

· if e = x ∈ Var, then ξ(e) =def ξ(x) (the value ξ(x) is assumed to be given)
· if e = c ∈ Con, then ξ(e) =def [[c]]
· if e = f(e1, . . . , en), then ξ(e) =def [[f]](ξ(e1), . . . , ξ(en))

Below we shall use the following notations.

· The symbol E denotes the set of all expressions.


· The symbol B denotes the set of expressions of the type bool. Expressions from B are called formulas. In constructing formulas, boolean connectives (¬, ∧, ∨, etc.) may be used, interpreted in the standard way. The symbol ⊤ denotes a true formula, and the symbol ⊥ denotes a false formula.

Formulas of the form ∧(b1, b2), ∨(b1, b2), etc. we shall write in the more familiar form b1 ∧ b2, b1 ∨ b2, etc. In some cases, formulas of the form b1 ∧ . . . ∧ bn and b1 ∨ . . . ∨ bn will be written as vertical lists of the formulas b1, . . . , bn, understood as a conjunction and a disjunction respectively.

· Expressions of the form +(e1, e2), −(e1, e2) and ·(e1, e2) will be written in the more familiar form e1 + e2, e1 − e2 and e1 · e2.

· Expressions of the form head(e), tail(e), [ ](e), and length(e) will be written in the form ê, e′, [e] and |e|, respectively.

· A constant c of the type list such that [[c]] is an empty list will be denoted by the symbol ε.
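The recursive definition of ξ(e) given above can be transcribed directly; the sketch below (an illustration with an invented tuple encoding of expressions and ASCII names for the FSs) evaluates an expression under an evaluation ξ represented as a dictionary.

```python
# interpretations of some FSs from section 7.2.2 (illustrative ASCII names)
FS = {
    '+': lambda u, v: u + v,
    '-': lambda u, v: u - v,
    'conc': lambda u, v: u + v,          # concatenation of lists
    'head': lambda u: u[0],
    'tail': lambda u: u[1:],
    'wrap': lambda m: [m],               # the FS [ ]
    'length': len,
}

def evaluate(e, xi):
    """Value xi(e) of expression e under evaluation xi.
    e is ('var', x), ('con', c) or ('app', f, [e1, ..., en])."""
    kind = e[0]
    if kind == 'var':
        return xi[e[1]]                  # the value xi(x) is assumed given
    if kind == 'con':
        return e[1]                      # the interpretation [[c]]
    _, f, args = e
    return FS[f](*(evaluate(a, xi) for a in args))

xi = {'q': [1, 2, 3]}
e = ('app', 'head', [('app', 'tail', [('var', 'q')])])
print(evaluate(e, xi))  # 2
```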

7.3 A concept of a process with a message passing

In this section we present a concept of a process with a message passing. This concept is derived from the original concept of a process presented in section 2.4 by the following modification.

· Among the components of a process P there are the following additional components:
– the component XP, which is called a set of variables of the process P, and
– the component IP, which is called an initial condition of the process P.

· Transitions are labelled not by actions, but by operators.

Before giving a formal definition of a process with a message passing, we shall explain the meaning of the above concepts. For brevity, in this chapter we shall refer to processes with a message passing simply as processes.

7.3.1 A set of variables of a process

We assume that each process P is associated with a set of variables

XP ⊆ Var

At any time i of an execution of a process P (i = 0, 1, 2, . . .) each variable x ∈ XP is associated with a value ξi(x) ∈ Dt(x). Values of the variables may be modified during an execution of the process. An evaluation of variables from XP is a family of values associated with these variables, i.e.

ξ = { ξ(x) ∈ Dt(x) | x ∈ XP }

The notation Eval(XP) denotes the set of all evaluations of variables from XP. For each time i ≥ 0 of an execution of a process P the notation ξi denotes the evaluation of variables from XP at this time. Below we shall assume that for each process P all expressions referring to the process P contain variables only from the set XP.

7.3.2 An initial condition

Another new component of a process P is a formula IP ∈ B, which is called an initial condition. This formula expresses a condition on the evaluation ξ0 of variables from XP at the initial time of an execution of P: ξ0 must satisfy the condition

ξ0(IP) = 1


7.3.3 Operators

The main difference between the new definition of a process and the old one is that

· in the old definition a label of each transition is an action which is executed by a process on a performance of this transition, and
· in the new definition a label of each transition is an operator, i.e. a scheme of an action, which takes a specific form only at the time of a performance of this transition.

In the definition of an operator we shall use the same set Names which was introduced in section 2.3. The set of all operators is divided into the following four classes.

1. Input operators, which have the form

α?x    (7.3)

where α ∈ Names and x ∈ Var. An action corresponding to the operator (7.3) is executed by

· an input to a process of an object of the form (α, v), where
– α is the name referred to in (7.3), and
– v is a message
and
· a record of the message v in the variable x

i.e. after an execution of this action the value of the variable x becomes equal to v.

2. Output operators, which have the form

α!e    (7.4)

where α ∈ Names and e ∈ E. An action corresponding to the operator (7.4) is executed by an output from a process of an object of the form (α, v), where

· α is the name referred to in (7.4), and
· v is the value of the expression e on a current evaluation of variables of the process.

3. Assignments (first type of internal operators), which have the form

x := e    (7.5)

where
· x ∈ Var, and
· e ∈ E, where t(e) = t(x)

An action corresponding to the operator (7.5) is executed by an updating of the value associated with the variable x: after an execution of this operator this value becomes equal to the value of the expression e on a current evaluation of variables of the process.

4. Conditional operators (second type of internal operators), which have the form

b

where b ∈ B. An action corresponding to the operator b is executed by a calculation of the value of the formula b on a current evaluation of variables of the process, and

· if this value is 0, then an execution of the whole action is impossible, and
· if this value is 1, then the execution is completed.

The set of all operators is denoted by the symbol O.

7.3.4 Definition of a process

A process is a 5-tuple P of the form

P = (XP, IP, SP, s0P, RP)    (7.6)

whose components have the following meanings:

1. XP ⊆ Var is a set of variables of the process P
2. IP ∈ B is a formula, called an initial condition of the process P
3. SP is a set of states of the process P
4. s0P ∈ SP is an initial state
5. RP is a subset of the form RP ⊆ SP × O × SP

Elements of RP are called transitions. If a transition from RP has the form (s1, op, s2), then we denote it as s1 --op--> s2 and say that

· the state s1 is a start of this transition,
· the state s2 is an end of this transition,
· the operator op is a label of this transition.

Also, we assume that for each process P the set XP contains a special variable atP, which takes values in SP.

7.3.5 An execution of a process

Let P be a process of the form (7.6). An execution of the process P is a traversal of the set SP of its states

· starting from the initial state s0P,
· through transitions from RP, and
· with an execution of operators which are labels of visited transitions.

In more detail: at each step i ≥ 0 of an execution

· the process P is in some state si (s0 = s0P)
· there is defined an evaluation ξi ∈ Eval(XP) (ξ0(IP) must be equal to 1)
· if there is a transition from RP starting at si, then the process
– selects a transition starting at si which is labelled by an operator opi that can be executed at the current step (if there are no such transitions, then the process P suspends until such a transition appears)
– executes the operator opi, and then
– moves to the state si+1 which is the end of the selected transition
· if there is no transition in RP starting at si, then the process completes its work.

For each i ≥ 0 the evaluation ξi+1 is determined

· by the evaluation ξi, and
· by the operator opi which is executed at the i-th step of an execution of the process P.

A relationship between ξi, ξi+1, and opi has the following form:

1. if opi = α?x, and in the execution of this operator a message v was inputted, then
ξi+1(x) = v
∀ y ∈ XP \ {x, atP} : ξi+1(y) = ξi(y)

2. if opi = α!e, then in the execution of this operator the message ξi(e) is outputted, and values of variables from XP \ {atP} are not changed:
∀ x ∈ XP \ {atP} : ξi+1(x) = ξi(x)

3. if opi = (x := e), then
ξi+1(x) = ξi(e)
∀ y ∈ XP \ {x, atP} : ξi+1(y) = ξi(y)

4. if opi = b and ξi(b) = 1, then
∀ x ∈ XP \ {atP} : ξi+1(x) = ξi(x)

We assume that for each i ≥ 0 the value of the variable atP with respect to the evaluation ξi is equal to the state s ∈ SP in which the process P is located at step i, i.e.

· ξ0(atP) = s0P
· ξ1(atP) = s1, where s1 is an end of the first transition
· ξ2(atP) = s2, where s2 is an end of the second transition
· etc.

7.4 Representation of processes by flowcharts

In order to increase visibility, a process can be represented as a flowchart. The language of flowcharts originated in programming, where use of this language can greatly facilitate a description and understanding of algorithms and programs.

7.4.1 The notion of a flowchart

A flowchart is a directed graph, each node n of which

· is associated with an operator op(n), and
· is depicted as one of the following geometric figures: a rectangle, an oval, or a circle, inside of which a label indicating op(n) can be contained.

An operator op(n) can have one of the following forms.
initial operator: an oval labelled

start Init    (7.7)

with one outgoing edge, where Init ∈ B is a formula, called an initial condition.

assignment operator: a rectangle labelled

x := e    (7.8)

with one or more incoming edges and one outgoing edge, where
· x ∈ Var, and
· e ∈ E, where t(e) = t(x)

conditional operator: an oval labelled

b    (7.9)

with one or more incoming edges and one or two outgoing edges labelled "+" and "−", where b ∈ B.

sending operator: a rectangle labelled

α!e    (7.10)

with one or more incoming edges and one outgoing edge, where
· α ∈ Names is a name (for example, it can be a destination of a message which will be sent), and
· e ∈ E is an expression whose value is a message which will be sent.

receiving operator: a rectangle labelled

α?x    (7.11)

with one or more incoming edges and one outgoing edge, where
· α ∈ Names is a name (for example, it can be an expected source of a message which will be received), and
· x ∈ Var is a variable in which a received message will be recorded.

choice: a circle with one incoming edge and several outgoing edges    (7.12)

join: a circle with several incoming edges and one outgoing edge    (7.13)

Sometimes
· a circle representing this operator, and
· ends of some edges leading to this circle
are not pictured; that is, for example, several edges leading to a join circle can be pictured as edges leading directly into the join's outgoing edge.

halt: a node labelled

halt    (7.14)

with one or more incoming edges and no outgoing edges.

Flowcharts must meet the following conditions:

· there can be only one node of the type (7.7) (this node is called a start node)
· there is only one edge outgoing from nodes of the types (7.7), (7.8), (7.10), (7.11), (7.13)
· there are one or two edges outgoing from nodes of the type (7.9), and
– if there is only one edge outgoing from a node of the type (7.9), then this edge has the label "+", and
– if there are two edges outgoing from a node of the type (7.9), then one of them has the label "+" and the other has the label "−"
· there is only one edge leading to a node of the type (7.12)
· there are no edges outgoing from a node of the type (7.14)

7.4.2 An execution of a flowchart

An execution of a flowchart is a sequence of transitions

· from one node to another along edges,
· starting from the start node n0, and
· with an execution of operators which correspond to visited nodes.

In more detail: each step i ≥ 0 of an execution of a flowchart is associated with some node ni called a current node, and

· if ni is not of the type (7.14), then after an execution of an operator corresponding to the node ni a transition is performed along an edge outgoing from ni to a node which will be the current node at the next step of the execution
· if ni is of the type (7.14), then the execution of the flowchart is completed.

Let X be the set of all variables occurring in the flowchart. At each step i of an execution (i = 0, 1, . . .) each variable x ∈ X is associated with a value ξi(x). The family {ξi(x) | x ∈ X}

· is denoted by ξi, and
· is called an evaluation of variables of the flowchart at the i-th step of its execution.

The evaluation ξ0 must meet the initial condition Init, i.e. the following statement must be true:

ξ0(Init) = 1

An operator op(ni) associated with a current node ni is executed as follows.

· If op(ni) has the type (7.8), then the value ξi(e) is recorded in x, i.e.
ξi+1(x) =def ξi(e)
∀ y ∈ X \ {x} : ξi+1(y) =def ξi(y)

· If op(ni) has the type (7.9), then
– if ξi(b) = 1, then a transition along the edge outgoing from ni with the label "+" is performed
– if ξi(b) = 0, and there is an edge outgoing from ni with the label "−", then a transition along this edge is performed
– if ξi(b) = 0, and there is no edge outgoing from ni with the label "−", then an execution of op(ni) is impossible.

· If op(ni) has the type (7.10), then an execution of this operator consists of a sending of the object

(α, ξi(e))    (7.15)

if it is possible. If a sending of the object (7.15) is impossible, then an execution of op(ni) is impossible.

· If op(ni) has the type (7.11), then an execution of this operator consists of
– a receiving of an object

(α, v)    (7.16)

(if it is possible), and
– a recording of v in the variable x, i.e.
ξi+1(x) =def v
∀ y ∈ X \ {x} : ξi+1(y) =def ξi(y)

If a receiving of an object (7.16) is impossible, then an execution of op(ni) is impossible.

· If a current node ni is associated with an operator of the type (7.12), then
– among the nodes which are ends of edges outgoing from ni a node n′ is selected which is labelled by an operator that can be executed at the current time, and
– a transition to the node n′ is performed.

If there are several operators which can be executed at the current time, then the selection of the node n′ is performed non-deterministically.

· An operator of the type (7.14) completes an execution of the flowchart.

7.4.3 Construction of a process defined by a flowchart

An algorithm for a construction of a process defined by a flowchart has the following form.

1. On every edge of the flowchart a point is selected.

2. For
· each node n of the flowchart which has neither the type (7.12) nor the type (7.13), and
· each pair F1, F2 of edges of the flowchart such that F1 is incoming to n and F2 is outgoing from n

the following actions are performed:

(a) an arrow f is drawn from the point on F1 to the point on F2
(b) a label label(f) is drawn on the arrow f, defined as follows:
i. if op(n) has the type (7.8), then label(f) =def (x := e)
ii. if op(n) has the type (7.9), and the edge F2 outgoing from n has the label "+", then label(f) =def b
iii. if op(n) has the type (7.9), and the edge F2 outgoing from n has the label "−", then label(f) =def ¬b
iv. if op(n) has the type (7.10) or (7.11), then label(f) =def op(n).

3. For each node n of the type (7.12) and each edge F outgoing from n, the following actions are performed. Let
· p be the point on the edge incoming to n,
· p′ be the point on F,
· n′ be the end of F, and
· p′′ be a point on an edge outgoing from n′.

Then
· each arrow from p′ to p′′ is replaced by an arrow from p to p′′ with the same label, and
· the point p′ is removed.

4. For each node n of the type (7.13) and each edge F incoming to n, the following actions are performed. Let
· p be the point on the edge outgoing from n,
· p′ be the point on F,
· n′ be the start of F, and
· p′′ be a point on an edge incoming to n′.

Then
· each arrow from p′′ to p′ is replaced by an arrow from p′′ to p with the same label, and
· the point p′ is removed.

5. The states of the constructed process are the remaining points.

6. The initial state s0P is defined as follows.
· If the point which was selected on the edge outgoing from the start node of the flowchart was not removed, then s0P is this point.
· If this point was removed, then the end of the edge outgoing from the start node of the flowchart is a node n of the type (7.13). In this case, s0P is the point on the edge outgoing from n.

7. Transitions of the process correspond to the drawn arrows: for each such arrow f the process contains a transition

s1 --label(f)--> s2

where s1 and s2 are the start and the end of the arrow f respectively.

8. The set of variables of the process consists of
· all variables occurring in any operator of the flowchart, and
· the variable atP.

9. The initial condition of the process coincides with the initial condition Init of the flowchart.

7.5 An example of a process with a message passing

In this section we consider a process "buffer" as an example of a process with a message passing:

· at first, we define this process by a flowchart, and
· then we transform this flowchart to a standard graph representation of a process.

7.5.1 The concept of a buffer

A buffer is a system which has the following properties.

· It is possible to input messages to a buffer. A message which is entered to the buffer is stored in the buffer. Messages which are stored in a buffer can be extracted from the buffer. We assume that a buffer can store not more than a given number of messages. If n is such a number, then we shall denote the buffer as Buffer_n.

· At each time a list of messages

c1, . . . , ck   (0 ≤ k ≤ n)    (7.17)

stored in Buffer_n is called a content of the buffer. The number k in (7.17) is called a size of this content. The case k = 0 corresponds to the situation when the content of the buffer is empty.

· If at the current time the content of Buffer_n has the form (7.17), and k < n, then
– the buffer can accept any message, and
– after an execution of the action of an input of a message c the content of the buffer becomes c1, . . . , ck, c

· If at the current time the content of Buffer_n has the form (7.17), and k > 0, then
– it is possible to extract the message c1 from the buffer, and
– after an execution of this operation the content of the buffer becomes c2, . . . , ck

Thus, at each time the content of a buffer is a queue of messages, and

· each action of an input of a message to the buffer adds this message to the end of the queue, and
· each action of an output of a message from the buffer
– extracts the first message of this queue, and
– removes this message from the queue.

A queue with the above operations is called a queue of the type FIFO (First In, First Out).
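The FIFO discipline described above can be sketched as a small class (an illustration; the class and method names are invented) that enforces both the bound n and the First In, First Out order.

```python
class Buffer:
    """A bounded FIFO buffer: Buffer(n) stores at most n messages."""

    def __init__(self, n):
        assert n > 0
        self.n = n
        self.q = []                      # the content c1, ..., ck (k <= n)

    def put(self, c):
        """The input action: possible only when k < n."""
        if len(self.q) >= self.n:
            return False                 # the buffer refuses the message
        self.q.append(c)                 # c is added to the end of the queue
        return True

    def get(self):
        """The output action: extracts c1; possible only when k > 0."""
        if not self.q:
            return None
        return self.q.pop(0)             # the first message is removed

b = Buffer(2)
print(b.put('x'), b.put('y'), b.put('z'))  # True True False: the bound n = 2
print(b.get(), b.get())                    # x y: FIFO order
```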

7.5.2 Representation of a buffer by a flowchart

In this section we present a formal description of the concept of a buffer by a flowchart. In this flowchart

· the operation of an input of a message to the buffer is represented by an action with the name In, and
· the operation of an output of a message from the buffer is represented by an action with the name Out.

The flowchart has the following variables:

· the variable n of the type int; its value does not change and is equal to the maximal size of a content of the buffer
· the variable k of the type int; its value is equal to the size of the content of the buffer at the current time
· the variable f of the type mes; this variable will store messages that come to the buffer
· the variable q of the type list; this variable will store the content of the buffer.

A flowchart representing a behavior of a buffer has the following form (notations used in this flowchart were defined in section 7.2.3):

[Flowchart: from the start (with the initial condition n > 0, q = ⟨⟩, k = 0) control reaches a branching point with the guards k < n and k > 0; the branch k < n performs In ? f, q := q · [f], k := k + 1, and the branch k > 0 performs Out ! q̂, q := q̌, k := k - 1 (q̂ denotes the first element of the list q, and q̌ denotes the rest of this list); both branches return to the branching point.]

7.5.3 Representation of a buffer as a process

To construct a process Buffer n , which corresponds to the above flowchart, we select points at its edges:


[The same flowchart, with the points A, B, C, D, E, F, G, H, K, L, M, N, O, P selected on its edges.]

In the construction of a process defined by this flowchart, the points A, G, H, K and N will be removed. A standard graph representation of the process Buffer n is the following.

[Graph representation of the process Buffer n , with the states B, C, D, E, F, L, M, O, P: the input branch is guarded by k < n and performs In ? f, q := q · [f], k := k + 1; the output branch is guarded by k > 0 and performs Out ! q̂, q := q̌, k := k - 1.]

7.6 Operations on processes with a message passing

Operations on processes with a message passing are similar to the operations considered in chapter 3.

7.6.1 Prefix action

Let P be a process, and op be an operator. The process op.P is obtained from P by adding
· a new state s, which will be the initial state of op.P,
· a new transition s --op--> s0, where s0 is the initial state of P, and
· all variables from op.


7.6.2 Alternative composition

Let P1, P2 be processes such that S_P1 ∩ S_P2 = ∅. Define a process P1 + P2, which is called an alternative composition of P1 and P2, as follows.
· The sets of its states and transitions and its initial state are determined in the same way as the corresponding components of an alternative composition in chapter 3 (section 3.3).
· X_{P1+P2} =def X_P1 ∪ X_P2
· I_{P1+P2} =def I_P1 ∧ I_P2
If S_P1 ∩ S_P2 ≠ ∅, then for a construction of the process P1 + P2 it is necessary
· to replace in S_P2 those states that are also in P1 with new states, and
· to modify accordingly the other components of P2.
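The state-renaming step described in the last two bullets can be sketched as follows. This is an illustration only: states are modelled as plain strings and transitions as (start, label, end) triples, which is a representation choice, not notation from the text.

```python
from itertools import count

# Sketch of the renaming step used before forming P1 + P2: every state of
# P2 that also occurs in P1 is replaced by a fresh state, and the
# transitions of P2 are rewritten accordingly.

def rename_clashes(states1, states2, transitions2):
    fresh = ('s%d' % i for i in count())      # s0, s1, s2, ... candidates
    used = states1 | states2
    ren = {}
    for s in states2:
        if s in states1:                      # clash: pick an unused name
            t = next(f for f in fresh if f not in used)
            ren[s] = t
            used.add(t)
        else:
            ren[s] = s                        # no clash: keep the state
    new_states = {ren[s] for s in states2}
    new_trans = [(ren[a], op, ren[b]) for (a, op, b) in transitions2]
    return new_states, new_trans

ns, nt = rename_clashes({'A', 'B'}, {'B', 'C'}, [('B', 'op', 'C')])
assert ns == {'s0', 'C'} and nt == [('s0', 'op', 'C')]
```

The same renaming would also have to be applied to the other components of P2 (e.g. its initial state), which is omitted here.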

7.6.3 Parallel composition

Let P1 and P2 be processes such that X_P1 ∩ X_P2 = ∅. Define a process P1 | P2, which is called a parallel composition of P1 and P2, as follows:
· the set of its states and its initial state are defined in the same way as the corresponding components of the process P1 | P2 in chapter 3
· X_{P1|P2} =def X_P1 ∪ X_P2
· I_{P1|P2} =def I_P1 ∧ I_P2
· the set of transitions of the process P1 | P2 is defined as follows:
  - for each transition s1 --op--> s'1 of the process P1 and each state s of the process P2, the process P1 | P2 contains the transition (s1, s) --op--> (s'1, s)
  - for each transition s2 --op--> s'2 of the process P2 and each state s of the process P1, the process P1 | P2 contains the transition (s, s2) --op--> (s, s'2)
  - for each pair of transitions of the form

    s1 --op1--> s'1 ∈ R_P1,    s2 --op2--> s'2 ∈ R_P2

    where one of the operators op1, op2 has the form α ? x, and the other has the form α ! e, where t(x) = t(e) (the names in both operators are equal), the process P1 | P2 contains the transition

    (s1, s2) --x := e--> (s'1, s'2)

If X_P1 ∩ X_P2 ≠ ∅, then before a construction of the process P1 | P2 it is necessary to replace the variables which occur in both processes with new variables.
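The three kinds of transitions of P1 | P2 (two interleavings plus the "diagonal" handshake that turns a matching α ? x / α ! e pair into the assignment x := e) can be enumerated mechanically. In this sketch, operators are encoded as tuples ('in', name, var), ('out', name, expr) or ('tau', label); this encoding is an assumption made for the illustration, not notation from the text.

```python
# Sketch of the transition construction for P1 | P2.
# Transitions are triples (state, op, state).

def parallel(states1, trans1, states2, trans2):
    result = []
    for (s1, op, t1) in trans1:                     # P1 moves, P2 stays
        for s in states2:
            result.append(((s1, s), op, (t1, s)))
    for (s2, op, t2) in trans2:                     # P2 moves, P1 stays
        for s in states1:
            result.append(((s, s2), op, (s, t2)))
    for (s1, op1, t1) in trans1:                    # diagonal handshakes
        for (s2, op2, t2) in trans2:
            for (a, b) in ((op1, op2), (op2, op1)):
                # a matching pair name?x / name!e becomes x := e
                if a[0] == 'in' and b[0] == 'out' and a[1] == b[1]:
                    assign = ('tau', '%s := %s' % (a[2], b[2]))
                    result.append(((s1, s2), assign, (t1, t2)))
    return result

t = parallel({'u'}, [('u', ('in', 'c', 'x'), 'u')],
             {'v'}, [('v', ('out', 'c', 'e0'), 'v')])
assert len(t) == 3                                  # 2 interleavings + 1 handshake
assert (('u', 'v'), ('tau', 'x := e0'), ('u', 'v')) in t
```

A restriction operation would afterwards delete the individual in/out moves on the synchronized name, leaving only the handshake.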

7.6.4 Restriction and renaming

Definitions of these operations are the same as the definitions of the corresponding operations in chapter 3.

7.7 Equivalence of processes

7.7.1 The concept of a concretization of a process

Let P be a process. We shall denote by Conc(P) a process in the original sense of this concept (see section 2.4), which is called a concretization of the process P, and has the following components.


1. States of Conc(P) are
· all evaluations from Eval(X_P), and
· an additional state s0, which is the initial state of Conc(P).
2. For
· each transition s1 --op--> s2 of the process P, and
· each evaluation ρ ∈ Eval(X_P) such that ρ(at_P) = s1,
Conc(P) has a transition ρ --a--> ρ' if ρ'(at_P) = s2 and one of the following conditions is satisfied:
· op = α ? x, a = α ? v, where v ∈ D_{t(x)}, ρ'(x) = v, and ∀ y ∈ X_P \ {x, at_P}: ρ'(y) = ρ(y)
· op = α ! e, a = α ! ρ(e), and ∀ x ∈ X_P \ {at_P}: ρ'(x) = ρ(x)
· op = (x := e), a = τ, ρ'(x) = ρ(e), and ∀ y ∈ X_P \ {x, at_P}: ρ'(y) = ρ(y)
· op = ⟨b⟩, ρ(b) = 1, a = τ, and ∀ x ∈ X_P \ {at_P}: ρ'(x) = ρ(x)
3. For
· each evaluation ρ ∈ Eval(X_P) such that ρ(I_P) = 1,
· and each transition of Conc(P) of the form ρ --a--> ρ',
Conc(P) has the transition s0 --a--> ρ'.

From the definitions of
· the concept of an execution of a process with a message passing (see section 7.3.5), and
· the concept of an execution of a process in the original sense (see section 2.4),
it follows that there is a one-to-one correspondence between
· the set of all variants of an execution of the process P, and
· the set of all variants of an execution of Conc(P).
The reader is invited to investigate the commutativity of the mapping Conc with respect to the operations on processes, i.e. to check statements of the form Conc(P1 | P2) = Conc(P1) | Conc(P2), etc.

7.7.2 Definition of equivalences of processes

We define that a pair (P1, P2) of processes with a message passing is in one of the equivalences (∼, ≈, ≈+, . . .) if and only if the pair of concretizations of these processes is in this equivalence, i.e.

P1 ∼ P2  ⟺  Conc(P1) ∼ Conc(P2),

etc.
The reader is invited to
· explore the relationship of the operations on processes with the various equivalences (∼, ≈, ≈+, . . .), i.e. to establish properties similar to the properties presented in sections 3.7, 4.5, 4.8.4, 4.9.5
· formulate and prove necessary and sufficient conditions of equivalence (∼, ≈, ≈+, . . .) of processes that do not use the concept of a concretization of a process.


7.8 Processes with composite operators

7.8.1 A motivation of the concept of a process with composite operators

The complexity of the problem of analysis of a process essentially depends on the size of its description (in particular, on the number of its states). Therefore, for a construction of efficient algorithms of analysis of processes it is necessary to find methods to decrease the complexity of descriptions of analyzed processes. In this section we consider one such method.
We generalize the concept of a process to the concept of a process with composite operators. A composite operator is a sequential composition of several operators. Because we combine a sequence of operators into a single composite operator, we are able to exclude from a description of a process those states which are located on the intermediate steps of this sequence of operators.
Also in this section we define the concept of a reduction of processes with composite operators, in such a way that a reduced process
· has a less complicated description than the original process, and
· is equivalent (in some sense) to the original process.
With the use of the above concepts, the problem of analysis of a process can be solved as follows.
1. First, we transform an original process P to a process P' with composite operators, which is similar to P.
2. Then we reduce P', getting a process P'', whose complexity can be significantly less than the complexity of the original process P.
3. After this, we
· perform an analysis of P'', and
· use the results of this analysis for drawing a conclusion about properties of the original process P.


7.8.2 A concept of a composite operator

A composite operator (CO) is a finite sequence Op of operators

Op = (op1, . . . , opn)    (n ≥ 1)    (7.18)

which has the following properties.
1. op1 is a conditional operator.
2. The sequence (op2, . . . , opn)
· does not contain conditional operators, and
· contains no more than one input or output operator.
If Op is a CO of the form (7.18), then we shall denote by cond(Op) the formula b such that op1 = ⟨b⟩.
Let Op be a CO.
· Op is said to be an input CO (or an output CO) if among the operators belonging to Op there is an input (or an output) operator.
· Op is said to be an internal CO if all operators belonging to Op are internal.
· If Op is an input CO (or an output CO), then the notation name(Op) denotes the name occurring in Op.
· If ρ is an evaluation of the variables occurring in cond(Op), then we say that Op is open on ρ if ρ(cond(Op)) = 1.
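The two structural requirements on a CO are easy to check mechanically. In the following sketch an operator is represented only by its kind, one of the strings 'cond', 'assign', 'in', 'out' (this encoding is an assumption made for the illustration):

```python
# Sketch of the well-formedness check for a composite operator:
# the first operator must be conditional, and the remaining operators
# must contain no conditionals and at most one input or output.

def is_composite_operator(kinds):
    if not kinds or kinds[0] != 'cond':
        return False                     # property 1 fails
    rest = kinds[1:]
    if 'cond' in rest:
        return False                     # property 2, first clause, fails
    return sum(1 for k in rest if k in ('in', 'out')) <= 1

assert is_composite_operator(['cond', 'assign', 'in', 'assign'])
assert not is_composite_operator(['assign'])             # must start with a guard
assert not is_composite_operator(['cond', 'in', 'out'])  # two I/O operators
```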

7.8.3 A concept of a process with COs

The concept of a process with COs differs from the concept of a process in section 7.3.4 only in the following: labels of transitions of a process with COs are COs.


7.8.4 An execution of a process with COs

An execution of a process with COs
· is defined in much the same way as an execution of a process in section 7.3.5, and
· is also a traversal of the set of its states,
  - starting from the initial state, and
  - with an execution of the COs which are labels of the visited transitions.
Let P = (X_P, I_P, S_P, s0_P, R_P) be a process with COs.
At each step i ≥ 0 of an execution of P
· the process P is in some state si (s0 = s0_P)
· there is defined an evaluation ρi of variables from X_P (ρ0(I_P) = 1, ρi(at_P) = si)
· if there is a transition from R_P starting at si, then the process
  - selects a transition starting at si which is labelled by a CO Opi with the following properties:
    Opi is open on ρi;
    if among the operators occurring in Opi there is an operator of the form α ? x or α ! e, then the process P can at the current time execute an action of the form α ? v or α ! v respectively
    (if there is no such transition, then the process P suspends until such a transition appears)
  - executes sequentially all operators occurring in Opi, with a corresponding modification of the current evaluation after the execution of each operator, and thereafter
  - turns to the state s_{i+1}, which is the end of the selected transition
· if there is no transition in R_P starting at si, then the process completes its work.


7.8.5 Operations on processes with COs

Definitions of operations on processes with COs almost coincide with the corresponding definitions in section 7.6, so we only point out the differences in these definitions.
· In the definitions of all operations on processes with COs, COs are mentioned instead of operators.
· The definitions of the operation " | " differ only in the item related to the description of "diagonal" transitions. For processes with COs this item has the following form: for each pair of transitions of the form

s1 --Op1--> s'1 ∈ R_P1,    s2 --Op2--> s'2 ∈ R_P2

where one of the COs Op1, Op2 has the form (op1, . . . , opi, α ? x, op_{i+1}, . . . , opn), and the other has the form (op'1, . . . , op'j, α ! e, op'_{j+1}, . . . , op'm), where
  - t(x) = t(e),
  - the subsequences (op_{i+1}, . . . , opn) and (op'_{j+1}, . . . , op'm) may be empty,
the process P1 | P2 has the transition

(s1, s2) --Op--> (s'1, s'2)

where Op has the form

(⟨cond(Op1) ∧ cond(Op2)⟩, op2, . . . , opi, op'2, . . . , op'j, (x := e), op_{i+1}, . . . , opn, op'_{j+1}, . . . , op'm).


7.8.6 Transformation of processes with a message passing to processes with COs

Each process with a message passing can be transformed to a process with COs by a replacement of the labels of its transitions: for each transition

s1 --op--> s2

its label op is replaced by the CO Op, defined as follows.
· If op is a conditional operator, then Op =def (op).
· If op is
  - an assignment operator, or
  - an input or output operator,
then Op =def (⟨⊤⟩, op) (recall that ⊤ is a true formula).
For each process with a message passing P we denote the corresponding process with COs by the same symbol P.

7.8.7 Sequential composition of COs

In this section we introduce the concept of a sequential composition of COs: for some pairs (Op1, Op2) of COs we define a CO, which is denoted by

Op1 · Op2    (7.19)

and is called a sequential composition of the COs Op1 and Op2. A necessary condition of a possibility to define the sequential composition (7.19) is that at least one of the COs Op1, Op2 is internal.
Below we shall use the following notations.
1. For
· each CO Op = (op1, . . . , opn), and
· each assignment operator op,
the notation Op · op denotes the CO

(op1, . . . , opn, op)    (7.20)

2. For
· each internal CO Op = (op1, . . . , opn), and
· each input or output operator op,
the notation Op · op denotes the CO (7.20).
3. For
· each CO Op = (op1, . . . , opn), and
· each conditional operator op = ⟨b⟩,
the notation Op · op denotes an object that
· either is a CO,
· or is not defined.
This object is defined recursively as follows.
If n = 1, then Op · op =def (⟨cond(Op) ∧ b⟩).
If n > 1, then
· if opn is an assignment operator of the form (x := e), then

Op · op =def ((op1, . . . , op_{n-1}) · opn(op)) · opn    (*)

where
  - opn(op) is the conditional operator obtained from op by a replacement of all occurrences of the variable x with the expression e,
  - if the object (*) is undefined, then Op · op also is undefined
· if opn is an output operator, then Op · op is the CO

((op1, . . . , op_{n-1}) · op) · opn    (7.21)

· if opn is an input operator of the form α ? x, then Op · op
  - is undefined if op depends on x, and
  - is equal to the CO (7.21) otherwise.
Now we can formulate the definition of a sequential composition of COs.
Let Op1, Op2 be COs, and let Op2 have the form Op2 = (op1, . . . , opn). We shall say that there is defined a sequential composition of Op1 and Op2 if the following conditions are met:
· at least one of the COs Op1, Op2 is internal
· there are no undefined objects in the parentheses in the expression

(. . . ((Op1 · op1) · op2) · . . .) · opn    (7.22)

If these conditions are met, then the sequential composition of Op1 and Op2 is the value of expression (7.22). This CO is denoted by Op1 · Op2.
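The key step above is pushing a conditional operator ⟨b⟩ backwards through an assignment (x := e) by substituting e for x in b. The following sketch implements only this step; formulas and expressions are Python expression strings and the substitution is a naive textual one (both assumptions made for this illustration, not the book's machinery):

```python
import re

# Sketch of the composition CO . <b> for an internal CO whose operators are
# ('cond', formula) and ('assign', var, expr) tuples, following the
# recursive definition in the text.

def subst(formula, var, expr):
    """Replace every whole-word occurrence of `var` in `formula` by `(expr)`."""
    return re.sub(r'\b%s\b' % re.escape(var), '(%s)' % expr, formula)

def compose_with_cond(co, b):
    """Return CO . <b>, or None when the composition is undefined here."""
    if len(co) == 1:                       # base case: only the guard remains
        _, f = co[0]
        return [('cond', '(%s) and (%s)' % (f, b))]
    last = co[-1]
    if last[0] == 'assign':                # push <b> through the assignment
        _, x, e = last
        head = compose_with_cond(co[:-1], subst(b, x, e))
        return None if head is None else head + [last]
    return None                            # inputs/outputs not handled here

op = [('cond', 'k < n'), ('assign', 'k', 'k + 1')]
print(compose_with_cond(op, 'k > 0'))
```

For the example above, the guard k > 0 becomes (k + 1) > 0 after passing through k := k + 1, and is then conjoined with k < n, matching the definition for the assignment case.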

7.8.8 Reduction of processes with COs

Let P be a process with COs. A reduction of P is a sequence

P = P0 --> P1 --> . . . --> Pn    (7.23)

of transformations of this process, each of which is performed according to one of the reduction rules described below. Each of these transformations (except the first) is made on the result of the previous transformation. The result of the reduction (7.23) is the result of the last transformation (i.e. the process Pn).
Reduction rules have the following form.


Rule 1 (sequential composition). Let s be a state of a process with COs, which is not the initial state, such that
· the set of all transitions of this process with the end s has the form

s1 --Op1--> s,  . . . ,  sn --Opn--> s

· the set of all transitions of this process with the start s has the form

s --Op'1--> s'1,  . . . ,  s --Op'm--> s'm

· s ∉ {s1, . . . , sn, s'1, . . . , s'm}
· for each i = 1, . . . , n and each j = 1, . . . , m there is defined the sequential composition Opi · Op'j.
Then this process can be transformed to a process
· whose states are the states of the original process, with the exception of s
· whose transitions are
  - the transitions of the original process whose start and end differ from s, and
  - the transitions of the form

    si --Opi · Op'j--> s'j

    for each i = 1, . . . , n and each j = 1, . . . , m
· whose
  - initial state, and also
  - set of variables, and
  - initial condition
coincide with the corresponding components of the original process.

Rule 2 (gluing). Let P be a process with COs which has two transitions with a common start and a common end:

s1 --Op--> s2,    s1 --Op'--> s2    (7.24)

and the labels of these transitions differ only in the first components, i.e. Op and Op' have the form

Op = (⟨b⟩, op2, . . . , opn),    Op' = (⟨b'⟩, op2, . . . , opn)

Rule 2 is a replacement of the pair of transitions (7.24) by the transition

s1 --Op''--> s2

where Op'' = (⟨cond(Op) ∨ cond(Op')⟩, op2, . . . , opn).

Rule 3 (removal of inessential assignments). Let
· P be a process with COs, and
· op(P) be the set of all operators occurring in the COs of P.
A variable x ∈ X_P is said to be inessential if
· x does not occur in
  - conditional operators, and
  - output operators in op(P),
· and if x has an occurrence in the right side of an assignment operator from op(P) of the form (y := e), then the variable y is inessential.
Rule 3 is the removal from all COs of all assignment operators of the form (x := e), where the variable x is inessential.

7.8.9 An example of a reduction

In this section we consider a reduction of the process Buffer n , the graph representation of which is given in section 7.5.3. Below we use the following agreements.
· If Op is a CO such that cond(Op) = ⊤, then the first operator in this CO will be omitted.
· Operators in COs can be placed vertically.
· The brackets embracing the sequence of operators constituting a CO can be omitted.
The original process Buffer n has the following form:
[The graph representation of the process Buffer n from section 7.5.3 is repeated here.]

The first reduction step is the removal of the state C (we apply rule 1 with s = C):

[Graph after the removal of the state C: the transition from B to D is now labelled by the composed CO with the condition (k < n) ∧ (k ≤ 0).]

Since n > 0, the formula (k < n) ∧ (k ≤ 0) in the label of the transition from B to D can be replaced by the equivalent formula k ≤ 0. The second and third reduction steps are the removal of the states O and P:
[Graph after the removal of the states O and P: the assignments q := q · [f], k := k + 1 and q := q̌, k := k - 1 are now parts of the composed COs.]


The fourth and fifth reduction steps are the removal of the states D and E:
[Graph after the removal of the states D and E.]
The sixth reduction step is the removal of the state F:
[Graph after the removal of the state F: only the states B, L and M remain.]


The seventh and eighth reduction steps consist of an application of rule 2 to the transitions from B to L and from B to M. In the resulting process, we replace
· the formula (0 < k < n) ∨ (k ≤ 0) by the equivalent formula k < n, and
· the formula (0 < k < n) ∨ (k ≥ n) by the equivalent formula k > 0.

[Graph with the states B, L and M after the gluing steps.]

The ninth and tenth reduction steps are the removal of the states L and M. The resulting process consists of the single state B with two loop transitions, labelled by the COs

⟨k < n⟩, In ? f, q := q · [f], k := k + 1        ⟨k > 0⟩, Out ! q̂, q := q̌, k := k - 1        (7.25)

The last process is the result of the reduction of Buffer n .
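The reduced process (7.25) is small enough to simulate directly. The following sketch runs its two guarded loops on a sequence of requests (the request encoding is a choice made for this illustration) and, as expected, behaves as a FIFO of capacity n:

```python
# Sketch simulating the reduced process (7.25): a single state with two
# guarded loop transitions.  Requests that fail their guard are ignored.

def run_reduced_buffer(n, requests):
    k, q, outputs = 0, [], []
    for req in requests:
        if req[0] == 'in' and k < n:          # guard k < n, action In ? f
            q = q + [req[1]]                  # q := q . [f]
            k += 1                            # k := k + 1
        elif req == 'out' and k > 0:          # guard k > 0, action Out ! q^
            outputs.append(q[0])              # output the first element of q
            q = q[1:]                         # q := rest of q
            k -= 1                            # k := k - 1
    return outputs

# third 'in' is blocked (k = n), third 'out' is blocked (k = 0)
assert run_reduced_buffer(2, [('in', 1), ('in', 2), ('in', 3),
                              'out', 'out', 'out']) == [1, 2]
```

Running the same request sequences against a simulation of the unreduced Buffer n would give the same outputs, which is the point of the equivalence claims in the next sections.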



7.8.10 A concretization of processes with COs

The concept of a concretization of processes with COs is similar to the concept of a concretization of processes with a message passing (see section 7.7.1).
Let P be a process with COs. The notation Conc(P) denotes a process in the original sense of this concept (see section 2.4), which is called a concretization of the process P, and has the following components.
1. States of Conc(P) are
· all evaluations from Eval(X_P), and
· an additional state s0, which is the initial state of Conc(P).
2. For
· each transition s1 --Op--> s2 of the process P, and
· each evaluation ρ ∈ Eval(X_P) such that
  - ρ(at_P) = s1, and
  - Op is open on ρ,
Conc(P) has the transition ρ --a--> ρ' if ρ'(at_P) = s2 and one of the following cases holds:
(a) Op is internal, a = τ, and the statement ρ --Op--> ρ' holds, which means the following: if Op has the form (op1, . . . , opn), then there is a sequence ρ1, . . . , ρn of evaluations from Eval(X_P) such that
  · ∀ x ∈ X_P \ {at_P}: ρ(x) = ρ1(x) and ρ'(x) = ρn(x), and
  · ∀ i = 2, . . . , n: if opi has the form (x := e), then ρi(x) = ρ_{i-1}(e) and ∀ y ∈ X_P \ {x, at_P}: ρi(y) = ρ_{i-1}(y)
(b) · Op = Op1 · (α ? x) · Op2,
  · a = α ? v, where v ∈ D_{t(x)}, and
  · there are evaluations ρ1 and ρ2 from Eval(X_P) such that

    ρ --Op1--> ρ1,   ρ2 --Op2--> ρ',   ρ2(x) = v,   ∀ y ∈ X_P \ {x, at_P}: ρ2(y) = ρ1(y)

(c) · Op = Op1 · (α ! e) · Op2,
  · there is an evaluation ρ1 from Eval(X_P) such that

    ρ --Op1--> ρ1,   ρ1 --Op2--> ρ',   a = α ! ρ1(e)

3. For
· each evaluation ρ ∈ Eval(X_P) such that ρ(I_P) = 1,
· and each transition of Conc(P) of the form ρ --a--> ρ',
Conc(P) has the transition s0 --a--> ρ'.
The reader is invited to investigate the relationship between
· a concretization of an arbitrary process with a message passing P, and
· a concretization of a process with COs which is derived by a reduction of the process P.

7.8.11 Equivalences on processes with COs

Let P1 and P2 be processes with COs. We shall say that P1 and P2 are observationally equivalent, and denote this fact by P1 ≈ P2, if the concretizations Conc(P1) and Conc(P2) are observationally equivalent in the original sense of this concept (see section 4.8). Similarly, the equivalence ≈+ is defined on processes with COs.
Using the concept of a reduction of processes with COs, it is possible to define another equivalence on the set of processes with COs. This equivalence
· is denoted by ∼r, and
· is the minimal congruence on the set of processes with COs with the following property: if P' is derived from P by any reduction rule, then P ∼r P'
(i.e. ∼r is the intersection of all congruences on the set of processes with COs which have the above property).
The reader is invited
· to investigate the relation between
  - operations on processes with COs, and
  - the equivalences ≈ and ≈+,
i.e. to establish properties similar to the properties presented in sections 3.7, 4.5, 4.8.4, 4.9.5
· to formulate and justify necessary and sufficient conditions of observational equivalence of processes with COs, without use of the concept of a concretization
· to explore the relationship between the equivalences ∼r, ≈ and ≈+
· to find reduction rules for which the resulting equivalence ∼r is contained in ≈+.

7.8.12 A method of a proof of observational equivalence of processes with COs

One of the possible methods of a proof of observational equivalence of processes with COs is based on theorem 34 presented below. To formulate this theorem, we introduce auxiliary concepts and notations.
1. Let P be a process with COs. A composite transition (CT) in P is a (possibly empty) sequence CT of transitions of the process P of the form

CT = ( s0 --Op1--> s1 --Op2--> . . . --Opn--> sn )    (n ≥ 0)    (7.26)

such that
· among the COs Op1, . . . , Opn there is no more than one input or output CO
· there is defined the sequential composition

(. . . (Op1 · Op2) · . . .) · Opn

which will be denoted by the same symbol CT.
If the sequence (7.26) is empty, then its sequential composition CT is by definition the CO (⟨⊤⟩).
The state s0 is said to be the start of CT (7.26), and the state sn is said to be the end of this CT. The notation s0 --CT--> sn is an abridged record of the statement that CT
· is a CT with the start s0 and the end sn, and also
· is the CO that corresponds to this CT.
2. Let φ and ψ be formulas. The notation φ ≤ ψ is an abridged record of the statement that the formula φ → ψ is true.
3. Let Op = (op1, . . . , opn) be an internal CO, and φ be a formula. The notation Op(φ) denotes a formula defined recursively:

Op(φ) =def  cond(Op) → φ,                         if n = 1
            (op1, . . . , op_{n-1}) (opn(φ)),     if n > 1

where opn(φ) denotes the following formula: if opn = (x := e), then opn(φ) is obtained from φ by a replacement of each occurrence of the variable x with the expression e.
4. Let φ, ψ be formulas, and Op1, Op2 be COs.



We shall say that the following diagram holds

         φ
    A ------- B
    |         |
   Op1       Op2        (7.27)
    v         v
    C ------- D
         ψ

if one of the following conditions is met.
(a) Op1 and Op2 are internal COs, and the following inequality holds:

φ ≤ (Op1 · Op2)(ψ)

(b) Op1 and Op2 can be represented as sequential compositions

Op1 = Op3 · (α ? x) · Op4
Op2 = Op5 · (β ? y) · Op6

where Op3, Op4, Op5, Op6 are internal COs, and the following inequality holds:

φ ≤ (Op'1 · Op'2)(ψ)

where
· Op'1 = Op3 · (x := z) · Op4
· Op'2 = Op5 · (y := z) · Op6
· z is a new variable (i.e. z does not occur in φ, ψ, Op1, Op2)
(c) Op1 and Op2 can be represented as sequential compositions

Op1 = Op3 · (α ! e1) · Op4
Op2 = Op5 · (β ! e2) · Op6

where Op3, Op4, Op5, Op6 are internal COs, and the following inequality holds:

φ ≤ (Op3 · Op5)(e1 = e2) ∧ (Op3 · Op4 · Op5 · Op6)(ψ)

Theorem 34. Let P1 and P2 be processes with COs

Pi = (X_Pi, I_Pi, S_Pi, s0_Pi, R_Pi)    (i = 1, 2)

which have no common states and no common variables. Then P1 ≈ P2 if there is a function µ of the form

µ : S_P1 × S_P2 → Fm

which has the following properties.
1. I_P1 ∧ I_P2 ≤ µ(s0_P1, s0_P2).
2. For
· each pair (A1, A2) ∈ S_P1 × S_P2, and
· each transition A1 --Op--> A'1 of the process P1 such that

cond(Op) ∧ µ(A1, A2) ≠ ⊥    (7.28)

there is a set of CTs of the process P2 starting from A2

{ A2 --CT_i--> A2^i | i ∈ I }    (7.29)

(for some set I of indexes) satisfying the following conditions:
(a) the following inequality holds:

cond(Op) ∧ µ(A1, A2) ≤ the disjunction over i ∈ I of cond(CT_i)    (7.30)

(b) for each i ∈ I the following diagram holds:

            µ(A1, A2)
    A1 ----------------- A2
    |                    |
    Op                  CT_i        (7.31)
    v                    v
    A'1 ---------------- A2^i
           µ(A'1, A2^i)

3. The property symmetrical to the previous one: for
· each pair (A1, A2) ∈ S_P1 × S_P2, and
· each transition A2 --Op--> A'2 of the process P2 such that (7.28) holds,
there is a set of CTs of the process P1 starting from A1

{ A1 --CT_i--> A1^i | i ∈ I }    (7.32)

satisfying the following conditions:
(a) inequality (7.30) holds
(b) for each i ∈ I the following diagram holds:

            µ(A1, A2)
    A1 ----------------- A2
    |                    |
   CT_i                  Op        (7.33)
    v                    v
    A1^i --------------- A'2
           µ(A1^i, A'2)


7.8.13 An example of a proof of observational equivalence of processes with COs

As an example of a use of theorem 34, we prove that Buffer 1 ≈ Buf, where
· Buffer 1 is the process Buffer n considered above (see (7.25)) for n = 1, i.e. a process with the single state A and the two loop transitions

⟨k < 1⟩, In ? f, q := q · [f], k := k + 1        ⟨k > 0⟩, Out ! q̂, q := q̌, k := k - 1

its initial condition is (k = 0) ∧ (q = ⟨⟩), and
· Buf is a process with two states a and b and the transitions

a --In ? x--> b,    b --Out ! x--> a

The initial condition of this process is ⊤.
Define a function µ : {A} × {a, b} → Fm as follows:

µ(A, a) =def (k = 0) ∧ (q = ⟨⟩)
µ(A, b) =def (k = 1) ∧ (q = [x])

We check properties 1, 2, and 3 for the function µ.
1. Property 1 in this case is the inequality

((k = 0) ∧ (q = ⟨⟩)) ∧ ⊤  ≤  ((k = 0) ∧ (q = ⟨⟩))

which is obviously true.


2. Check property 2.
· For the pair (A, a) we have to consider only the left transition of the process Buffer 1 (because the right transition does not satisfy (7.28)). As (7.29) we take the set consisting of the single transition from a to b. Diagram (7.31) in this case has the form

           (k = 0) ∧ (q = ⟨⟩)
    A ------------------------- a
    |                           |
   ⟨k < 1⟩, In ? f,           In ? x        (7.34)
   q := q · [f], k := k + 1     |
    v                           v
    A ------------------------- b
           (k = 1) ∧ (q = [x])

Using the fact that

∀ φ, ψ, χ ∈ Fm:  φ ≤ (χ → ψ)  ⟺  (φ ∧ χ) ≤ ψ    (7.35)

we write the inequality corresponding to this diagram in the form

(k = 0) ∧ (q = ⟨⟩) ∧ (k < 1)  ≤  (k + 1 = 1) ∧ (q · [z] = [z])    (7.36)

Clearly, this inequality is true.
· For the pair (A, b) we have to consider only the right transition of the process Buffer 1 (because the left transition does not satisfy condition (7.28)). As (7.29) we take the set consisting of the single transition from b to a. Diagram (7.31) in this case has the form

           (k = 1) ∧ (q = [x])
    A ------------------------- b
    |                           |
   ⟨k > 0⟩, Out ! q̂,          Out ! x        (7.37)
   q := q̌, k := k - 1           |
    v                           v
    A ------------------------- a
           (k = 0) ∧ (q = ⟨⟩)

Using (7.35), we write the inequality corresponding to this diagram in the form

(k = 1) ∧ (q = [x]) ∧ (k > 0)  ≤  (q̂ = x) ∧ (k - 1 = 0) ∧ (q̌ = ⟨⟩)    (7.38)

Obviously, this inequality is true.
3. Check property 3.
· For the pair (A, a) and the single transition from a to b, as (7.32) we take the set consisting of the left transition from A to A. Diagram (7.33) in this case has the form (7.34). As already established, this diagram is correct.
· For the pair (A, b) and the single transition from b to a, as (7.32) we take the set consisting of the right transition from A to A. Diagram (7.33) in this case has the form (7.37). As already justified, this diagram is correct.
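"Obviously true" inequalities like (7.36) and (7.38) can also be spot-checked mechanically. The following sketch models lists as Python lists, the first element q̂ as q[0] and the rest q̌ as q[1:] (these encodings are assumptions of the illustration), and tests both implications over small finite domains:

```python
# Brute-force sanity check of the implications written as inequalities
# (7.36) and (7.38) over small finite domains of values.

def implies(p, q):
    return (not p) or q

# (7.36): (k = 0) and (q = <>) and (k < 1)  implies  (k + 1 = 1) and (q.[z] = [z])
assert all(
    implies(k == 0 and q == [] and k < 1,
            k + 1 == 1 and q + [z] == [z])
    for k in range(-2, 3)
    for q in ([], [7], [7, 8])
    for z in (0, 1))

# (7.38): (k = 1) and (q = [x]) and (k > 0)
#          implies  (head(q) = x) and (k - 1 = 0) and (rest(q) = <>)
assert all(
    implies(k == 1 and q == [x] and k > 0,
            len(q) > 0 and q[0] == x and k - 1 == 0 and q[1:] == [])
    for k in (0, 1, 2)
    for q in ([], [5], [5, 6])
    for x in (5, 6))

print('both implications hold on the sampled domains')
```

Such a finite check is of course not a proof, but it catches transcription errors in the inequalities cheaply.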

7.8.14 Additional remarks

To improve the usability of theorem 34, one can use the following notions and statements.

Invariants of processes. Let P be a process with COs. A formula Inv with variables from X_P is said to be an invariant of the process P if it has the following properties.
· I_P ≤ Inv
· for each transition s --Op--> s' of the process P
  - if Op is internal, then Inv ≤ Op(Inv)
  - if Op is an input CO of the form Op1 · (α ? x) · Op2, then

    Inv ≤ (Op1 · (x := z) · Op2)(Inv)

    where z is a variable which does not belong to X_P
  - if Op is an output CO of the form Op1 · (α ! e) · Op2, then

    Inv ≤ (Op1 · Op2)(Inv)

Using the concept of an invariant, theorem 34 can be modified as follows.

Theorem 35. Let
· P1 and P2 be two processes with COs

Pi = (X_Pi, I_Pi, S_Pi, s0_Pi, R_Pi)    (i = 1, 2)

which have no common states and no common variables, and
· the formulas Inv1 and Inv2 be invariants of the processes P1 and P2 respectively.
Then P1 ≈ P2 if there is a function µ of the form

µ : S_P1 × S_P2 → Fm

with the following properties.
1. I_P1 ∧ I_P2 ≤ µ(s0_P1, s0_P2).


2. For
· each pair (A1, A2) ∈ S_P1 × S_P2, and
· each transition A1 --Op--> A'1 of the process P1 such that

cond(Op) ∧ µ(A1, A2) ∧ Inv1 ∧ Inv2 ≠ ⊥    (7.39)

there is a set of CTs of the process P2 with the start A2

{ A2 --CT_i--> A2^i | i ∈ I }    (7.40)

satisfying the following conditions:
(a) the following inequality holds:

cond(Op) ∧ µ(A1, A2) ∧ Inv1 ∧ Inv2 ≤ the disjunction over i ∈ I of cond(CT_i)    (7.41)

(b) for each i ∈ I the following diagram is correct:

        µ(A1, A2) ∧ Inv1 ∧ Inv2
    A1 ------------------------- A2
    |                            |
    Op                          CT_i        (7.42)
    v                            v
    A'1 ------------------------ A2^i
           µ(A'1, A2^i)

3. The property which is symmetrical to the previous one: for
· each pair (A1, A2) ∈ S_P1 × S_P2, and
· each transition A2 --Op--> A'2 of the process P2 such that (7.39) holds,
there is a set of CTs of the process P1 with the start A1

{ A1 --CT_i--> A1^i | i ∈ I }    (7.43)

satisfying the following conditions:
(a) the inequality (7.41) holds
(b) for each i ∈ I the following diagram is correct:

        µ(A1, A2) ∧ Inv1 ∧ Inv2
    A1 ------------------------- A2
    |                            |
   CT_i                          Op        (7.44)
    v                            v
    A1^i ----------------------- A'2
           µ(A1^i, A'2)
Composition of diagrams.

Theorem 36. Let
· φ, ψ, χ be formulas,
· Op1, Op2 be internal COs such that the following diagram is correct:

         φ
    A ------- B
    |         |
   Op1       Op2
    v         v
    C ------- D
         ψ

· Op'1, Op'2 be COs such that the following diagram is correct:

         ψ
    C ------- D
    |         |
   Op'1      Op'2
    v         v
    E ------- F
         χ

· {Op1, Op'1} and {Op2, Op'2} have no common variables.
Then the following diagram is correct:

              φ
    A ----------------- B
    |                   |
   Op1 · Op'1      Op2 · Op'2
    v                   v
    E ----------------- F
              χ

7.8.15

Another example of a pro of of observational equivalence of pro cesses with COs

As an example of a use of theorems from section 7.8.14 prove an observational equivalence of · the process (Buffer n1 [P ass/Out] | Buffer n2 [P ass/I n]) \ {P ass} where P ass {I n, Out}, and · the process Buffer
n1 +n
2

(7.45)

.

Process (7.45) is a sequential composition of two buffers, size of which is n1 and n2 respectively. A flow graph of this process has the form
9 69
n
1

6
n2

In e

Buffer

u P ass e E

Buffer

uO ut 7

8

78

According to the definition of operations on processes with COs (see section 7.8.5), a graph representation of the process (7.45) has the form 208


§ ¦

k1 < n1 I n ? f1 q1 := q1 · [f1 ] k1 := k1 + 1

# EA' "! T

k2 > 0 Out ! q2 ^ q2 := q2 k2 := k2 - 1

¤ ¥

¦¥

(k1 > 0) (k2 < n2 ) f2 := q1 ^ q1 := q1 k1 := k1 - 1 q2 := q2 · [f2 ] k2 := k2 + 1

(7.46)

An initial condition of the process (7.46) is the formula (n1 > 0) (k1 = 0) (q1 = ) (n2 > 0) (k2 = 0) (q2 = ) A graph representation of the process Buffer
k< In ? q := k := n1 + n f q · [f ] k+1
2 n1 +n2

has the form

§ ¦

# Ea' "!

k>0 Out ! q ^ q := q k := k - 1

¤ ¥

The initial condition of the process Buffer n1+n2 is the formula

(n1 + n2 > 0) ∧ (k = 0) ∧ (q = ⟨⟩)

It is easy to verify that the formula


Inv =def (0 ≤ k1 ≤ n1) ∧ (|q1| = k1) ∧ (0 ≤ k2 ≤ n2) ∧ (|q2| = k2) ∧ (n1 > 0) ∧ (n2 > 0)

is an invariant of the process (7.46). This fact follows, in particular, from the statements

|u| > 0 ⟹ |ǔ| = |u| - 1,    |u · [a]| = |[a] · u| = |u| + 1

which hold for each list u and each message a. As the invariant of the second process we take the formula ⊤.
Define a function µ : {A} × {a} → Fm as follows:

µ(A, a) =def (q = q2 · q1) ∧ (k = k2 + k1)

Check properties 1, 2, and 3 for the function µ. 1. Property 1 in this case is the inequality


(n1 > 0) ∧ (k1 = 0) ∧ (q1 = ⟨⟩) ∧ (n2 > 0) ∧ (k2 = 0) ∧ (q2 = ⟨⟩) ∧ (n1 + n2 > 0) ∧ (k = 0) ∧ (q = ⟨⟩)  ≤  (q = q2 · q1) ∧ (k = k2 + k1)

which is obviously true.
2. Check property 2.
· For the left transition of the process (7.46) inequality (7.39) holds. As (7.40) we take the set, the only element of which is the left transition of the process Buffer n1 +n2 . Inequality (7.41) in this case has the form


k1 < n1 q = q2 · q1 (k < n1 + n2 ) k = k2 + k1 I nv that is obviously true. Using (7.35), write an for this case as q = q 2 · q1 k =k +k 2 1 I nv k1 < n1 k





q · [z ] = q2 · q1 · [z ] k + 1 = k2 + k1 + 1

(7.47)

It is easy to check that the last inequality is true. 210


· For the middle (internal) transition of the process (7.46) inequality (7.39) holds. As (7.40) we take the set whose only element is an empty CT of the process Buffer_{n1+n2}. Inequality (7.41) in this case holds for a trivial reason: its right side is ⊤. Using statement (7.35), we write the inequality corresponding to diagram (7.42) for this case in the form

      (q = q2 · q1) ∧ (k = k2 + k1) ∧ Inv ∧ (k1 > 0) ∧ (k2 < n2)
         ≤  (q = (q2 · [q̂1]) · q̄1) ∧ (k = k2 + 1 + k1 − 1)                 (7.48)

This inequality follows from

– the associativity of concatenation, and

– the statement

      |u| > 0  ⇒  u = [û] · ū

which holds for each list u.

· For the right transition of the process (7.46) inequality (7.39) holds. As (7.40) we take the set whose only element is the right transition of the process Buffer_{n1+n2}. Inequality (7.41) in this case has the form

      (k2 > 0) ∧ (q = q2 · q1) ∧ (k = k2 + k1) ∧ Inv  ≤  (k > 0)

which is obviously true. Using the statement (7.35), we write the inequality which corresponds to diagram (7.42) for this case in the form

      (q = q2 · q1) ∧ (k = k2 + k1) ∧ Inv ∧ (k2 > 0) ∧ (k > 0)
         ≤  (q̂2 = q̂) ∧ (q̄ = q̄2 · q1) ∧ (k − 1 = k2 − 1 + k1)               (7.49)

This inequality follows from the statement

      |u| > 0  ⇒  ((u · v)^ = û) ∧ ((u · v)¯ = ū · v)

which holds for each pair of lists u, v.

3. Check property 3.

· For the left transition of the process Buffer_{n1+n2} inequality (7.39) holds. As (7.43) we take the set consisting of two CTs:

  – the left transition of the process (7.46), and

  – the sequence consisting of a pair of transitions, the first of which is the middle (internal) transition of the process (7.46), and the second is the left transition of the process (7.46).

Inequality (7.41) in this case has the form

      (k < n1 + n2) ∧ (q = q2 · q1) ∧ (k = k2 + k1) ∧ Inv
         ≤  (k1 < n1) ∨ ((k1 > 0) ∧ (k2 < n2) ∧ (k1 − 1 < n1))

This inequality is true, and its proof uses the conjunctive term n1 > 0 (contained in Inv). The inequalities which correspond to diagrams (7.44) for both elements of the set (7.43) follow from (7.47), (7.48) and theorem 36.

· For the right transition of the process Buffer_{n1+n2} inequality (7.39) holds. As (7.43) we take the set consisting of two CTs:

  – the right transition of the process (7.46), and

  – the sequence consisting of a pair of transitions, the first of which is the middle (internal) transition of the process (7.46), and the second is the right transition of the process (7.46).

Inequality (7.41) in this case has the form

      (k > 0) ∧ (q = q2 · q1) ∧ (k = k2 + k1) ∧ Inv
         ≤  (k2 > 0) ∨ ((k1 > 0) ∧ (k2 < n2) ∧ (k2 + 1 > 0))

This inequality is true, and its proof uses the conjunctive term n2 > 0 (contained in Inv). The inequalities corresponding to diagrams (7.44) for both elements of the set (7.43) follow from (7.48), (7.49) and theorem 36.

7.9     Recursive definition of processes

The concept of a recursive definition of processes with message passing is similar to the concept of a RD presented in chapter 5. The concept of a RD is based on the concept of a process expression (PE), which is analogous to the corresponding concept in section 5.1, so we only point out the differences between these concepts.

· In all PEs, operators are used (instead of actions).

· Each process name A has a type t(A) of the form

      t(A) = (t1, . . . , tn)   (n ≥ 0)

  where for each i = 1, . . . , n: ti ∈ Types.

· Each process name A occurs in each PE only together with a list of expressions of corresponding types, i.e. each occurrence of A in each PE P is contained in a subexpression of P of the form A(e1, . . . , en), where

  – for each i = 1, . . . , n: ei ∈ E, and

  – (t(e1), . . . , t(en)) = t(A).


For each PE P the notation fv(P) denotes the set of free variables of P, which consists of all variables from X_P having free occurrences in P. The concepts of a free occurrence and a bound occurrence of a variable in a PE are similar to the analogous concepts in predicate logic. Each free occurrence of a variable x in a PE P becomes bound in the PEs (α?x).P and (x := e).P.

A recursive definition (RD) of processes is a list of formal equations of the form

      A1(x11, . . . , x1k1) = P1
      . . .                                                                (7.50)
      An(xn1, . . . , xnkn) = Pn

where

· A1, . . . , An are process names,

· for each i = 1, . . . , n the list (xi1, . . . , xiki) on the left side of the i-th equation consists of distinct variables,

· P1, . . . , Pn are PEs which satisfy

  – the conditions set out in the definition of a RD in section 5.2, and

  – the following condition: for each i = 1, . . . , n: fv(Pi) = {xi1, . . . , xiki}.

We shall assume that for each process name A there is a unique RD in which A occurs. RD (7.50) can be interpreted as a functional program consisting of functional definitions: for each i = 1, . . . , n the variables xi1, . . ., xiki can be regarded as formal parameters of the function Ai(xi1, . . . , xiki).

A reader is requested to define a correspondence which associates with each PE of the form A(x1, . . . , xn), where

· A is a process name, and

· x1, . . . , xn is a list of distinct variables of appropriate types,

the process

      [[A(x1, . . . , xn)]]                                                (7.51)

Also a reader is invited to investigate the following problems.


1. Construction of minimal processes which are equivalent (with respect to ∼, ≈, . . .) to processes of the form (7.51).

2. Recognition of equivalence of processes of the form (7.51).

3. Finding necessary and sufficient conditions for uniqueness of the list of processes defined by a RD.
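The remark that a RD can be read as a functional program can be illustrated by a small sketch (Python; the RD `A(k) = In?.A(k+1) + (k > 0 → Out!.A(k−1))` is a hypothetical example, not taken from the text): a process name applied to actual parameters yields the list of enabled moves, each paired with the successor call.

```python
# Hypothetical RD:  A(k) = In?.A(k+1) + (k > 0 -> Out!.A(k-1))
def A(k):
    moves = [("In?", ("A", k + 1))]
    if k > 0:
        moves.append(("Out!", ("A", k - 1)))
    return moves

def trace(call, actions):
    """Follow a sequence of visible actions from an initial call.
    (Single-name RD for brevity; a real interpreter would dispatch on the name.)"""
    name, arg = call
    for a in actions:
        succ = dict(A(arg))        # enabled moves of the current state
        if a not in succ:
            return None            # the action is not enabled here
        name, arg = succ[a]
    return (name, arg)

print(trace(("A", 0), ["In?", "In?", "Out!"]))  # ('A', 1)
```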



Chapter 8   Examples of processes with message passing
8.1     Separation of sets

8.1.1   The problem of separation of sets

Let U, V be a pair of finite disjoint sets, where each element x ∈ U ∪ V is assigned a number weight(x), called the weight of this element. It is required to transform this pair into a pair of sets U′, V′ such that

· |U′| = |U|, |V′| = |V| (for each finite set M the notation |M| denotes the number of elements of M),

· for each u ∈ U′ and each v ∈ V′ the following inequality holds:

      weight(u) ≤ weight(v)

Below we call the sets U and V the left set and the right set, respectively.

8.1.2   Distributed algorithm of separation of sets

The problem of separation of sets can be solved by executing several sessions of exchange of elements between these sets. Each session consists of the following actions:

· find an element mx with maximum weight in the left set,

· find an element mn with minimum weight in the right set,

· transfer

  – mx from the left set to the right set, and

  – mn from the right set to the left set.

To implement this idea, a distributed algorithm is proposed, defined as a process of the form

      (Small | Large) \ {α, β}                                             (8.1)

where

· the process Small executes operations associated with the left set, and

· the process Large executes operations associated with the right set.

A flow graph corresponding to this process has the form
[flow graph: the ports of Small are connected to the ports of Large by two internal channels]

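The effect of the exchange sessions can be sketched centrally (Python; this is a sequential illustration of the sessions, not the message-passing processes themselves; the function name `separate` is ours):

```python
def separate(U, V, weight=lambda x: x):
    """Repeat exchange sessions until every element of the left set
    weighs no more than every element of the right set
    (assumes both sets are nonempty)."""
    S, L = set(U), set(V)
    while True:
        mx, mn = max(S, key=weight), min(L, key=weight)
        if weight(mx) <= weight(mn):
            return S, L
        S.remove(mx); L.add(mx)     # transfer the maximum to the right set
        L.remove(mn); S.add(mn)     # transfer the minimum to the left set

U2, V2 = separate({5, 1, 9}, {2, 8, 3})
print(sorted(U2), sorted(V2))  # [1, 2, 3] [5, 8, 9]
```

Note that the set sizes are preserved, matching the required conditions |U′| = |U| and |V′| = |V|.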
Below we shall use the following notations:

· for each subset W ⊆ U ∪ V the notations max(W) and min(W) denote an element of W with maximum and minimum weight, respectively,

· for

  – any subsets W1, W2 ⊆ U ∪ V, and

  – any u ∈ U ∪ V

  the notations W1 ≤ u, u ≤ W1, W1 ≤ W2 are shorthand for the expressions

      ∀x ∈ W1:  weight(x) ≤ weight(u)
      ∀x ∈ W1:  weight(u) ≤ weight(x)
      ∀x ∈ W1, ∀y ∈ W2:  weight(x) ≤ weight(y)

  respectively. A similar meaning have the expressions

      max(W), min(W), W ≤ u, u ≤ W, W1 ≤ W2

  in which the symbols W, Wi and u denote variables whose values are subsets of the set U ∪ V and elements of the set U ∪ V, respectively.

8.1.3   The processes Small and Large

The processes Small and Large

· can be defined in terms of flowcharts,

· then transformed into processes with COs, and

· then reduced.

We will not describe these flowcharts and their transformations and reductions; we present only the reduced COs.

The reduced process Small has the following form, with the initial condition Init = (S = U):

[diagram (8.2), states A, B, C: Small sends mx and performs S := S \ {mx}; it then receives an element x; if x < mx, it performs S := S ∪ {x}, mx := max(S) and continues; if x ≥ mx, it performs U′ := S and terminates]

The reduced process Large has the following form, with the initial condition Init = (L = V):

[diagram (8.3), states a, b, c: Large receives an element y; if y > mn, it performs L := L ∪ {y}, sends mn with L := L \ {mn}, mn := min(L), and continues; if y ≤ mn, it performs V′ := L and terminates]

8.1.4   An analysis of the algorithm of separation of sets

The process described by the expression (8.1) is obtained by

· performing the operations of parallel composition and restriction on the processes (8.2) and (8.3), in accordance with definition (8.1), and

· reducing the resulting process.

The reduced process has the following form:

[diagram (8.4): its states are pairs Aa, Bb, Cc, Ac, Ca of states of Small and Large; the main loop Aa → Bb → Aa exchanges the elements mx and mn (y := mx, S := S \ {mx}, L := L ∪ {y}, mn := min(L); x := mn, L := L \ {mn}, S := S ∪ {mn}, mx := max(S)); the states Ac, Cc and Ca are entered under conditions built from x ≥ mx, x < mx, y ≤ mn, y > mn, with the terminal assignments U′ := S and V′ := L]

This diagram shows that there are states of the process (8.4) (namely, Ac and Ca) with the following properties:

· there are no transitions starting at these states (such states are said to be terminal),

· but falling into these states is not a normal completion of the process.

A situation in which a process falls into such a state is called a deadlock. The process (8.1) can indeed fall into such a state, for example in the case when U = {3} and V = {1, 2}, where the weight of each number coincides with its value. Nevertheless, the process (8.1) has the following properties:

· this process always terminates (i.e., falls into one of the terminal states Ac, Cc or Ca),


· after a termination of the process, the following statements hold:

      S ∪ L = U ∪ V,   |S| = |U|,   |L| = |V|,   S ≤ L                     (8.5)

To justify these properties, we shall use the function

      f(S, L)  =def  | {(s, l) ∈ S × L | weight(s) > weight(l)} |

Furthermore, for an analysis of the sequence of assignment operators performed during the transition from Aa to Bb, it is convenient to represent this sequence schematically as a sequence of the following actions:

1. transfer of the element y := max(S) from S to L

2. transfer of the element x := min(L) from L to S

3. mx := max(S)

4. mn := min(L)

It is not so difficult to prove the following statements.

1. If at a current time i

   · the process is in the state Aa, and

   · the values S_i, L_i of the variables S and L at this time satisfy the equation f(S_i, L_i) = 0, i.e. the inequality S_i ≤ L_i holds,

   then S_{i+1} = S_i and L_{i+1} = L_i. Furthermore, after an execution of the transition from Aa to Bb the values of the variables x, y, mx and mn will satisfy the statement

      y = x = mx ≤ mn

   and, thus, the next transition will be the transition from Bb to the state Cc, i.e. the process normally completes its work. Herewith

   · the values of the variables U′ and V′ will be equal to S_i and L_i, respectively,

   · and, consequently, the values of the variables U′ and V′ will meet the required conditions

      |U′| = |U|,   |V′| = |V|,   U′ ≤ V′

2. If at a current time i

   · the process is in the state Aa, and

   · the values S_i, L_i of the variables S and L satisfy the inequality f(S_i, L_i) > 0,

   then after an execution of the transition from Aa to Bb (i.e., at the time i + 1) the new values S_{i+1}, L_{i+1} of the variables S and L will satisfy the inequality

      f(S_{i+1}, L_{i+1}) < f(S_i, L_i)                                    (8.6)

   In addition, the variables x, y, mx, mn at the time i + 1 will satisfy

      y = max(S_i),   x = min(L_i),
      mx = max(S_{i+1}),   mn = min(L_{i+1}),                              (8.7)
      x < y,   x ≤ mx,   mn ≤ y

   It follows that if at the time i + 1 the process moves from Bb to one of the terminal states (Ac, Cc or Ca), then this is possible

   (a) either if x = mx,

   (b) or if y = mn.

   In the case (a) the following statements hold:

      S_{i+1} ≤ mx = x ≤ L_i    and    L_{i+1} ⊆ L_i ∪ {y}

   whence, using x < y, we get S_{i+1} ≤ L_{i+1}.

   In the case (b) the following statements hold:

      S_{i+1} ⊆ S_i ∪ {x}    and    S_i ≤ y = mn ≤ L_{i+1}

   whence, using x < y, we again get S_{i+1} ≤ L_{i+1}, so in both cases the required conditions (8.5) hold after the termination.

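The role of f(S, L) as a termination measure can be checked numerically. The sketch below (Python; an illustration, not part of the original text) computes f before and after each exchange session and shows that its values strictly decrease down to 0:

```python
from itertools import product

def f(S, L, w=lambda x: x):
    # number of "inverted" pairs between the two sets
    return sum(1 for s, l in product(S, L) if w(s) > w(l))

S, L = {5, 1, 9}, {2, 8, 3}
vals = []
while f(S, L) > 0:
    vals.append(f(S, L))
    mx, mn = max(S), min(L)
    S = (S - {mx}) | {mn}      # one exchange session
    L = (L - {mn}) | {mx}
vals.append(f(S, L))
print(vals)  # [5, 1, 0] – strictly decreasing, ending in 0
```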

8.2     Calculation of a square

Suppose we have a system "multiplier", which has

· two input ports with the names In1 and In2, and

· one output port with the name Out.

An execution of the multiplier consists in that it

· receives two values at its input ports, and

· gives their product on the output port.

The behavior of the multiplier is described by the process Mul:

[diagram: A —(In1 ? x)→ B —(In2 ? y)→ C —(Out ! (x · y))→ A]

Using this multiplier, we want to build a system "calculator of a square", whose behavior is described by the process SquareSpec:

[diagram: a two-state cycle with the transitions  In ? z  and  Out ! (z²)]
The desired system we shall build as a composition of

· the auxiliary system "duplicator", having

  – an input port In, and

  – output ports Out1 and Out2,

  whose behavior is described by the process Dup:

[diagram: a —(In ? z)→ b —(Out1 ! z)→ c —(Out2 ! z)→ a]

  i.e. the duplicator copies its input to two outputs, and


· the multiplier, which receives on its input ports those values that the duplicator gives.

The process Square, corresponding to such a composition, is determined as follows:

      Square  =def  ( Dup[pass1/Out1, pass2/Out2] | Mul[pass1/In1, pass2/In2] ) \ {pass1, pass2}

A flow graph of the process Square has the form

[flow graph: In → Dup —pass1, pass2→ Mul → Out]

However, the process Square does not meet the specification SquareSpec. This fact is easy to detect by a construction of a graph representation of Square, which, by the definition of the operations of parallel composition, restriction and renaming, is the following:


[diagram: nine states aA, aB, aC, bA, bB, bC, cA, cB, cC, with edges labeled In ? z, x := z, y := z and Out ! (x · y)]

After a reduction of this process we obtain the diagram

[diagram (8.8): states A1, A2, A3; transitions In ? z with x := z (A1 → A2), In ? z with y := z (A2 → A3), and transitions labeled Out ! (x · y) leading back from A3 and A2]

which shows that

· the process Square can execute two input actions in a row (i.e. without an execution of an output action between them), and

· the process SquareSpec can not do so.
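This distinguishing behavior can be confirmed by a small trace check (Python; the two step functions are our abstractions of the diagrams, counting only how many inputs are pending):

```python
def can_run(step, s0, actions):
    """Check whether the word `actions` is a trace of the process from s0."""
    states = {s0}
    for a in actions:
        states = {s2 for s in states for (b, s2) in step(s) if b == a}
        if not states:
            return False
    return True

def square_step(k):
    # abstraction of diagram (8.8): k packets inside the Dup|Mul pipeline, at most 2
    moves = []
    if k < 2:
        moves.append(("In", k + 1))
    if k > 0:
        moves.append(("Out", k - 1))
    return moves

def spec_step(k):
    # SquareSpec: inputs and outputs strictly alternate
    return [("In", 1)] if k == 0 else [("Out", 0)]

print(can_run(square_step, 0, ["In", "In"]),   # True  – two inputs in a row
      can_run(spec_step, 0, ["In", "In"]))     # False – the spec must output first
```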



The process Square meets another specification:

      SquareSpec′  =def  ( Buf[pass/Out] | SquareSpec[pass/In] ) \ {pass}

where Buf is a buffer which can store one message, and whose behavior is represented by the diagram

[diagram: a two-state cycle with the transitions  In ? x  and  Out ! x]

A flow graph of SquareSpec′ has the form

[flow graph: In → Buf —pass→ SquareSpec → Out]

A reduced process SquareSpec′ has the form

[diagram (8.9): states a1, a2, a3; transitions In ? x with z := x (a1 → a2), In ? x (a2 → a3), and transitions labeled Out ! (z²) (with z := x) leading back from a3 and a2]

The statement that Square meets the specification SquareSpec′ can be formalized as

      (8.8) ≈ (8.9)                                                        (8.10)

We justify (8.10) with use of theorem 34. At first, we rename the variables of the process (8.9), i.e. instead of (8.9) we shall consider the process

[diagram (8.11): the same diagram with x, z renamed to u, v: transitions In ? u with v := u (a1 → a2), In ? u (a2 → a3), and transitions labeled Out ! (v²) (with v := u) leading back]


To prove (8.8) ≈ (8.11) with use of theorem 34, we define the function

      µ : {A1, A2, A3} × {a1, a2, a3} → Fm

as follows:

· µ(Ai, aj) =def ⊥, if i ≠ j

· µ(A1, a1) =def ⊤

· µ(A2, a2) =def (x = y = z = u)

· µ(A3, a3) =def (x = y = v) ∧ (z = u)

Detailed verification of correctness of the corresponding diagrams is left to a reader as a simple exercise.

8.3     Petri nets

One of the mathematical models used to describe behavior of distributed systems is a Petri net. A Petri net is a directed graph whose set of nodes is divided into two classes: places (V) and transitions (T). Each edge connects a place with a transition. Each transition t ∈ T is associated with two sets of places:

· in(t) =def {v ∈ V | there is an edge from v to t}

· out(t) =def {v ∈ V | there is an edge from t to v}

A marking of a Petri net is a mapping of the form

      µ : V → {0, 1, 2, . . .}

An execution of a Petri net is a transformation of its marking which occurs as a result of an execution of transitions. A marking µ0 at time 0 is assumed to be given. If a net has a marking µi at a time i, then any transition t ∈ T which satisfies the condition

      ∀v ∈ in(t):  µi(v) > 0

can be executed at time i.


If a transition t was executed at time i, then the marking µ_{i+1} at time i + 1 is defined as follows:

      ∀v ∈ in(t):                     µ_{i+1}(v) := µi(v) − 1
      ∀v ∈ out(t):                    µ_{i+1}(v) := µi(v) + 1
      ∀v ∈ V \ (in(t) ∪ out(t)):      µ_{i+1}(v) := µi(v)

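The firing rule above can be sketched directly (Python; a minimal illustration, representing a transition as its pair of place sets):

```python
def enabled(marking, t):
    """A transition is enabled iff every input place holds a token."""
    ins, outs = t
    return all(marking[v] > 0 for v in ins)

def fire(marking, t):
    """Apply the firing rule: decrement input places, increment output places."""
    ins, outs = t
    m = dict(marking)
    for v in ins:
        m[v] -= 1
    for v in outs:
        m[v] += 1
    return m

# tiny net: place p feeds transition t1, which marks place q
m0 = {"p": 2, "q": 0}
t1 = (["p"], ["q"])
m1 = fire(m0, t1) if enabled(m0, t1) else m0
print(m1)  # {'p': 1, 'q': 1}
```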
Each Petri net N can be associated with a process P_N which simulates the behavior of this net. Components of the process P_N are as follows.

· X_{P_N} =def {x_v | v ∈ V}

· I_{P_N} =def ⋀_{v ∈ V} (x_v = µ0(v))

· S_{P_N} =def {s0}

· Let t be a transition of the net N, and let the sets in(t) and out(t) have the form {u1, . . . , un} and {v1, . . . , vm} respectively. Then the process P_N has a transition from s0 to s0 with the label

      (x_{u1} > 0) ∧ . . . ∧ (x_{un} > 0) ?
            x_{u1} := x_{u1} − 1, . . . , x_{un} := x_{un} − 1,
            x_{v1} := x_{v1} + 1, . . . , x_{vm} := x_{vm} + 1


Chapter 9   Communication protocols
In this chapter we consider an application of the theory of processes to the problem of modeling and verification of communication protocols (below simply called protocols).

9.1     The concept of a protocol

A protocol is a distributed system which consists of several interacting components, including

· components that perform formation, sending, receiving and processing of messages (such components are called agents, and messages sent from one agent to another are called frames),

· components of an environment through which frames are forwarded (such an environment is usually called a communication channel).

There are several layers of protocols. In this chapter we consider data link layer protocols.

9.2     Frames

9.2.1   The concept of a frame

Each frame is a string of bits.


Passing through an environment, a frame may be distorted or lost (a distortion of a frame is an inversion of some bits of this frame). Therefore, each frame must contain

· not only the information which one agent wishes to transfer to another agent, but also

· means allowing a recipient of the frame to find out whether this frame was distorted during a transmission.

Below we consider some methods of detection of distortions in frames. These methods are divided into two classes:

1. methods which allow

   · not only to detect distortions of frames,

   · but also to determine the distorted bits of a frame and fix them (discussed in section 9.2.2), and

2. methods to determine only the fact of a distortion of a frame (discussed in section 9.2.3).

9.2.2   Methods for correcting distortions in frames

Methods of detection of distortions in frames which allow

· not only to detect the fact of a distortion, but also

· to determine the indexes of the distorted bits

are used in situations when the probability that a transmitted frame will be distorted during its transmission is high. For example, such a situation occurs in wireless communications. If a maximum number of bits of a frame which can be inverted is known, then methods of error correction coding can be used for a recognition of the inverted bits and their correction. These methods constitute one of the directions of coding theory. In this section we consider an encoding method with correction of errors in the simplest case, when no more than one bit of a frame can be inverted. This method is called a Hamming code correcting one error (there are Hamming codes fixing an arbitrary number of errors). The idea of this method is that bits of a frame are divided into two classes:

231


· information bits (which contain the information which a sender of the frame wants to convey to the recipient), and

· control bits (whose values are computed from the values of the information bits).

Let

· f be a frame of the form (b1, . . . , bn),

· k be the number of information bits in f,

· r be the number of control bits in f (i.e. n = k + r).

Since a sender can place his information in k information bits, we can assume that the information which a sender sends to a recipient in a frame f is a string M which consists of k bits. The frame which is derived from the string M by addition of control bits we denote by φ(M).

For each frame f denote by U(f) the set of all frames obtained from f by an inversion of not more than one bit. Obviously, the number of elements of U(f) is equal to n + 1. The assumption that during a transmission of the frame φ(M) not more than one bit of this frame can be inverted can be reformulated as follows: the recipient can receive instead of φ(M) any frame from the set U(φ(M)).

It is easy to see that the following conditions are equivalent:

1. for each M ∈ {0,1}^k a recipient can uniquely reconstruct M having an arbitrary frame from U(φ(M)),

2. the family

      {U(φ(M)) | M ∈ {0,1}^k}                                              (9.1)

   of subsets of {0,1}^n consists of disjoint subsets.

Since

· the family (9.1) consists of 2^k subsets, and

· each of these subsets consists of n + 1 elements,

a necessary condition of disjointness of the subsets from (9.1) is the inequality

      (n + 1) · 2^k ≤ 2^n


which can be rewritten as

      (k + r + 1) ≤ 2^r                                                    (9.2)

It is easy to prove that for every fixed k > 0 the inequality (9.2) (where r is assumed to be positive) is equivalent to the inequality

      r0 ≤ r

where r0 depends on k and is a lower bound on the number of control bits. It is easy to calculate r0 when k has the form

      k = 2^m − m − 1, where m ≥ 1                                         (9.3)

in this case (9.2) can be rewritten as the inequality

      2^m − m ≤ 2^r − r                                                    (9.4)

which is equivalent to the inequality m ≤ r (because the function 2^x − x is monotone for x ≥ 1). Thus, in this case the lower bound r0 of the number of control bits is m.

Below we present a coding method with correction of one error in which the number r of control bits is equal to the minimum possible value m. If k has the form (9.3) and r = r0 = m, then n = 2^m − 1, i.e. indices of bits of the frame f = (b1, . . . , bn) can be identified with m-tuples from {0,1}^m: each index i ∈ {1, . . . , n} is identified with the binary record of i (which is complemented by zeros on the left, if necessary). By definition, the indices of the control bits are the m-tuples of the form

      (0 . . . 0 1 0 . . . 0)   (1 is at the j-th position)                (9.5)

where j = 1, . . . , m. For each j = 1, . . . , m the value of the control bit which has the index (9.5) is equal to the sum modulo 2 of the values of the information bits whose indices contain 1 at the j-th position. When a receiver gets a frame (b1, . . . , bn), he checks the m equalities

      ⊕ { b_i | i = (i1 . . . im), ij = 1 }  =  0     (j = 1, . . . , m)   (9.6)

(the sum is modulo 2). The following cases are possible.

· The frame is not distorted. In this case all the equalities (9.6) are correct.


· The control bit which has the index (9.5) is distorted. In this case only the j-th equality in (9.6) is incorrect.

· An information bit is distorted. Let the index of this bit contain 1 at the positions j1, . . ., jl. In this case among the equalities (9.6) only the equalities with the numbers j1, . . ., jl are incorrect.

Thus, in all cases we can

· detect whether the frame is distorted, and

· calculate the index of the distorted bit, if the frame is distorted.
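The scheme above can be sketched for m = 3 (so k = 4, n = 7, i.e. the classical Hamming(7,4) layout; Python, not part of the original text). Control bits sit at positions 1, 2, 4; the syndrome computed from the parity checks (9.6) is exactly the index of the distorted bit:

```python
def hamming_encode(data_bits):
    """Hamming(7,4): positions 1..7; positions 1, 2, 4 are control bits."""
    n = 7
    bits = [0] * (n + 1)                                    # 1-based
    data_pos = [i for i in range(1, n + 1) if i & (i - 1)]  # 3, 5, 6, 7
    for p, b in zip(data_pos, data_bits):
        bits[p] = b
    for j in range(3):                                      # control bit at 2**j
        parity = 0
        for i in data_pos:
            if i >> j & 1:                                  # index has 1 at position j
                parity ^= bits[i]
        bits[1 << j] = parity
    return bits[1:]

def hamming_correct(frame):
    """Restore the (at most one) flipped bit of a 7-bit frame."""
    bits = [0] + list(frame)
    syndrome = 0
    for j in range(3):
        parity = 0
        for i in range(1, 8):
            if i >> j & 1:
                parity ^= bits[i]
        syndrome |= parity << j
    if syndrome:
        bits[syndrome] ^= 1             # the syndrome is the distorted index
    return bits[1:]

code = hamming_encode([1, 0, 1, 1])
bad = code[:]; bad[4] ^= 1              # distort one bit
print(hamming_correct(bad) == code)  # True
```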

9.2.3   Methods for detection of distortions in frames

Another class of methods for detection of distortions in frames is related to detection of only the fact of a distortion. The problem of calculation of the indexes of distorted bits has high complexity. Therefore, if the probability of a distortion of transmitted frames is low (which is the case when a copper or fibre communication channel is used), then a resending of distorted frames is more effective: if a receiver detects that a received frame is distorted, then he requests the sender to send the frame again.

For a comparison of the complexity of the problems of

· correcting of distortions, and

· detection of distortions (without correcting)

consider the following example. Suppose that no more than one bit of a frame can be distorted. If the size of the frame is 1000, then

· for a correction of such a distortion 10 control bits are needed, but

· for a detection of such a distortion 1 control bit is enough, whose value is set equal to the parity of the number of units in the remaining bits of the frame.

One method of coding for detection of distortions is the following:

· a frame is divided into k parts, and

· to each part one control bit is assigned, whose value is set equal to the parity of the number of units in the remaining bits of this part.

234


If the bits of the frame are distorted equiprobably and independently, then for each such part of the frame the probability that

· this part is distorted, and

· nevertheless, its parity is correct (i.e., we consider it as undistorted)

is less than 1/2; therefore the probability of an undetected distortion is less than 2^(−k).

Another method of coding for detection of distortions is a polynomial code (which is called Cyclic Redundancy Check, CRC). This method is based on a consideration of bit strings as polynomials over the field Z2 = {0, 1}: a bit string of the form

      (b_k, b_{k−1}, . . . , b1, b0)

is regarded as the polynomial

      b_k · x^k + b_{k−1} · x^(k−1) + . . . + b1 · x + b0

Suppose we need to transfer frames of size m + 1. Each such frame is considered as a polynomial M(x) of degree m. To encode these frames, one selects

· a number r < m, and

· a polynomial G(x) of degree r which has the form x^r + . . . + 1.

The polynomial G(x) is called a generator polynomial. For each frame M(x) its code T(x) is calculated as follows. The polynomial x^r · M(x) is divided by G(x) with a remainder:

      x^r · M(x) = G(x) · Q(x) + R(x)

where R(x) is the remainder (the degree of R(x) is less than r). The code of the frame M(x) is the polynomial

      T(x)  =def  G(x) · Q(x)

It is easy to see that the size of T(x) is larger than the size of M(x) by r.

Detection of a distortion in a transmission of the frame T(x) is produced by dividing the received frame T′(x) by G(x): we consider that the frame T(x) was transmitted without a distortion (i.e. the received frame T′(x) coincides with T(x)) if T′(x) is divisible by G(x) (i.e. T′(x) has the form G(x) · Q′(x), where Q′(x) is a polynomial).

If the frame T(x) was transmitted without a distortion, then the original frame M(x) can be recovered by a representation of T(x) as a sum

      T(x) = x^r · M(x) + R(x)

where R(x) consists of all monomials of T(x) of degree < r.

If the frame T(x) was transmitted with distortions, then the relation between T(x) and T′(x) can be represented as

      T′(x) = T(x) + E(x)

where E(x) is a polynomial which

· is called a polynomial of distortions, and

· corresponds to the string of bits each component of which is equal to

  – 1, if the corresponding bit of the frame T(x) has been distorted, and

  – 0, otherwise.

Thus,

· if T(x) has been distorted in a single bit, then E(x) = x^i,

· if T(x) has been distorted in two bits, then E(x) = x^i + x^j,

· etc.

From the definitions of T′(x) and E(x) it follows that T′(x) is divisible by G(x) if and only if E(x) is divisible by G(x). Therefore, a distortion corresponding to the polynomial E(x) can be detected if and only if E(x) is not divisible by G(x).

Let us consider the question of what kinds of distortions can be detected using this method.

1. A single-bit distortion can always be detected, because the polynomial E(x) = x^i is not divisible by G(x).

2. A double-bit distortion can not be detected in the case when the corresponding polynomial

      E(x) = x^i + x^j = x^j · (x^(i−j) + 1)     (i > j)
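The encoding and the divisibility check can be sketched as bitwise polynomial division over GF(2) (Python; an illustration with a small hypothetical generator, not the standard's polynomial):

```python
def crc_encode(msg_bits, gen_bits):
    """Append the remainder of x^r * M(x) divided by G(x) over GF(2)."""
    r = len(gen_bits) - 1
    reg = msg_bits + [0] * r                   # x^r * M(x)
    for i in range(len(msg_bits)):
        if reg[i]:                             # cancel the leading term with G(x)
            for j, g in enumerate(gen_bits):
                reg[i + j] ^= g
    return msg_bits + reg[-r:]                 # T(x) = x^r * M(x) + R(x)

def crc_check(frame_bits, gen_bits):
    """A received frame is accepted iff it is divisible by G(x)."""
    reg = list(frame_bits)
    r = len(gen_bits) - 1
    for i in range(len(frame_bits) - r):
        if reg[i]:
            for j, g in enumerate(gen_bits):
                reg[i + j] ^= g
    return not any(reg)

G = [1, 0, 1, 1]                               # G(x) = x^3 + x + 1 (toy generator)
frame = crc_encode([1, 1, 0, 1], G)
bad = [b ^ (i == 2) for i, b in enumerate(frame)]
print(crc_check(frame, G), crc_check(bad, G))
# True False – a single-bit distortion is always detected
```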



is divisible by G(x):

      ∃Q(x) :  x^j · (x^(i−j) + 1) = G(x) · Q(x)                           (9.7)

By the uniqueness of factorization of polynomials over a field, the statement (9.7) implies the statement

      ∃Q1(x) :  x^(i−j) + 1 = G(x) · Q1(x)                                 (9.8)

The following fact holds: if

      G(x) = x^15 + x^14 + 1                                               (9.9)

then for each k = 1, . . . , 32768 the polynomial x^k + 1 is not divisible by G(x). Therefore the generator polynomial (9.9) can detect any double-bit distortion in frames of size up to 32768.

3. Consider the polynomial of distortions E(x) as a product of the form

      E(x) = x^j · (x^(k−1) + . . . + 1)                                   (9.10)

The number k in (9.10) is called the size of a packet of errors; k is equal to the size of the substring of the string of distortions (which corresponds to E(x)) which is bounded from left and right by the bits "1". Let E1(x) be the second factor in (9.10). By the uniqueness of factorization of polynomials over a field, we get that

· a distortion corresponding to the polynomial (9.10) is not detected if and only if

· E1(x) is divisible by G(x).

Consider separately the following cases.

(a) k ≤ r, i.e. k − 1 < r. In this case E1(x) is not divisible by G(x), because the degree of E1(x) is less than the degree of G(x). Thus, in this case we can detect any such distortion.


(b) k = r + 1. In this case the polynomial E1(x) is divisible by G(x) if and only if E1(x) = G(x). The probability of such a coincidence is equal to 2^(−(r−1)); thus, the probability that such a distortion will not be detected is equal to 2^(−(r−1)).

(c) k > r + 1. It can be proved that in this case the probability that such a distortion will not be detected is less than 2^(−r).

4. If

· an odd number of bits is distorted, i.e. E(x) has an odd number of monomials, and

· G(x) = (x + 1) · G1(x),

then such a distortion can be detected, because if for some polynomial Q(x) we had E(x) = G(x) · Q(x), then, in particular,

      E(1) = G(1) · Q(1)                                                   (9.11)

which is wrong, since

· the left side of (9.11) is equal to 1, and

· the right side of (9.11) is equal to 0.

In the standard IEEE 802 the following generator polynomial G(x) is used:

      G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

This polynomial can detect a distortion in which

· the size of a packet of errors is no more than 32, or

· an odd number of bits is distorted.


9.3     Protocols of one-way transmission

9.3.1   A simplest protocol of one-way transmission

The protocol consists of the following agents:

· the sender,

· the timer, which is used by the sender,

· the receiver,

· the channel.

The purpose of the protocol is the delivery of frames from the sender to the receiver via the channel. The channel is assumed to be unreliable: it can distort and lose transmitted frames. The protocol works as follows.

1. The sender receives a message (which is called a packet) from an agent which is not included in the protocol. This agent is called a sender's network agent (SNA). The purpose of the sender is a cyclic execution of the following sequence of actions:

· get a packet from the SNA,

· build a frame, which is obtained by applying an encoding function φ to the packet,

· send this frame to the channel and switch on the timer,

· if the signal timeout comes from the timer, which means that

  – the waiting time for a confirmation of the sent frame has ended, and

  – apparently this frame has not reached the receiver,

  then send the frame again,

· if the confirmation signal comes from the receiver, then

  – this means that the current frame was successfully accepted by the receiver, and

  – the sender can get the next packet from the SNA,

239


build a frame from this packet, etc.

A flowchart representing this behavior has the following form:

[flowchart: start → sA —(In ? x)→ sB —(C ! φ(x))→ sC —(start !)→ sD; from sD, timeout ? leads back to sB (resend), and C ? leads back to sA]

Operators belonging to this flowchart have the following meanings:

· In ? x is a receiving of a packet from the SNA, and a recording of this packet to the variable x,

· C ! φ(x) is a sending of the frame φ(x) to the channel,

· start ! is a switching-on of the timer,

· timeout ? is a receiving of the signal "timeout" from the timer,

· C ? is a receiving of a confirmation signal from the channel.

The process represented by this flowchart is denoted by Sender and has the following form:

[diagram: the process Sender, with states A, B, C, D and transitions In ? x (A → B), C ! φ(x) (B → C), start ! (C → D), timeout ? (D → B), C ? (D → A)]

The behavior of the timer is represented by the process Timer having the form

[diagram (9.12): a one-state process with two looping transitions:  start ? ; t := 1   and   (t = 1) → timeout ! ; t := 0]

An initial condition of Timer is t = 0. In this model we do not detail the magnitude of the interval between

· a switching-on of the timer (the action start ?), and

· a switching-off of the timer (the action timeout !).

2. The channel at each time can contain no more than one frame or signal. It can execute the following actions:

· receiving a frame from the sender, and

  – sending this frame to the receiver, or

  – sending a distorted frame to the receiver, or

  – loss of the frame,

· receiving a confirmation signal from the receiver, and

  – sending this signal to the sender, or

  – loss of the signal.


The behavior of the channel is described by the following process:


(9.13)  (Diagram of the process Channel: from the idle state, S ? y leads to a state holding the frame y, from which there are the transitions R ! y (deliver the frame), R ! ⊥ (deliver a distorted frame) and τ (lose the frame); symmetrically, R ? leads to a state holding a confirmation signal, from which there are the transitions S ! (deliver the confirmation) and τ (lose it).)



In this process we use the following abstraction: the symbol ⊥ means a "distorted frame". We do not specify exactly how frames can be distorted in the channel. Each frame which has been received by the channel
· either is transferred from the channel to the receiver,
· or is transformed into the abstract value ⊥, and this value is transferred from the channel to the receiver,
· or disappears, which is expressed by the transition of the process (9.13) with the label τ.
3. The receiver executes the following actions:
· receiving a frame from the channel
· checking whether the frame is distorted
· if the frame is not distorted, then
­ extracting a packet from the frame
­ sending this packet to a process called the receiver's network agent (RNA) (this process is not included in the protocol)
­ sending a confirmation signal to the sender through the channel
· if the frame is distorted, then the receiver ignores it (assuming that the sender will get tired of waiting for a confirmation signal and will send the frame again)

A flowchart representing the above behavior has the following form:
(Flowchart of the receiver: start leads to the state a; a -C ? f-> b; from b, if f = ⊥, the control returns to a; if f ≠ ⊥, the operators Out ! info(f) and C ! are executed, and the control returns to a.)

Operators belonging to this flowchart have the following meanings.
· C ? f is the receiving of a frame from the channel and the recording of it to the variable f
· (f = ⊥) is the checking whether the frame f is distorted
· Out ! info(f) is the sending of the packet info(f), extracted from the frame f, to the RNA
· C ! is the sending of the confirmation signal.
The process represented by this flowchart is denoted by Receiver and has the following form:
(Diagram of the process Receiver: a -C ? f-> b; from b, the transition guarded by (f = ⊥) returns to a; the transition guarded by (f ≠ ⊥) executes Out ! info(f) and C ! and returns to a.)

The process Protocol, corresponding to the whole system, is defined as a parallel composition (with restriction and renaming) of the above processes:




    Protocol def= ( Sender [S/C] | Timer | Channel | Receiver [R/C] ) \ {S, R, start, timeout}        (9.14)

A flow graph of the process Protocol has the form

    (9.15)  (Flow graph: Sender has the external port In, and Receiver the external port Out; Sender is connected to Channel by the port S and to Timer by the ports start and timeout; Receiver is connected to Channel by the port R.)

In order to be able to analyze the correctness of this protocol, it is necessary to determine a specification which it must meet. If we want to specify only the properties of external actions executed by the protocol (i.e., actions of the form In ? v and Out ! v), then the specification can be as follows: the behavior of this protocol coincides with the behavior of a buffer of size 1, i.e. the process Protocol is observationally equivalent to the process Buf, which has the form
    (9.16)  (The process Buf: two states 1 and 2, with the transitions 1 -In ? x-> 2 and 2 -Out ! x-> 1.)

After a reduction of the graph representation of the process P rotocol we get the diagram


(a cycle with the actions In ? x, y := φ(x), f := y, Out ! info(f))

which is observationally equivalent to the diagram

    (9.17)  (a cycle with the actions In ? x, Out ! info(φ(x)))

We assume that the function info of extracting packets from frames is inverse to φ, i.e. for each packet x

    info(φ(x)) = x,

therefore the diagram (9.17) can be redrawn as follows:
    (9.18)  (a cycle with the actions In ? x, Out ! x)


The process (9.18) can be reduced, resulting in the process
    (9.19)  (the process with the transitions 1 -In ? x-> 2, a loop 2 -Out ! x-> 2, and 2 -Out ! x-> 1)

After comparing the processes (9.19) and (9.16) we conclude that these processes cannot be equivalent in any acceptable sense. For example,
· the process (9.16), after receiving the packet x, can only
­ send this packet to the RNA, and
­ move to the state of waiting for another packet,
· while the process (9.19), after receiving the packet x, can send this packet to the RNA several times.
Such a retransmission can occur, for example, in the following execution of the protocol.
· The first frame sent by the sender reaches the receiver successfully.
· The receiver
­ sends the packet extracted from this frame to the RNA, and
­ sends a confirmation to the sender through the channel.
· This confirmation is lost in the channel.
· The sender does not receive a confirmation and sends the frame again, and this frame again arrives successfully.
· The receiver perceives this frame as a new one. It
­ sends the packet extracted from this frame to the RNA, and
­ sends the confirmation signal to the sender through the channel.
· This confirmation is again lost in the channel.
· etc.
This situation may arise because this protocol has no mechanism through which the receiver can distinguish:


· whether a received frame is a new one, or
· whether this frame was transmitted before.
In section 9.3.2 we consider a protocol which has such a mechanism. For this protocol it is possible to prove formally its compliance with the specification (9.16).
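The failure scenario above (a lost confirmation leads to a retransmission which the receiver cannot recognize) can be replayed as a small sketch; the function name and the loss counter are illustrative, not from the text:

```python
def run_naive_protocol(packet, lost_confirmations):
    """Deliveries produced for one packet when the first
    `lost_confirmations` confirmation signals are lost in the channel."""
    delivered = []
    remaining = lost_confirmations
    while True:
        # The frame arrives; the receiver has no way to spot a repeat,
        # so it hands the packet to the RNA every time.
        delivered.append(packet)
        if remaining == 0:
            return delivered          # a confirmation finally gets through
        remaining -= 1                # confirmation lost -> timeout -> resend
```

With two lost confirmations the RNA receives the same packet three times, which is exactly why the reduced process (9.19) can repeat the action Out ! x.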

9.3.2 One-way alternating bit protocol

The protocol described in this section is called the one-way alternating bit protocol or, in abbreviated notation, ABP. The protocol ABP is designed to solve the same problem as the protocol in section 9.3.1: delivery of frames from the sender to the receiver via an unreliable channel (which can distort and lose transmitted frames). The protocol ABP
· consists of the same agents as the protocol in section 9.3.1 (namely, the sender, the timer, the receiver, and the channel), and
· has the same flow graph.
A mechanism by which the receiver can distinguish new frames from retransmitted ones is implemented in this protocol as follows: among the variables of the sender and the receiver there are boolean variables s and r, respectively, whose values have the following meanings:
· the value of s is equal to the parity of the index of the current frame, which the sender is trying to send, and
· the value of r is equal to the parity of the index of the frame which is expected by the receiver.
At the initial moment the values of s and r are equal to 0 (the first frame has index 0). As in the protocol of section 9.3.1, the abstract value ⊥ is used in this protocol; this value denotes a distorted frame. The protocol works as follows.
1. The sender gets a packet from the SNA, and
· records this packet to the variable x,
· builds the frame, which is obtained by applying the encoding function φ to the pair (x, s),


· sends the frame to the channel,
· starts the timer, and then
· waits for a confirmation of the frame which has been sent.
If
· the sender gets the signal timeout from the timer, and
· it has not yet received an acknowledgment from the receiver,
then the sender retransmits this frame. If the sender receives from the channel an undistorted frame which contains a boolean value, then the sender analyzes this value: if it coincides with the current value of s, then the sender
· inverts the value of the variable s (using the function inv(x) = 1 - x), and
· starts a new cycle of its work.
Otherwise, it sends the frame again. The flowchart representing this behavior has the following form:
(Flowchart of the sender of ABP: start, with s = 0; the cycle A -In ? x-> B -C ! φ(x, s)-> C -start !-> D; from D, timeout ? returns the control to C, and C ? z leads to the checks z = ⊥ and bit(z) = s: if z ≠ ⊥ and bit(z) = s, then inv(s) is executed and the control returns to A, otherwise it returns to C.)


The process which corresponds to this flowchart is denoted by Sender and has the following form:

    Init = (s = 0)

(Diagram: the cycle A -In ? x-> B -C ! φ(x, s)-> C -start !-> D; D -timeout ?-> C; D -C ? z-> E; from E, the transition guarded by (z ≠ ⊥ ∧ bit(z) = s) executes inv(s) and leads to A, and the transition guarded by (z = ⊥ ∨ bit(z) ≠ s) leads back to C.)
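The sender's reaction to a frame received from the channel can be sketched as follows (a minimal illustration; `DISTORTED` and the function names are assumptions, and `bit` here simply decodes an undistorted confirmation frame to the boolean value it carries):

```python
DISTORTED = None   # stand-in for the abstract distorted value

def bit(z):
    # Illustrative decoding: a confirmation frame is represented
    # directly by the boolean value it carries.
    return z

def sender_on_frame(z, s):
    """The sender's decision after C ? z, given the current bit s.

    Returns ('next', inv(s)) when the confirmation matches s
    (the next packet may be fetched from the SNA), and
    ('resend', s) otherwise (a distorted frame or a stale bit).
    """
    if z is not DISTORTED and bit(z) == s:
        return ('next', 1 - s)       # inv(s)
    return ('resend', s)
```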

2. The channel can contain at most one frame. It can execute the following actions:
· receive a frame from the sender, and then
­ either send this frame to the receiver,
­ or send a distorted frame to the receiver,
­ or lose the frame
· receive a confirmation frame from the receiver, and then
­ either send this frame to the sender,
­ or send a distorted frame to the sender,
­ or lose the frame.
The behavior of the channel is represented by the following process:








    (9.20)  (Diagram of the channel: like (9.13), but confirmations can now be distorted as well: from the state holding a frame y received by S ? y there are the transitions R ! y, R ! ⊥ and τ; from the state holding a confirmation u received by R ? u there are the transitions S ! u, S ! ⊥ and τ.)

3. The receiver, upon receiving a frame from the channel,
· checks whether the frame is distorted,
· and if the frame is not distorted, then the receiver extracts from the frame a packet and a boolean value, using the functions info and bit with the following properties:

    info(φ(x, b)) = x,    bit(φ(x, b)) = b

The receiver checks whether the boolean value extracted from the frame coincides with the expected value contained in the variable r, and
(a) if the check gave a positive result, then the receiver
· transmits the packet extracted from this frame to the RNA,
· inverts the value of r, and
· sends the confirmation frame to the sender through the channel;
(b) if the check gave a negative result, then the receiver sends a confirmation frame with an incorrect boolean value (which will cause the sender to send its current frame again).
If the frame is distorted, then the receiver ignores it (assuming that the sender will send this frame again upon receiving the signal timeout from the timer). The flowchart representing the above behavior has the following form:


(Flowchart of the receiver of ABP: start, with r = 0; the cycle a -C ? f-> b; from b: if f = ⊥, the control returns to a; if f ≠ ⊥ and bit(f) = r, then Out ! info(f), inv(r) and C ! (1 - r) are executed; if f ≠ ⊥ and bit(f) ≠ r, then only C ! (1 - r) is executed; in both cases the control returns to a.)

The process represented by this flowchart is denoted by Receiver and has the following form:

    Init = (r = 0)

(Diagram: a -C ? f-> b; from b, the transition guarded by (f = ⊥) returns to a; the transition guarded by (f ≠ ⊥ ∧ bit(f) ≠ r) executes C ! (1 - r) and returns to a; the transition guarded by (f ≠ ⊥ ∧ bit(f) = r) executes Out ! info(f), inv(r), C ! (1 - r) and returns to a.)
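The receiver's branching can be sketched as follows (illustrative names; a frame is assumed to decode to the pair (info(f), bit(f)), and `DISTORTED` stands for ⊥):

```python
DISTORTED = None

def receiver_on_frame(f, r, deliver):
    """The receiver's reaction to a frame f, with expected bit r.

    Returns (new_r, confirmation), where confirmation is the value
    sent by C ! (1 - r), or None when the frame is ignored.
    """
    if f is DISTORTED:
        return r, None               # ignore; the sender will time out
    packet, b = f                    # info(f) and bit(f)
    if b == r:                       # the expected new frame
        deliver(packet)              # Out ! info(f)
        r = 1 - r                    # inv(r)
    return r, 1 - r                  # C ! (1 - r)
```

A retransmitted frame (bit(f) ≠ r) is not delivered again, but still triggers a confirmation, which is what lets ABP meet the buffer specification.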

The process Protocol, which corresponds to the whole protocol ABP, is defined in the same manner as in section 9.3.1, by the expression (9.14). The flow graph of this process has the form (9.15). The specification of the protocol ABP also has the same form as in section 9.3.1, i.e. it is defined as the process (9.16). The reduced process Protocol has the form


    (9.21)  (Diagram of the reduced process Protocol: two states i and j, with transitions labelled by In ? x, Out ! x, inv(s) and inv(r) and guarded by the conditions s = r and s ≠ r.)

The statement (9.16) ≈ (9.21) can be proven, for example, with the use of theorem 34, defining the function µ of the form

    µ : {1, 2} × {i, j} → Fm

as follows:

    µ(1, i) def= (s = r)        µ(1, j) def= (s ≠ r)
    µ(2, i) def= (s ≠ r)        µ(2, j) def= (s = r)

9.4 Two-way alternating bit protocol

The above protocols implement data transmission (i.e. transmission of frames with packets from an NA) in one direction only. In most situations, data transmission must be implemented in both directions, i.e. each agent which communicates with a channel must act as a sender and as a receiver simultaneously. Protocols which implement data transmission in both directions are called duplex protocols, or protocols of two-way transmission. In protocols of two-way transmission, the sending of confirmations can be combined with the sending of data frames (i.e. frames which contain packets from an NA): if an agent B has successfully received a data frame f from an agent A, then B may send a confirmation of receipt of the frame f not separately, but as part of its own data frame. In this section we consider the simplest correct protocol of two-way transmission.
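The piggybacking idea can be sketched with a tuple standing in for the encoding function φ (the helper names are illustrative, not from the text):

```python
def make_data_frame(packet, s, last_good_r):
    """A duplex frame: the confirmation travels together with the data."""
    return (packet, s, last_good_r)

def split_frame(frame):
    """The accessors info, seq and ack on the tuple encoding."""
    packet, s, r = frame
    return packet, s, r
```

A single frame thus both carries B's next packet and confirms A's last frame, so no separate confirmation frame is needed.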


This protocol
· is a generalization of ABP (considered in section 9.3.2), and
· is denoted by ABP-2.
ABP-2 also involves two agents, but the behavior of each agent is described by the same process, which combines the processes Sender and Receiver from ABP. Each frame f which is sent by either of these agents contains
· a packet x, and
· two boolean values s and r, where
­ s has the same meaning as in ABP: it is the boolean value associated with the packet x, and
­ r is the boolean value associated with the packet in the last received undistorted frame.
To build a frame, the encoding function φ is used. To extract the packet and the boolean values s and r from a frame, the functions info, seq and ack are used. These functions have the following properties:

    info(φ(x, s, r)) = x
    seq(φ(x, s, r)) = s
    ack(φ(x, s, r)) = r

Also, the agents use the inverting function inv to invert values of the boolean variables. Each sending/receiving agent is associated with a timer. The behavior of a timer is described by the process Timer, which is represented by the diagram (9.12). A flow graph of the protocol is as follows:


    (9.22)  (Flow graph: Agent1 has the external ports In1 and Out1, and Agent2 the external ports In2 and Out2; Agent1 is connected to Channel by the port C1 and to Timer1 by the ports start1 and timeout1; Agent2 is connected to Channel by the port C2 and to Timer2 by the ports start2 and timeout2.)

The process describing the behavior of the sending/receiving agents is represented by the following flowchart:
(Flowchart of an ABP-2 agent: start, with s, r = 0; the agent executes In ? x, then C ! φ(x, s, 1 - r) and start !, and waits; on timeout ? it sends the frame again; on C ? f it checks f = ⊥, then, for an undistorted frame, seq(f) = r (if so, Out ! info(f) and inv(r) are executed) and ack(f) = s (if so, inv(s) is executed and a new cycle begins, otherwise the frame is sent again).)

This flowchart shows that an agent sends a frame with its next packet only after receiving a confirmation of the receipt of its current packet. The flowchart describing the behavior of a specific agent (i.e. Agent1 or Agent2) is obtained from this flowchart by assigning the corresponding index (1 or 2) to the variables and names included in it.
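An agent's reaction to an undistorted incoming frame can be sketched as follows (frames are modelled as (info, seq, ack) triples; the function name is illustrative):

```python
def agent_on_frame(f, s, r, deliver):
    """One ABP-2 agent's reaction to a frame f = (info, seq, ack).

    s is the bit of the agent's current outgoing packet and r the bit
    of the expected incoming one; returns the updated pair (s, r).
    """
    packet, seq, ack = f
    if seq == r:             # the expected data frame: pass it to the NA
        deliver(packet)      # Out ! info(f)
        r = 1 - r            # inv(r)
    if ack == s:             # our current frame is confirmed
        s = 1 - s            # inv(s): the next packet may be fetched
    return s, r
```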


The behavior of the channel is described by the process

    (9.20) [C1/S, C2/R]

The reader is requested
· to define the process Spec, which is a specification of this protocol, and
· to prove that this protocol meets the specification Spec.

9.5 Two-way sliding window protocols

ABP-2 is practically acceptable only when the duration of a frame transmission through the channel is negligible. If the duration of a frame transmission through the channel is large, then it is better to use pipelined transmission, in which the sender may send several frames in a row without waiting for their confirmations. Below we consider two protocols of two-way pipelined transmission, called sliding window protocols (SWPs). These protocols are extensions of ABP-2. They
· also involve two sending/receiving agents, and the behavior of each of these agents is described by the same process, combining the functions of a sender and a receiver
· the analog of the boolean value associated with each frame is an element of the set

    Zn = {0, . . . , n - 1}

where n is a fixed integer of the form 2^k. The element of the set Zn associated with a frame is called the number of this frame.

9.5.1 The sliding window protocol using go back n

The first SWP is called the SWP using go back n. The process which describes the behavior of a sending/receiving agent of this protocol has the array x[n] among its variables. Components of this array may contain packets which have been sent but not yet confirmed. The set of components of the array x which contain such packets at the current moment is called a window. Three variables of the process are related to the window:

· b (the lower bound of the window),
· s (the upper bound of the window), and
· w (the number of packets in the window).
Values of the variables b, s and w belong to the set Zn. At the initial moment
· the window is empty, and
· the values of the variables b, s and w are equal to 0.
Adding a new packet to the window is performed by the execution of the following actions:
· the packet is written to the component x[s], and it is assumed that the number s is associated with this packet
· the upper bound s of the window is increased by 1 modulo n, i.e. the new value of s is assumed to be
­ s + 1, if s < n - 1, and
­ 0, if s = n - 1, and
· w (the number of packets in the window) is increased by 1.
Removing a packet from the window is performed by the execution of the following operations:
· b (the lower bound of the window) is increased by 1 modulo n, and
· w (the number of packets in the window) is decreased by 1,
i.e. the packet whose number is equal to the lower bound of the window is removed.
To simplify the understanding of the operations with a window, one can use the following figurative analogy:
· the set of components of the array x can be regarded as a ring (i.e. after the component x[n - 1] comes the component x[0]),
· at each moment the window is a connected subset of this ring,
· during the execution of the process this window moves along this ring in the same direction.
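The two window operations can be sketched as follows (a minimal illustration with n = 8; the function names are not from the text):

```python
N = 8  # n = 2**k possible frame numbers

def add_to_window(x, packet, s, w):
    """Write a packet at the upper bound s; returns the new (s, w)."""
    x[s] = packet                 # the number s is associated with it
    return (s + 1) % N, w + 1     # s := s +n 1,  w := w + 1

def remove_from_window(b, w):
    """Drop the packet at the lower bound b; returns the new (b, w)."""
    return (b + 1) % N, w - 1     # b := b +n 1,  w := w - 1
```

The modulo operation makes the array behave as the ring described above: after x[7] the window wraps around to x[0].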


If the window size reaches its maximum value (n - 1), then the agent does not accept new packets from its NA until the window size is reduced. The ability to receive a new packet is determined by the boolean variable enable:
· if its value is 1, then the agent can receive new packets from its NA, and
· if it is 0, then the agent cannot receive new packets.
If the agent receives an acknowledgment of a packet whose number is equal to the lower bound of the window, then this packet is removed from the window. Each component x[i] of the array x is associated with a timer, which determines the duration of waiting for a confirmation from the other agent of the receipt of the packet contained in the component x[i]. The combination of these timers is considered as one process Timers, which has an array t[n] of boolean variables. This process is defined as follows:

    Init = (t = (0, . . . , 0))        (9.23)

(a one-state process with the looping transitions: start ? i [t[i] := 1], stop ? i [t[i] := 0], and, for each j, (t[j] = 1) timeout ! j [t[j] := 0])

The right arrow in this diagram is an abbreviation for a set of n transitions with the labels

    (t[0] = 1) timeout ! 0 [t[0] := 0],  . . . ,  (t[n - 1] = 1) timeout ! (n - 1) [t[n - 1] := 0]

Note that this process has the operator stop ? i, an execution of which prematurely terminates the corresponding timer. The protocol has the following features.
· If a sending/receiving agent has received the signal timeout from any timer, then the agent sends again all packets from its window.
· If an agent has received a confirmation of a packet, then all previous packets in the window are also considered confirmed (even if their confirmations were not received).
Each frame f which is sent by either of the sending/receiving agents of this protocol contains


· a packet x,
· a number s associated with the packet x (by definition, s is also associated with the frame f),
· a number r, which is the number associated with the last received undistorted frame.
To build a frame, the encoding function φ is used. To extract the components from frames, the functions info, seq and ack are used. These functions have the following properties:

    info(φ(x, s, r)) = x
    seq(φ(x, s, r)) = s
    ack(φ(x, s, r)) = r

The description of the process representing the behavior of an agent of the protocol is given below in a diagram form, which can easily be transformed to a flowchart. In this description we use the following notations.
· The symbols +n and -n denote addition and subtraction modulo n.
· The symbol r denotes a variable with values in Zn. The value of r is equal to the number of the expected frame. The agent sends to its NA a packet extracted from a frame f whose number seq(f) coincides with the value of the variable r. If a frame f is such that seq(f) ≠ r, then
­ the packet info(f) in this frame is ignored, and
­ only the component ack(f) is taken into account.
· The notation send is an abbreviation of the following group of operators:

    send def= { C ! φ(x[s], s, r -n 1);  start ! s;  s := s +n 1 }

· The notation between(a, b, c) is an abbreviation of the formula

    (a ≤ b < c) ∨ (c < a ≤ b) ∨ (b < c < a)        (9.24)


· The expression (w < n - 1) in the operator enable := (w < n - 1) has the value
­ 1, if the inequality w < n - 1 holds, and
­ 0, otherwise.
The process representing the behavior of a sending/receiving agent of this protocol is the following:
(Flowchart of a go-back-n agent: start, with enable = 1 and w, b, s, r = 0; the main loop chooses between three events:
· if enable = 1: In ? x[s]; send; w := w + 1;
· on timeout ? i: s := b, and then send is executed w times in a row (i := 1; while i ≤ w: send; i := i + 1);
· on C ? f, if f ≠ ⊥: if seq(f) = r, then Out ! info(f) and r := r +n 1; then, while between(b, ack(f), s): stop ! b; b := b +n 1; w := w - 1.
After each branch the operator enable := (w < n - 1) is executed.)

The reader is requested
· to define a process "channel" for this protocol (the channel contains an ordered sequence of frames, which it may distort and lose),
· to define a specification Spec of this protocol, and
· to prove that the protocol meets the specification Spec.
In conclusion, we note that this protocol is inefficient if the number of distortions during frame transmission is large.
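The circular predicate between(a, b, c) used in this protocol admits the following sketch (the disjunction is a standard reading of the abbreviation for a window that may wrap past 0; treat it as a reconstruction):

```python
def between(a, b, c):
    """Does b lie in the circular interval [a, c) of frame numbers?

    The three disjuncts cover the non-wrapping case and the two
    ways in which the interval [a, c) can wrap past the number 0.
    """
    return (a <= b < c) or (c < a <= b) or (b < c < a)
```

For example, with n = 8 a window with lower bound 6 and upper bound 2 contains the numbers 6, 7, 0 and 1.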


9.5.2 The sliding window protocol using selective repeat

The second SWP differs from the previous one in the following: an agent of this protocol has two windows.
1. The first window has the same function as the window of the first SWP (this window is called a sending window). The maximum size of the sending window is m = n/2, where n has the same status as described in section 9.5.1 (in particular, frame numbers are elements of Zn).
2. The second window (called a receiving window) is designed to accommodate packets received from the other agent which cannot yet be transferred to the NA, because some packets with smaller numbers have not been received yet. The size of the receiving window is m = n/2.
Each frame f which is sent by a sending/receiving agent of this protocol has 4 components:
1. k is the type of the frame; this component can have one of the following three values:
· data (a data frame)
· ack (a frame containing only a confirmation)
· nak (a frame containing a request for retransmission; "nak" is an abbreviation of "negative acknowledgment")
2. x is a packet
3. s is a number associated with the frame
4. r is the number associated with the last received undistorted packet.
If the type of a frame is ack or nak, then the second and third components of this frame are fictitious. To build a frame, the encoding function φ is used. To extract the components from frames, the functions kind, info, seq and ack are used. These functions have the following properties:

    kind(φ(k, x, s, r)) = k
    info(φ(k, x, s, r)) = x
    seq(φ(k, x, s, r)) = s
    ack(φ(k, x, s, r)) = r


The process describing the behavior of a sending/receiving agent has the following variables.
1. The arrays x[m] and y[m], designed to accommodate the sending window and the receiving window, respectively.
2. The variables enable, b, s, w, which have
· the same sets of values, and
· the same meaning
as they have in the previous protocol.
3. The variables r, u, whose values
· belong to Zn, and
· are equal to the lower and upper bounds, respectively, of the receiving window.
If there is a packet in the receiving window whose number is equal to the lower bound of the receiving window (i.e. r), then the agent
· transmits this packet to its NA, and
· increases by 1 (modulo n) the values of r and u.
4. The boolean array arrived[m], whose components have the following meaning: arrived[i] = 1 if and only if the i-th component of the receiving window contains a packet which has not yet been transmitted to the NA.
5. The boolean variable no_nak, which is used for the following purpose. If the agent receives
· a distorted frame, or
· a frame which has a number different from the lower bound of the receiving window (i.e. r),
then it sends to its colleague a request for retransmission of the frame whose number is r. This request is called a Negative Acknowledgement (NAK).
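The receiver-side variables listed above can be gathered into a small sketch (illustrative names; n = 8 and m = n/2 = 4):

```python
from dataclasses import dataclass, field

N, M = 8, 4   # frame numbers modulo n; receiving window of m = n/2 slots

@dataclass
class ReceivingWindow:
    r: int = 0                     # lower bound of the receiving window
    u: int = M                     # upper bound (initially r + m)
    y: list = field(default_factory=lambda: [None] * M)
    arrived: list = field(default_factory=lambda: [0] * M)
    no_nak: int = 1                # no NAK has been sent yet for the frame r

    def buffer(self, seq, packet):
        """Store an in-window packet that has not arrived before."""
        if self.arrived[seq % M] == 0:
            self.arrived[seq % M] = 1
            self.y[seq % M] = packet
```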


The boolean variable no_nak is used to avoid multiple requests for the retransmission of the same frame: this variable is equal to 1 if a NAK for the frame with the number r has not yet been sent.
When a sending/receiving agent gets an undistorted frame f of the type data, it performs the following actions.
· If the number seq(f) falls into the receiving window, i.e. the following statement holds:

    between(r, seq(f), u)

where the predicate symbol between has the same meaning as in the previous protocol (see (9.24)), then the agent
­ extracts the packet from this frame, and
­ puts the packet into its receiving window.
· If the condition of the previous item is not satisfied (i.e. the number seq(f) of the frame f does not fall into the receiving window), then
­ the packet in this frame is ignored, and
­ only the component ack(f) of this frame is taken into account.
The following timers are used by a sending/receiving agent.
1. An array of m timers, whose behavior is described by the process Timers (see (9.23), with n replaced by m). Each timer from this array is intended to alert the sending/receiving agent that
· the waiting for a confirmation of the packet from the sending window with the corresponding number is over, and
· it is necessary to send the frame with this packet again.
2. An additional timer, whose behavior is described by the following process:

    Init = (t = 0)

(a one-state process with the looping transitions: start_ack_timer ? [t := 1], stop_ack_timer ? [t := 0], and (t = 1) ack_timeout ! [t := 0])


This timer is used for the following purpose. The sending by an agent of confirmations of frames received from the other agent can be done as follows: a confirmation is sent
(a) as a part of a data frame, or
(b) as a special frame of the type ack.
When the agent has to send a confirmation conf, it
· starts the auxiliary timer (i.e. executes the action start_ack_timer !);
· if the agent receives a new packet from its NA before receiving the signal ack_timeout from the auxiliary timer, then the agent
­ builds a frame of the type data, which consists of this packet and the confirmation conf as the component ack, and
­ sends this frame to the colleague;
· if after the expiration of the auxiliary timer (i.e. after receiving the signal ack_timeout) the agent has not yet received a new packet from its NA, then it sends the confirmation conf as a separate frame of the type ack.
The description of the process representing the behavior of an agent of the protocol is given below in a diagram form, which can easily be transformed to a flowchart. In this description we use the following notations and agreements.
1. If i is an integer, then the notation i%m denotes the remainder of the division of i by m.
2. If
· mass is a name of an array of m components (i.e. x, y, arrived, etc.), and
· i is an integer,
then the notation mass[i] denotes the element mass[i%m].


3. A notation of the form send(kind, i) is an abbreviation of the following group of operators:

    send(kind, i) def= {
        C ! φ(kind, x[i], i, r -n 1);
        if (kind = nak) then no_nak := 0;
        stop_ack_timer !;
        if (kind = data) then start ! (i%m)
    }

4. The notation between(a, b, c) has the same meaning as in the previous protocol.
5. If an oval contains several formulas, then we assume that these formulas are connected by the conjunction (∧).
6. In order to save space, some expressions of the form f(e1, . . . , en) are written in two lines (f in the first line, and the list (e1, . . . , en) in the second line).
The process which represents the behavior of an agent of this protocol has the following form:
(Flowchart of a selective-repeat agent: start, with enable = 1, w, b, s, r = 0, u = m = n/2, no_nak = 1, arrived = (0 . . . 0); the main loop chooses between the events:
· if enable = 1: In ? x[s]; send(data, s); s := s +n 1; w := w + 1;
· on timeout ? i: send(data, i);
· on ack_timeout ?: send(ack, 0);
· on C ? f: if f = ⊥, then send(nak, 0) provided no_nak = 1; otherwise the fragment frame processing is executed.
After each branch the operator enable := (w < m) is executed.)


The fragment frame processing in this diagram has the following form.
(Fragment frame processing. For a frame f with kind(f) = data:
· if seq(f) ≠ r and no_nak = 1, then send(nak, 0), otherwise start_ack_timer !;
· if between(r, seq(f), u) and arrived[seq(f)] = 0, then arrived[seq(f)] := 1 and y[seq(f)] := info(f), after which, while arrived[r] = 1: Out ! y[r]; no_nak := 1; arrived[r] := 0; r := r +n 1; u := u +n 1; start_ack_timer !.
For a frame with kind(f) = nak: if between(b, ack(f) +n 1, s), then send(data, ack(f) +n 1).
In all cases, while between(b, ack(f), s): w := w - 1; stop ! (b%m); b := b +n 1.)
The reader is requested
· to define a process "channel" for this protocol (the channel contains an ordered sequence of frames, which it may distort and lose),
· to define a specification Spec of this protocol, and
· to prove that the protocol meets the specification Spec.
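The in-order delivery loop of the fragment frame processing can be sketched as follows (illustrative names; n = 8 frame numbers and a receiving window of m = 4 slots):

```python
N, M = 8, 4   # frame numbers modulo n; window of m = n/2 slots

def drain_window(y, arrived, r, u, out):
    """Deliver buffered packets to the NA while the slot r is filled.

    Mirrors the loop guarded by arrived[r] = 1: each delivery clears
    the slot and advances both window bounds by 1 modulo n.
    """
    while arrived[r % M] == 1:
        out.append(y[r % M])       # Out ! y[r]
        arrived[r % M] = 0
        r = (r + 1) % N            # r := r +n 1
        u = (u + 1) % N            # u := u +n 1
    return r, u
```

Packets that arrived out of order wait in y until the packet with number r arrives; only then can a whole run of consecutive slots be flushed to the NA.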


Chapter 10

History and overview of the current state of the art
The theory of processes combines several research areas, each of which reflects a certain approach to the modeling and analysis of processes. Below we consider the largest of these areas.

10.1 Robin Milner

The largest contribution to the theory of processes was made by the outstanding English mathematician and computer scientist Robin Milner (see [1]-[5]). He was born on 13 January 1934 near Plymouth, in the family of a military officer, and died on 20 March 2010 in Cambridge. From 1995 Robin Milner worked as a professor of computer science at the University of Cambridge (http://www.cam.ac.uk). From January 1996 to October 1999 Milner served as the head of the Computer Laboratory at the University of Cambridge. In 1971-1973, Milner worked in the Artificial Intelligence Laboratory at Stanford University. From 1973 to 1995 he worked at the Computer Science Department of the University of Edinburgh (Scotland), where in 1986 he founded the Laboratory for Foundations of Computer Science. From 1971 until 1980, while he worked at Stanford and then in Edinburgh, he did research in the area of automated reasoning. Together with colleagues he developed a Logic for Computable Functions (LCF), which
· is a generalization of D. Scott's approach to the concept of computability, and
· is designed for the automation of formal reasoning.


This work formed the basis for applied systems developed under the leadership of Milner. In 1975-1990 Milner led the team which developed Standard ML (ML is an abbreviation of "meta-language"). ML is a programming language widely used in industry and education. The semantics of this language has been fully formalized. In the language Standard ML, an algorithm for the inference of polymorphic types was implemented for the first time. The main advantages of Standard ML are
· the possibility of operating with logical proofs, and
· means for the automation of the construction of logical proofs.
Around 1980 Milner developed his main scientific contribution: a Calculus of Communicating Systems (CCS, see section 10.2). CCS is one of the first algebraic calculi for the analysis of parallel processes. In the late 1980s, together with two colleagues, he developed the π-calculus, which is the main model of the behavior of mobile interactive systems. In 1988, Milner was elected a Fellow of the Royal Society. In 1991 he was awarded the A. M. Turing Award, the highest award in the area of computer science. Milner himself defined the main objective of his scientific activity as the building of a theory unifying the concept of a computation with the concept of an interaction.

10.2 A Calculus of Communicating Systems (CCS)

A Calculus of Communicating Systems (CCS) was first published in 1980 in Milner's book [89]. The standard textbook on CCS is [92]. The book [89] presented the results of Milner's research during the period from 1973 to 1980. The main works of Milner on models of parallel processes made in this period are:
· the papers [84], [85], where Milner explores the denotational semantics of parallel processes
· the papers [83], [88], where, in particular, the concept of a flow graph with synchronized ports is introduced
· the papers [86], [87], in which the modern CCS appeared.
The model of interaction of parallel processes which is used in CCS


· is based on the concept of message passing, and
· was taken from the work of Hoare [71].
In the paper [66]
· the strong and observational equivalences are studied, and
· the Hennessy-Milner logic is introduced.
The concepts introduced in CCS were developed in other approaches, the most important of which are
· the π-calculus ([53], [97], [94]), and
· structural operational semantics (SOS); this approach was established by G. Plotkin and published in the paper [104].
More detailed historical information about CCS can be found in [105].

10.3 Theory of communicating sequential processes (CSP)

The theory of Communicating Sequential Processes (CSP) was developed by the English mathematician and computer scientist Tony Hoare (C. A. R. Hoare) (b. 1934). This theory arose in 1976 and was published in [71]. A more complete summary of CSP is contained in the book [73]. CSP investigates a model of communication of parallel processes based on the concept of message passing; the interaction between processes is synchronous. One of the key concepts of CSP is the concept of a guarded command, which is borrowed from Dijkstra's work [52]. In [72] a model of CSP based on the theory of traces is considered. The main disadvantage of this model is the lack of methods for studying the deadlock property. This disadvantage is eliminated in another model of CSP (the failure model), introduced in [46].

10.4 Algebra of communicating processes (ACP)

In 1982, Jan Bergstra and Jan Willem Klop introduced in [37] the term "process algebra" for a first-order theory with equality in which the object variables take values in the set of processes. The approaches they subsequently developed led to the creation of a new direction in the theory of processes: the Algebra of Communicating Processes (ACP), presented in the papers [39], [40], [34]. The main objects of study in ACP are logical theories whose function symbols correspond to operations on processes (a., +, etc.). A comparative analysis of different points of view on the concept of a process algebra can be found in [19].
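For example, the basic axioms of ACP governing alternative composition + and sequential composition · are the following (the axioms A1-A5, standard in the ACP literature, e.g. in [34]; reproduced here only as an illustration of such a theory):

```latex
x + y = y + x                               % A1: commutativity of +
(x + y) + z = x + (y + z)                   % A2: associativity of +
x + x = x                                   % A3: idempotence of +
(x + y) \cdot z = x \cdot z + y \cdot z     % A4: right distributivity
(x \cdot y) \cdot z = x \cdot (y \cdot z)   % A5: associativity of \cdot
```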

10.5 Process Algebras

The term process algebra (PA), introduced by Bergstra and Klop, is now used in two meanings.
· In the first meaning, the term refers to an arbitrary first-order theory with equality whose domain of interpretation is a set of processes.
· In the second meaning, the term denotes a large class of directions, each of which is an algebraic theory describing properties of processes. In this meaning the term is used, for example, in the title of the book "Handbook of Process Algebra" [42].
Below we list the most important directions related to PA in both meanings of this term.
1. Handbook of PA: [42].
2. A summary of the main results in PA: [19].
3. Historical overviews: [27], [28], [15].
4. Different approaches to the concept of an equivalence of processes: [101], [59], [57], [58], [56].
5. PA with partial-order semantics: [44].
6. PA with recursion: [91], [47].
7. SOS models for PA: [21], [38].
8. Algebraic methods of verification: [63].
9. PA with data (actions and processes are parameterized by elements of a data set):



· the PA with data µ-CRL: [62] (there is a software tool for verification based on this approach).
· PSF: [79] (there is a software tool).
· the language of formal specifications LOTOS: [45].
10. PA with time (actions and processes are parameterized by time):
· PA with time based on CCS: [114], [99].
· PA with time based on CSP: [107] (textbook: [109]).
· PA with time based on ACP: [29].
· integration of discrete and dense time, relative and absolute time: [32].
· the theory ATP: [100].
· taking time into account in a bisimulation: [33].
· the software tool UPPAAL: [74].
· the software tool KRONOS: [116] (timed automata).
· µ-CRL with time: [111] (equational reasoning).
11. Probabilistic PA (actions and processes are parameterized by probabilities). These PAs are intended for the study of combined systems, in which verification and performance analysis are carried out simultaneously.
· Pioneering work: [64].
· Probabilistic PA based on CSP: [76].
· Probabilistic PA based on CCS: [69].
· Probabilistic PA based on ACP: [31].
· The PA TIPP (and an associated software tool): [60].
· The PA EMPA: [43].
· The works [21] and [23] consider the simultaneous use of conventional and probabilistic alternative composition of processes.
· The paper [51] considers the concept of an approximation of probabilistic processes.
12. Software related to PAs:



· Concurrency Workbench [98] (for PAs similar to CCS).
· CWB-NC [117].
· CADP [54].
· For CSP: FDR, http://www.fsel.com/

10.6 Mobile Processes

Mobile processes describe the behavior of distributed systems which, during their functioning, may change
· the configuration of connections between their components, and
· the structure of these components.
Main sources:
1. The π-calculus (Milner and others):
· an early source: [53],
· the standard reference: [97],
· textbooks: [94], [8], [10], [9],
· the page on Wikipedia: [14],
· an implementation of the π-calculus on a distributed computer system: [115],
· an application of the π-calculus to modeling and verification of security protocols: [12].
2. The ambient calculus: [48].
3. Action calculus (Milner): [93].
4. Bigraphs: [95], [96].
5. A review of the literature on mobile processes: [11].
6. Software tool: Mobility Workbench [112].
7. The site www.cs.auc.dk/mobility
Other sources:



· R. Milner's lecture "Computing in Space" [6], given on May 1, 2002 at the opening of the building named after B. Gates built for the Computer Laboratory of Cambridge University. The lecture introduces the concepts of an "ambient" and a "bigraph".
· R. Milner's lecture "Turing, Computing and Communication" [7].

10.7 Hybrid Systems

A hybrid system is a system in which
· the values of some variables change discretely, and
· the values of other variables change continuously.
The behavior of such systems is modeled using differential and algebraic equations. The main approaches:
· hybrid process algebras: [41], [49], [113],
· hybrid automata: [22], [77].
For simulation and verification of hybrid systems the software tool HyTech [68] has been developed.

10.8 Other mathematical theories and software tools associated with modeling and analysis of processes

1. The Wikipedia page on the theory of processes: [13].
2. The theory of Petri nets: [103].
3. The theory of partial orders: [80].
4. Temporal logic and model checking: [106], [118].
5. The theory of traces: [108].
6. The calculus of invariants: [24].



7. The metric approach (which studies the concept of a distance between processes): [35], [36].
8. SCCS: [90].
9. CIRCAL: [82].
10. MEIJE: [25].
11. Hennessy's process algebra: [65].
12. Models of processes with infinite sets of states: [119], [120], [121], [122].
13. Synchronously interacting machines: [123], [124], [125].
14. Asynchronously interacting extended machines: [126]-[130].
15. The formal languages SDL [131], Estelle [132], LOTOS [133].
16. The formalism of Statecharts, introduced by D. Harel [134], [135] and used in the design of the language UML.
17. A model of communicating extended timed automata (CETA): [136]-[140].
18. A Calculus of Broadcasting Systems: [17], [18].

10.9 Business Processes

1. BPEL (Business Process Execution Language): [141].
2. BPML (Business Process Modeling Language): [16], [142].
3. The article "Does Better Math Lead to Better Business Processes?": [143].
4. The web page "π-calculus and Business Process Management": [144].
5. The paper "Workflow is just a π-process" by Howard Smith and Peter Fingar, October 2003: [145].
6. The "third wave" in the modeling of business processes: [146], [147].
7. The paper "Composition of executable business process models by combining business rules and process flows": [148].
8. Web Services Choreography Description Language: [149].



Bibliography
[1] Web page of R. Milner: http://www.cl.cam.ac.uk/~rm135/
[2] Web page of R. Milner in the Wikipedia: http://en.wikipedia.org/wiki/Robin_Milner
[3] An interview of R. Milner: http://www.dcs.qmul.ac.uk/~martinb/interviews/milner/
[4] http://www.fairdene.com/picalculus/robinmilner.html
[5] http://www.cs.unibo.it/gorrieri/icalp97/Lauree_milner.html
[6] R. Milner: Computing in Space. May 2002. http://www.fairdene.com/picalculus/milner-computing-in-space.pdf
[7] R. Milner: Turing, Computing and Communication. King's College, October 1997. http://www.fairdene.com/picalculus/milner-infomatics.pdf
[8] The π-calculus, a tutorial. http://www.fairdene.com/picalculus/pi-c-tutorial.pdf
[9] J. Parrow: An introduction to the π-calculus. In [42], pp. 479-543.



[10] D. Sangiorgi and D. Walker: The π-calculus: A Theory of Mobile Processes. Cambridge University Press, ISBN 0521781779. http://us.cambridge.org/titles/catalogue.asp?isbn=0521781779
[11] S. Dal Zilio: Mobile Processes: a Commented Bibliography. http://www.fairdene.com/picalculus/mobile-processes-bibliography.pdf
[12] M. Abadi and A.D. Gordon: A calculus for cryptographic protocols: The Spi calculus. Information and Computation, 143:1-70, 1999.
[13] The site "Process calculus": http://en.wikipedia.org/wiki/Process_calculus
[14] The site about the π-calculus: http://en.wikipedia.org/wiki/Pi-calculus
[15] J.C.M. Baeten: A brief history of process algebra. Report CSR 04-02, Vakgroep Informatica, Technische Universiteit Eindhoven, 2004. http://www.win.tue.nl/fm/0402history.pdf
[16] Business Process Modeling Language: http://en.wikipedia.org/wiki/BPML
[17] http://en.wikipedia.org/wiki/Calculus_of_Broadcasting_Systems
[18] K.V.S. Prasad: A Calculus of Broadcasting Systems. Science of Computer Programming, 25, 1995.
[19] L. Aceto: Some of my favorite results in classic process algebra. Technical Report NS-03-2, BRICS, 2003.
[20] L. Aceto, Z.T. Ésik, W.J. Fokkink, and A. Ingólfsdóttir (editors): Process Algebra: Open Problems and Future Directions. BRICS Notes Series NS-03-3, 2003.
[21] L. Aceto, W.J. Fokkink, and C. Verhoef: Structural operational semantics. In [42], pp. 197-292, 2001.



[22] R. Alur, C. Courcoubetis, N. Halbwachs, T.A. Henzinger, P.-H. Ho, X. Nicollin, A. Olivero, J. Sifakis, and S. Yovine: The algorithmic analysis of hybrid systems. Theoretical Computer Science, 138:3-34, 1995.
[23] S. Andova: Probabilistic Process Algebra. PhD thesis, Technische Universiteit Eindhoven, 2002.
[24] K.R. Apt, N. Francez, and W.P. de Roever: A proof system for communicating sequential processes. TOPLAS, 2:359-385, 1980.
[25] D. Austry and G. Boudol: Algèbre de processus et synchronisation. Theoretical Computer Science, 30:91-131, 1984.
[26] J.C.M. Baeten: The total order assumption. In S. Purushothaman and A. Zwarico, editors, Proceedings First North American Process Algebra Workshop, Workshops in Computing, pages 231-240. Springer Verlag, 1993.
[27] J.C.M. Baeten: Over 30 years of process algebra: Past, present and future. In L. Aceto, Z.T. Ésik, W.J. Fokkink, and A. Ingólfsdóttir, editors, Process Algebra: Open Problems and Future Directions, volume NS-03-3 of BRICS Notes Series, pages 7-12, 2003.
[28] http://www.win.tue.nl/fm/pubbaeten.html
[29] J.C.M. Baeten and J.A. Bergstra: Real time process algebra. Formal Aspects of Computing, 3(2):142-188, 1991.
[30] J.C.M. Baeten, J.A. Bergstra, C.A.R. Hoare, R. Milner, J. Parrow, and R. de Simone: The variety of process algebra. Deliverable ESPRIT Basic Research Action 3006, CONCUR, 1991.
[31] J.C.M. Baeten, J.A. Bergstra, and S.A. Smolka: Axiomatizing probabilistic processes: ACP with generative probabilities. Information and Computation, 121(2):234-255, 1995.
[32] J.C.M. Baeten and C.A. Middelburg: Process Algebra with Timing. EATCS Monographs. Springer Verlag, 2002.
[33] J.C.M. Baeten, C.A. Middelburg, and M.A. Reniers: A new equivalence for processes with timing. Technical Report CSR 02-10, Eindhoven University of Technology, Computer Science Department, 2002.
[34] J.C.M. Baeten and W.P. Weijland: Process Algebra. Number 18 in Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1990.



[35] J.W. de Bakker and J.I. Zucker: Denotational semantics of concurrency. In Proceedings 14th Symposium on Theory of Computing, pages 153-158. ACM, 1982.
[36] J.W. de Bakker and J.I. Zucker: Processes and the denotational semantics of concurrency. Information and Control, 54:70-120, 1982.
[37] J.A. Bergstra and J.W. Klop: Fixed point semantics in process algebra. Technical Report IW 208, Mathematical Centre, Amsterdam, 1982.
[38] J.A. Bergstra and J.W. Klop: The algebra of recursively defined processes and the algebra of regular processes. In J. Paredaens, editor, Proceedings 11th ICALP, number 172 in LNCS, pages 82-95. Springer Verlag, 1984.
[39] J.A. Bergstra and J.W. Klop: Process algebra for synchronous communication. Information and Control, 60(1/3):109-137, 1984.
[40] J.A. Bergstra and J.W. Klop: A convergence theorem in process algebra. In J.W. de Bakker and J.J.M.M. Rutten, editors, Ten Years of Concurrency Semantics, pages 164-195. World Scientific, 1992.
[41] J.A. Bergstra and C.A. Middelburg: Process algebra semantics for hybrid systems. Technical Report CS-R 03/06, Technische Universiteit Eindhoven, Dept. of Comp. Sci., 2003.
[42] J.A. Bergstra, A. Ponse, and S.A. Smolka, editors: Handbook of Process Algebra. North-Holland, Amsterdam, 2001.
[43] M. Bernardo and R. Gorrieri: A tutorial on EMPA: A theory of concurrent processes with non-determinism, priorities, probabilities and time. Theoretical Computer Science, 202:1-54, 1998.
[44] E. Best, R. Devillers, and M. Koutny: A unified model for nets and process algebras. In [42], pp. 945-1045, 2001.
[45] E. Brinksma (editor): Information Processing Systems, Open Systems Interconnection, LOTOS - A Formal Description Technique Based on the Temporal Ordering of Observational Behaviour, volume IS-8807 of International Standard. ISO, Geneva, 1989.
[46] S.D. Brookes, C.A.R. Hoare, and A.W. Roscoe: A theory of communicating sequential processes. Journal of the ACM, 31(3):560-599, 1984.
[47] O. Burkart, D. Caucal, F. Moller, and B. Steffen: Verification on infinite structures. In [42], pp. 545-623, 2001.



[48] L. Cardelli and A.D. Gordon: Mobile ambients. Theoretical Computer Science, 240:177-213, 2000.
[49] P.J.L. Cuijpers and M.A. Reniers: Hybrid process algebra. Technical Report CS-R 03/07, Technische Universiteit Eindhoven, Dept. of Comp. Sci., 2003.
[50] P.R. D'Argenio: Algebras and Automata for Timed and Stochastic Systems. PhD thesis, University of Twente, 1999.
[51] J. Desharnais, V. Gupta, R. Jagadeesan, and P. Panangaden: Metrics for labeled Markov systems. In J.C.M. Baeten and S. Mauw, editors, Proceedings CONCUR'99, number 1664 in Lecture Notes in Computer Science, pages 258-273. Springer Verlag, 1999.
[52] E.W. Dijkstra: Guarded commands, nondeterminacy, and formal derivation of programs. Communications of the ACM, 18(8):453-457, 1975.
[53] U. Engberg and M. Nielsen: A calculus of communicating systems with label passing. Technical Report DAIMI PB-208, Aarhus University, 1986.
[54] J.-C. Fernandez, H. Garavel, A. Kerbrat, R. Mateescu, L. Mounier, and M. Sighireanu: CADP (CAESAR/ALDEBARAN development package): A protocol validation and verification toolbox. In R. Alur and T.A. Henzinger, editors, Proceedings CAV '96, number 1102 in Lecture Notes in Computer Science, pages 437-440. Springer Verlag, 1996.
[55] R.W. Floyd: Assigning meanings to programs. In J.T. Schwartz, editor, Proceedings Symposium in Applied Mathematics, Mathematical Aspects of Computer Science, pages 19-32. AMS, 1967.
[56] R.J. van Glabbeek: The linear time - branching time spectrum II; the semantics of sequential systems with silent moves. In E. Best, editor, Proceedings CONCUR '93, number 715 in Lecture Notes in Computer Science, pages 66-81. Springer Verlag, 1993.
[57] R.J. van Glabbeek: What is branching time semantics and why to use it? In M. Nielsen, editor, The Concurrency Column, pages 190-198. Bulletin of the EATCS 53, 1994.
[58] R.J. van Glabbeek: The linear time - branching time spectrum I. The semantics of concrete, sequential processes. In [42], pp. 3-100, 2001.
[59] R.J. van Glabbeek and W.P. Weijland: Branching time and abstraction in bisimulation semantics. Journal of the ACM, 43:555-600, 1996.



[60] N. Götz, U. Herzog, and M. Rettelbach: Multiprocessor and distributed system design: The integration of functional specification and performance analysis using stochastic process algebras. In L. Donatiello and R. Nelson, editors, Performance Evaluation of Computer and Communication Systems, number 729 in LNCS, pages 121-146. Springer, 1993.
[61] J.F. Groote: Process Algebra and Structured Operational Semantics. PhD thesis, University of Amsterdam, 1991.
[62] J.F. Groote and B. Lisser: Computer assisted manipulation of algebraic process specifications. Technical Report SEN-R0117, CWI, Amsterdam, 2001.
[63] J.F. Groote and M.A. Reniers: Algebraic process verification. In [42], pp. 1151-1208, 2001.
[64] H. Hansson: Time and Probability in Formal Design of Distributed Systems. PhD thesis, University of Uppsala, 1991.
[65] M. Hennessy: Algebraic Theory of Processes. MIT Press, 1988.
[66] M. Hennessy and R. Milner: On observing nondeterminism and concurrency. In J.W. de Bakker and J. van Leeuwen, editors, Proceedings 7th ICALP, number 85 in Lecture Notes in Computer Science, pages 299-309. Springer Verlag, 1980.
[67] M. Hennessy and G.D. Plotkin: Full abstraction for a simple parallel programming language. In J. Becvar, editor, Proceedings MFCS, number 74 in LNCS, pages 108-120. Springer Verlag, 1979.
[68] T.A. Henzinger, P. Ho, and H. Wong-Toi: HyTech: The next generation. In Proceedings RTSS, pages 56-65. IEEE, 1995.
[69] J. Hillston: A Compositional Approach to Performance Modelling. PhD thesis, Cambridge University Press, 1996.
[70] C.A.R. Hoare: An axiomatic basis for computer programming. Communications of the ACM, 12:576-580, 1969.
[71] C.A.R. Hoare: Communicating sequential processes. Communications of the ACM, 21(8):666-677, 1978.
[72] C.A.R. Hoare: A model for communicating sequential processes. In R.M. McKeag and A.M. Macnaghten, editors, On the Construction of Programs, pages 229-254. Cambridge University Press, 1980.



[73] C.A.R. Hoare: Communicating Sequential Processes. Prentice Hall, 1985.
[74] K.G. Larsen, P. Pettersson, and Wang Yi: Uppaal in a nutshell. Journal of Software Tools for Technology Transfer, 1, 1997.
[75] P. Linz: An Introduction to Formal Languages and Automata. Jones and Bartlett, 2001.
[76] G. Lowe: Probabilities and Priorities in Timed CSP. PhD thesis, University of Oxford, 1993.
[77] N. Lynch, R. Segala, F. Vaandrager, and H.B. Weinberg: Hybrid I/O automata. In T. Henzinger, R. Alur, and E. Sontag, editors, Hybrid Systems III, number 1066 in Lecture Notes in Computer Science. Springer Verlag, 1995.
[78] S. MacLane and G. Birkhoff: Algebra. MacMillan, 1967.
[79] S. Mauw: PSF: a Process Specification Formalism. PhD thesis, University of Amsterdam, 1991. http://carol.science.uva.nl/~psf/
[80] A. Mazurkiewicz: Concurrent program schemes and their interpretations. Technical Report DAIMI PB-78, Aarhus University, 1977.
[81] J. McCarthy: A basis for a mathematical theory of computation. In P. Braffort and D. Hirshberg, editors, Computer Programming and Formal Systems, pages 33-70. North-Holland, Amsterdam, 1963.
[82] G.J. Milne: CIRCAL: A calculus for circuit description. Integration, 1:121-160, 1983.
[83] G.J. Milne and R. Milner: Concurrent processes and their syntax. Journal of the ACM, 26(2):302-321, 1979.
[84] R. Milner: An approach to the semantics of parallel programs. In Proceedings Convegno di Informatica Teorica, pages 285-301, Pisa, 1973. Instituto di Elaborazione della Informazione.
[85] R. Milner: Processes: A mathematical model of computing agents. In H.E. Rose and J.C. Shepherdson, editors, Proceedings Logic Colloquium, number 80 in Studies in Logic and the Foundations of Mathematics, pages 157-174. North-Holland, 1975.



[86] R. Milner: Algebras for communicating systems. In Proc. AFCET/SMF joint colloquium in Applied Mathematics, Paris, 1978.
[87] R. Milner: Synthesis of communicating behaviour. In J. Winkowski, editor, Proc. 7th MFCS, number 64 in LNCS, pages 71-83, Zakopane, 1978. Springer Verlag.
[88] R. Milner: Flowgraphs and flow algebras. Journal of the ACM, 26(4):794-818, 1979.
[89] R. Milner: A Calculus of Communicating Systems. Number 92 in Lecture Notes in Computer Science. Springer Verlag, 1980.
[90] R. Milner: Calculi for synchrony and asynchrony. Theoretical Computer Science, 25:267-310, 1983.
[91] R. Milner: A complete inference system for a class of regular behaviours. Journal of Computer and System Sciences, 28:439-466, 1984.
[92] R. Milner: Communication and Concurrency. Prentice Hall, 1989.
[93] R. Milner: Calculi for interaction. Acta Informatica, 33:707-737, 1996.
[94] R. Milner: Communicating and Mobile Systems: the π-Calculus. Cambridge University Press, ISBN 0521658691, 1999. http://www.cup.org/titles/catalogue.asp?isbn=0521658691
[95] R. Milner: Bigraphical reactive systems. In K.G. Larsen and M. Nielsen, editors, Proceedings CONCUR '01, number 2154 in LNCS, pages 16-35. Springer Verlag, 2001.
[96] O. Jensen and R. Milner: Bigraphs and Mobile Processes. Technical Report 570, Computer Laboratory, University of Cambridge, 2003. http://citeseer.ist.psu.edu/jensen03bigraphs.html http://citeseer.ist.psu.edu/668823.html
[97] R. Milner, J. Parrow, and D. Walker: A calculus of mobile processes. Information and Computation, 100:1-77, 1992.
[98] F. Moller and P. Stevens: Edinburgh Concurrency Workbench user manual (version 7.1). http://www.dcs.ed.ac.uk/home/cwb/



[99] F. Moller and C. Tofts: A temporal calculus of communicating systems. In J.C.M. Baeten and J.W. Klop, editors, Proceedings CONCUR'90, number 458 in LNCS, pages 401-415. Springer Verlag, 1990.
[100] X. Nicollin and J. Sifakis: The algebra of timed processes ATP: Theory and application. Information and Computation, 114:131-178, 1994.
[101] D.M.R. Park: Concurrency and automata on infinite sequences. In P. Deussen, editor, Proceedings 5th GI Conference, number 104 in LNCS, pages 167-183. Springer Verlag, 1981.
[102] C.A. Petri: Kommunikation mit Automaten. PhD thesis, Institut für Instrumentelle Mathematik, Bonn, 1962.
[103] C.A. Petri: Introduction to general net theory. In W. Brauer, editor, Proc. Advanced Course on General Net Theory, Processes and Systems, number 84 in LNCS, pages 1-20. Springer Verlag, 1980.
[104] G.D. Plotkin: A structural approach to operational semantics. Technical Report DAIMI FN-19, Aarhus University, 1981.
[105] G.D. Plotkin: The origins of structural operational semantics. Journal of Logic and Algebraic Programming, Special Issue on Structural Operational Semantics, 2004.
[106] A. Pnueli: The temporal logic of programs. In Proceedings 19th Symposium on Foundations of Computer Science, pages 46-57. IEEE, 1977.
[107] G.M. Reed and A.W. Roscoe: A timed model for communicating sequential processes. Theoretical Computer Science, 58:249-261, 1988.
[108] M. Rem: Partially ordered computations, with applications to VLSI design. In J.W. de Bakker and J. van Leeuwen, editors, Foundations of Computer Science IV, volume 159 of Mathematical Centre Tracts, pages 1-44. Mathematical Centre, Amsterdam, 1983.
[109] S.A. Schneider: Concurrent and Real-Time Systems (the CSP Approach). Worldwide Series in Computer Science. Wiley, 2000.
[110] D.S. Scott and C. Strachey: Towards a mathematical semantics for computer languages. In J. Fox, editor, Proceedings Symposium Computers and Automata, pages 19-46. Polytechnic Institute of Brooklyn Press, 1971.
[111] Y.S. Usenko: Linearization in µCRL. PhD thesis, Technische Universiteit Eindhoven, 2002.



[112] B. Victor: A Verification Tool for the Polyadic π-Calculus. Licentiate thesis, Department of Computer Systems, Uppsala University, Sweden, May 1994. Report DoCS 94/50.
[113] T.A.C. Willemse: Semantics and Verification in Process Algebras with Data and Timing. PhD thesis, Technische Universiteit Eindhoven, 2003.
[114] Wang Yi: Real-time behaviour of asynchronous agents. In J.C.M. Baeten and J.W. Klop, editors, Proceedings CONCUR'90, number 458 in LNCS, pages 502-520. Springer Verlag, 1990.
[115] L. Wischik: New directions in implementing the π-calculus. University of Bologna, August 2002. http://www.fairdene.com/picalculus/implementing-pi-c.pdf
[116] S. Yovine: Kronos: A verification tool for real-time systems. Journal of Software Tools for Technology Transfer, 1:123-133, 1997.
[117] D. Zhang, R. Cleaveland, and E. Stark: The integrated CWB-NC/PIOATool for functional verification and performance analysis of concurrent systems. In H. Garavel and J. Hatcliff, editors, Proceedings TACAS'03, number 2619 in Lecture Notes in Computer Science, pages 431-436. Springer Verlag, 2003.
[118] E. Clarke, O. Grumberg, D. Peled: Model Checking. MIT Press, 2001.
[119] J. Esparza: Decidability of model-checking for infinite-state concurrent systems. Acta Informatica, 34:85-107, 1997.
[120] P.A. Abdulla, A. Annichini, S. Bensalem, A. Bouajjani, P. Habermehl, Y. Lakhnech: Verification of Infinite-State Systems by Combining Abstraction and Reachability Analysis. Lecture Notes in Computer Science 1633, pages 146-159. Springer Verlag, 1999.
[121] K.L. McMillan: Verification of Infinite State Systems by Compositional Model Checking. Conference on Correct Hardware Design and Verification Methods, pages 219-234, 1999.
[122] O. Burkart, D. Caucal, F. Moller, and B. Steffen: Verification on infinite structures. In J. Bergstra, A. Ponse and S. Smolka, editors, Handbook of Process Algebra, chapter 9, pages 545-623. Elsevier Science, 2001.



[123] D. Lee and M. Yannakakis: Principles and Methods of Testing Finite State Machines - a Survey. Proceedings of the IEEE, 84(8), pp. 1090-1123, 1996.
[124] G. Holzmann: Design and Validation of Computer Protocols. Prentice Hall, Englewood Cliffs, N.J., first edition, 1991.
[125] G. Holzmann: The SPIN Model Checker - Primer and Reference Manual. Addison-Wesley, 2003.
[126] S. Huang, D. Lee, and M. Staskauskas: Validation-Based Test Sequence Generation for Networks of Extended Finite State Machines. In Proceedings of FORTE/PSTV, October 1996.
[127] J.J. Li and M. Segal: Abstracting Security Specifications in Building Survivable Systems. In Proceedings of the 22nd National Information Systems Security Conference, October 1999, Arlington, Virginia, USA.
[128] Y.-J. Byun, B.A. Sanders, and C.-S. Keum: Design Patterns of Communicating Extended Finite State Machines in SDL. In Proceedings of the 8th Conference on Pattern Languages of Programs (PLoP'2001), September 2001, Monticello, Illinois, USA.
[129] J.J. Li and W.E. Wong: Automatic Test Generation from Communicating Extended Finite State Machine (CEFSM)-Based Models. In Proceedings of the Fifth IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC'02), pp. 181-185, 2002.
[130] S. Chatterjee: EAI Testing Automation Strategy. In Proceedings of the 4th QAI Annual International Software Testing Conference in India, Pune, India, February 2004.
[131] ITU Telecommunication Standardization Sector (ITU-T), Recommendation Z.100, CCITT Specification and Description Language (SDL), Geneva, 1994.
[132] Information Processing Systems - Open Systems Interconnection: Estelle, A Formal Description Technique Based on Extended State Transition Model. ISO International Standard 9074, June 1989.
[133] ISO/IEC: LOTOS - A Formal Description Technique Based on the Temporal Ordering of Observational Behaviour. International Standard 8807, 1988.
[134] D. Harel: A Visual Formalism for Complex Systems. Science of Computer Programming, 8:231-274, 1987.



[135] D. Harel and A. Naamad: The STATEMATE semantics of statecharts. ACM Transactions on Software Engineering and Methodology, 5(4):293-333, October 1996. (Also available as technical report CS95-31 of the Weizmann Institute of Science.)
[136] M. Bozga, J.-C. Fernandez, L. Ghirvu, S. Graf, J.-P. Krimm, and L. Mounier: IF: An intermediate representation and validation environment for timed asynchronous systems. In Proceedings of Symposium on Formal Methods 99, Toulouse, number 1708 in LNCS. Springer Verlag, September 1999.
[137] M. Bozga, S. Graf, and L. Mounier: IF-2.0: A validation environment for component-based real-time systems. In Proceedings of Conference on Computer Aided Verification, CAV'02, Copenhagen, LNCS. Springer Verlag, June 2002.
[138] M. Bozga, D. Lesens, and L. Mounier: Model-Checking Ariane-5 Flight Program. In Proceedings of FMICS'01, Paris, France, pages 211-227. INRIA, 2001.
[139] M. Bozga, S. Graf, and L. Mounier: Automated validation of distributed software using the IF environment. In 2001 IEEE International Symposium on Network Computing and Applications (NCA 2001). IEEE, October 2001.
[140] M. Bozga and Y. Lakhnech: IF-2.0 common language operational semantics. Technical report, 2002. Deliverable of the IST Advance project, available from the authors.
[141] http://www-128.ibm.com/developerworks/library/specification/ws-bpel/
[142] http://www.bpml.org
[143] http://www.wfmc.org/standards/docs/better_maths_better_processes.pdf
[144] http://www.fairdene.com/picalculus/
[145] http://www.bpmi.org/bpmi-library/2B6EA45491.workflow-is-just-a-pi-process.pdf
[146] http://www.fairdene.com/picalculus/bpm3-apx-theory.pdf



[147] http://www.bpm3.com
[148] http://portal.acm.org/citation.cfm?id=1223649
[149] http://www.w3.org/TR/ws-cdl-10/
