[Document retrieved from a search-engine cache. Original document address: http://halgebra.math.msu.su/wiki/lib/exe/fetch.php/staff:yjabr12829.pdf
Date modified: Wed Feb 13 11:26:38 2013. Date indexed: Sun Apr 10 00:07:44 2016.]
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: http://www.elsevier.com/copyright


Author's personal copy
Journal of Algebra 323 (2010) 2270–2289

Contents lists available at ScienceDirect

Journal of Algebra
www.elsevier.com/locate/jalgebra

Automorphisms of Chevalley groups of type F4 over local rings with 1/2
E.I. Bunina
M.V. Lomonosov Moscow State University, Leninskie Gory, Main Building of MSU, Faculty of Mechanics and Mathematics, Department of Higher Algebra, Moscow 119992, Russia

Article info
Article history: Received 6 August 2009. Available online 8 February 2010. Communicated by Efim Zelmanov.
Keywords: Chevalley groups; Local rings; Standard automorphisms

Abstract
In this paper we prove that every automorphism of a Chevalley group of type F4 over a commutative local ring with 1/2 is standard, i.e., it is a composition of ring and inner automorphisms. © 2010 Elsevier Inc. All rights reserved.

Introduction

An associative commutative ring R with a unit is called local if it contains exactly one maximal ideal (which coincides with the radical of R). Equivalently, the set of all non-invertible elements of R is an ideal. We describe automorphisms of Chevalley groups of type F4 over local rings with 1/2. Note that for the root system F4 there exists only one weight lattice, which is simultaneously universal and adjoint; therefore for every ring R there exists a unique Chevalley group of type F4, namely G(R) = G_ad(F4, R). Over local rings universal Chevalley groups coincide with their elementary subgroups, so the Chevalley group G(R) is also an elementary Chevalley group. Theorem 1 for the root systems A_l, D_l, and E_l was obtained by the author in [5]; in [7] all automorphisms of Chevalley groups of these types over local rings with 1/2 were described. Theorem 1 for the root systems B_2 and G_2 is proved in [6].

The work is supported by the Russian President grant MK-2530.2008.1 and by the Russian Foundation for Basic Research grant 08-01-00693. E-mail address: helenbunina@yandex.ru. 0021-8693/$ - see front matter © 2010 Elsevier Inc. All rights reserved. doi:10.1016/j.jalgebra.2009.12.034





Similar results for Chevalley groups over fields were proved by R. Steinberg [25] for the finite case and by J. Humphreys [18] for the infinite case. Many papers were devoted to the description of automorphisms of Chevalley groups over various commutative rings; we can mention here the papers of Borel and Tits [4], Carter and Chen Yu [10], Chen Yu [11-15], and A. Klyachko [21]. E. Abe [1] proved that all automorphisms of Chevalley groups over Noetherian rings with 1/2 are standard. The case A_l was completely studied in the papers of W.C. Waterhouse [27], V.M. Petechuk [22], and Fuan Li and Zunxian Li [20], including rings without 1/2. The paper of I.Z. Golubchik and A.V. Mikhalev [16] covers the case C_l, which is not considered in the present paper. Automorphisms and isomorphisms of general linear groups over arbitrary associative rings were described by E.I. Zelmanov in [28] and by I.Z. Golubchik and A.V. Mikhalev in [17]. We generalize some methods of V.M. Petechuk [23] to prove Theorem 1.

1. Definitions and main theorems

We fix the root system of type F4 (detailed texts about root systems and their properties can be found in the books [19,8]). Let e1, e2, e3, e4 be an orthonormal basis of the space R^4. Then we enumerate the roots of F4 as follows:

α1 = e2 - e3,  α2 = e3 - e4,  α3 = e4,  α4 = (1/2)(e1 - e2 - e3 - e4)

are simple roots;

α5 = α1 + α2 = e2 - e4,
α6 = α2 + α3 = e3,
α7 = α3 + α4 = (1/2)(e1 - e2 - e3 + e4),
α8 = α1 + α2 + α3 = e2,
α9 = α2 + α3 + α4 = (1/2)(e1 - e2 + e3 - e4),
α10 = α2 + 2α3 = e3 + e4,
α11 = α1 + α2 + α3 + α4 = (1/2)(e1 + e2 - e3 - e4),
α12 = α1 + α2 + 2α3 = e2 + e4,
α13 = α2 + 2α3 + α4 = (1/2)(e1 - e2 + e3 + e4),
α14 = α1 + 2α2 + 2α3 = e2 + e3,
α15 = α1 + α2 + 2α3 + α4 = (1/2)(e1 + e2 - e3 + e4),
α16 = α2 + 2α3 + 2α4 = e1 - e2,
α17 = α1 + 2α2 + 2α3 + α4 = (1/2)(e1 + e2 + e3 - e4),
α18 = α1 + α2 + 2α3 + 2α4 = e1 - e3,
α19 = α1 + 2α2 + 3α3 + α4 = (1/2)(e1 + e2 + e3 + e4),
α20 = α1 + 2α2 + 2α3 + 2α4 = e1 - e4,
α21 = α1 + 2α2 + 3α3 + 2α4 = e1,
α22 = α1 + 2α2 + 4α3 + 2α4 = e1 + e4,
α23 = α1 + 3α2 + 4α3 + 2α4 = e1 + e3,
α24 = 2α1 + 3α2 + 4α3 + 2α4 = e1 + e2
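The list of positive roots above can be regenerated programmatically as a check. A minimal sketch, assuming only the four simple roots in the coordinates given in the text: every positive root of F4 is obtained by repeatedly adding simple roots while staying inside the set of vectors of squared length 1 (short roots) or 2 (long roots).

```python
from fractions import Fraction as F

half = F(1, 2)
# Simple roots of F4 in the orthonormal basis e1..e4, as in the text.
simple = [
    (0, 1, -1, 0),                # a1 = e2 - e3
    (0, 0, 1, -1),                # a2 = e3 - e4
    (0, 0, 0, 1),                 # a3 = e4
    (half, -half, -half, -half),  # a4 = (e1 - e2 - e3 - e4)/2
]

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def norm2(v):
    return sum(c * c for c in v)

# Close the simple roots under addition of simple roots, keeping only
# vectors of squared length 1 or 2: for F4 these are exactly the roots.
pos = set(simple)
frontier = set(simple)
while frontier:
    new = set()
    for r in frontier:
        for s in simple:
            t = add(r, s)
            if norm2(t) in (1, 2) and t not in pos:
                new.add(t)
    pos |= new
    frontier = new

print(len(pos))  # 24
```

The closed forms claimed in the text then drop out, e.g. α21 = e1 and α24 = e1 + e2 both appear in the generated set.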

are the other positive roots. Suppose now that we have a semisimple complex Lie algebra L of type F4 with Cartan subalgebra H (detailed information about semisimple Lie algebras can be found in the book [19]). Then in the algebra L we can choose a Chevalley basis {h_i | i = 1, ..., 4; x_α | α ∈ Φ} so that for every two elements of this basis their commutator is an integral linear combination of elements of the same basis. Namely:

(1) [h_i, h_j] = 0;
(2) [h_i, x_α] = ⟨α, α_i⟩ x_α;
(3) if α = n1 α1 + ... + n4 α4, then [x_α, x_{-α}] = n1 h1 + ... + n4 h4;
(4) if α + β ∉ Φ, then [x_α, x_β] = 0;
(5) if α + β ∈ Φ and α, β are roots of the same length, then [x_α, x_β] = c x_{α+β};
(6) if α + β ∈ Φ, α is a long root and β is a short root, then [x_α, x_β] = a x_{α+β} + b x_{α+2β}.
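The integrality of the pairings ⟨α, α_i⟩ in relation (2) can be checked directly. A sketch, assuming the standard pairing ⟨β, α⟩ = 2(β, α)/(α, α): for the four simple roots this computation reproduces the F4 Cartan matrix.

```python
from fractions import Fraction as F

half = F(1, 2)
simple = [
    (0, 1, -1, 0),                # a1 (long)
    (0, 0, 1, -1),                # a2 (long)
    (0, 0, 0, 1),                 # a3 (short)
    (half, -half, -half, -half),  # a4 (short)
]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def pairing(b, a):
    # <b, a> = 2(b, a)/(a, a); always an integer for two roots
    return F(2) * dot(b, a) / dot(a, a)

cartan = [[pairing(bi, aj) for aj in simple] for bi in simple]
assert all(x.denominator == 1 for row in cartan for x in row)
assert cartan == [[2, -1, 0, 0],
                  [-1, 2, -2, 0],
                  [0, -1, 2, -1],
                  [0, 0, -1, 2]]
```

The single entry -2 reflects the double bond between α2 (long) and α3 (short) in the F4 Dynkin diagram.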

Take now an arbitrary local ring with 1/2 and construct an elementary adjoint Chevalley group of type F4 over this ring (see, for example, [24]). For our convenience we briefly recall the construction. In the Chevalley basis of L all operators (x_α)^k / k! for k ∈ N are written as integral (nilpotent) matrices. An integral matrix can also be considered as a matrix over an arbitrary commutative ring with 1. Let R be such a ring. Consider 52 × 52 matrices over R; the matrices (x_α)^k / k! for α ∈ Φ, k ∈ N, lie in M_52(R). Now consider automorphisms of the free module R^n of the form

exp(t x_α) = x_α(t) = 1 + t x_α + t^2 (x_α)^2 / 2 + ... + t^k (x_α)^k / k! + ... .

Since all matrices x_α are nilpotent, this series is finite. The automorphisms x_α(t) are called elementary root elements. The subgroup of Aut(R^n) generated by all x_α(t), α ∈ Φ, t ∈ R, is called an elementary adjoint Chevalley group (notation: E_ad(Φ, R) = E_ad(R)). In an elementary Chevalley group there are the following important elements:

- w_α(t) = x_α(t) x_{-α}(-t^{-1}) x_α(t), α ∈ Φ, t ∈ R*;
- h_α(t) = w_α(t) w_α(1)^{-1}.

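The terminating exponential series can be illustrated in the smallest case. A sketch with exact rational arithmetic, using the rank-one (sl2) analogue of the adjoint action as a stand-in for the 52 × 52 matrices of the text: X below is ad x_α in the basis (x_α, h_α, x_{-α}), so X^3 = 0 and the series stops.

```python
from fractions import Fraction as F

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def exp_nilpotent(X, t):
    """exp(tX) = 1 + tX + t^2 X^2/2 + ...; terminates since X is nilpotent."""
    n = len(X)
    result = [[F(i == j) for j in range(n)] for i in range(n)]
    term = [[F(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, n + 1):          # X^n = 0 for an n x n nilpotent matrix
        term = mat_mul(term, X)
        term = [[F(t) * v / k for v in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# ad x_a on the basis (x_a, h_a, x_{-a}): x_{-a} -> h_a -> -2 x_a.
X = [[0, -2, 0],
     [0,  0, 1],
     [0,  0, 0]]
x = lambda t: exp_nilpotent(X, t)
# One-parameter subgroup property x_a(t) x_a(s) = x_a(t + s):
assert mat_mul(x(2), x(3)) == x(5)
```

The same code works verbatim for any nilpotent integral matrix, in particular for the (x_α)^k / k! matrices of the F4 adjoint representation.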
The action of x_α(t) on the Chevalley basis is described in [9,26]; we write it out below (see Section 3). Over local rings all Chevalley groups of type F4 coincide with the elementary adjoint Chevalley groups E_ad(R), therefore we do not introduce Chevalley groups themselves in this paper. We denote our Chevalley groups by G(R), since they depend only on the ring R. We will work with two types of standard automorphisms of a Chevalley group G(R) and with one unusual, "temporary" type of automorphism.

Ring automorphisms. Let ρ: R → R be an automorphism of the ring R. The mapping (a_{i,j}) ↦ (ρ(a_{i,j})) ((a_{i,j}) is a matrix from G(R)) is an automorphism of the group G(R); it is denoted by the same letter ρ and is called a ring automorphism of the group G(R). Note that for all α ∈ Φ and t ∈ R the element x_α(t) is mapped to x_α(ρ(t)).



Inner automorphisms. Let g ∈ G(R) be an element of the Chevalley group under consideration. Conjugation of the group G(R) by the element g is an automorphism of G(R), which is denoted by i_g and is called an inner automorphism of G(R). These two types of automorphisms are called standard. There are also central and graph automorphisms, which are likewise standard, but in our case (root system F4) they cannot appear. Therefore we say that an automorphism of the group G(R) is standard if it is a composition of ring and inner automorphisms.

Besides that, we need to introduce temporarily one more type of automorphism.

Automorphisms-conjugations. Let V be a representation space of the Chevalley group G(R), and let C ∈ GL(V) be a matrix from the normalizer of G(R):

C G(R) C^{-1} = G(R).

Then the mapping x ↦ C x C^{-1} of G(R) onto itself is an automorphism of the Chevalley group, which is denoted by i_C and is called an automorphism-conjugation of G(R), induced by the element C of the group GL(V). In Section 5 we will prove that in our case all automorphisms-conjugations are inner, but the first step is the proof of the following theorem:

Theorem 1. Let G(R) be a Chevalley group of type F4, where R is a commutative local ring with 1/2. Then every automorphism of G(R) is a composition of a ring automorphism and an automorphism-conjugation.

Sections 2-4 are devoted to the proof of Theorem 1.

2. Changing the initial automorphism to a special isomorphism; images of w_{α_i}

Since in the papers [5] and [6] the root system in their second sections was arbitrary, we can suppose all results of those sections to be proved also for our root system F4. Namely, from the fixed automorphism φ we can construct a mapping φ' = i_{g^{-1}} φ, which is an isomorphism of the group G(R) ⊆ GL_n(R) onto some subgroup of GL_n(R) with the property that its image under factorization of R by J (the radical of R) coincides with a ring automorphism. Besides, from Section 2 of the same papers we know that the image of any involution (a matrix of order 2) under such an isomorphism is conjugate to this involution in the group GL_n(R). These are the main facts that we need. The order of the roots was fixed in the previous section. We enumerate the basis of the (52-dimensional) space V as v_i = x_{α_i}, v_{-i} = x_{-α_i}, V_1 = h_1, ..., V_4 = h_4. Consider the matrices h_1(-1), ..., h_4(-1) in this basis. They have the form

h1(-1) = diag[1, 1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1],

h2(-1) = diag[-1, -1, 1, 1, -1, -1, 1, 1, -1, -1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1, 1, 1, -1, -1, -1, -1, 1, 1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, 1, 1],



h3(-1) = diag[1, 1, 1, 1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],

h4(-1) = diag[1, 1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1].
As we see, h_i(-1)^2 = 1 for all i. We know that every matrix h̄_i = φ'(h_i(-1)) is diagonal in some basis with ±1 on the diagonal, and the numbers of 1's and -1's coincide with those of the matrix h_i(-1). Since all the matrices h̄_i commute, there exists a basis in which every h̄_i has the same form as h_i(-1) has in the initial basis of weight vectors. Suppose that we pass to this basis with the help of a matrix g_1. Clearly g_1 ∈ GL_n(R, J) = {X ∈ GL_n(R) | X - E ∈ M_n(J)}. Consider the mapping φ_1 = i_{g_1^{-1}} φ'. It is also an isomorphism of the group G(R) onto some subgroup of GL_n(R) such that its image under factorization of R by J is the same ring automorphism, and φ_1(h_i(-1)) = h_i(-1) for all i = 1, ..., 4. Instead of φ' we now consider the isomorphism φ_1. Every element w_i = w_{α_i}(1) permutes the elements h_i(-1) by conjugation, therefore its image has a block-monomial form. In particular, this image can be rewritten as a block-diagonal matrix, where the first block is 48 × 48 and the second is 4 × 4. Consider the first basis vector after the last basis change and denote it by e. The Weyl group W acts transitively on the set of roots of the same length, therefore for every root α_i of the same length as α_1 there exists w(α_i) ∈ W such that w(α_i) α_1 = α_i. Similarly, all roots of the second length are also conjugate under the action of W. Let α_k be the first root whose length differs from the length of α_1, and let f be the k-th basis vector after the last basis change. If α_j is a root conjugate to α_k, denote by w(α_j) an element of W such that w(α_j) α_k = α_j. Consider now the basis e_1, ..., e_48, e_49, ..., e_52, where e_1 = e, e_k = f, and for 1 < i ≤ 48 either e_i = φ_1(w(α_i)) e or e_i = φ_1(w(α_i)) f (depending on the length of α_i); for 48 < i ≤ 52 we do not move e_i. Clearly the matrix of this basis change is congruent to the identity modulo the radical, therefore the obtained set of vectors is again a basis.
Clearly the matrix of φ_1(w_i) (i = 1, ..., 4) on the basis part {e_1, ..., e_48} coincides with the matrix of w_i in the initial basis of weight vectors. Since the h_i(-1) are the squares of the w_i, their images are not changed in the new basis. Besides, we know that every matrix φ_1(w_i) is block-diagonal with respect to the decomposition of the basis into the first 48 and the last 4 elements. Therefore the last part of the basis, consisting of 4 elements, can be changed independently. Initially (in the basis of weight vectors) the w_i on this basis part are


w1:
-1 1 0 0
 0 1 0 0
 0 0 1 0
 0 0 0 1

w2:
1  0 0 0
1 -1 1 0
0  0 1 0
0  0 0 1

w3:
1 0  0 0
0 1  0 0
0 2 -1 1
0 0  0 1

w4:
1 0 0  0
0 1 0  0
0 0 1  0
0 0 1 -1





We have the following conditions for these elements (on the given basis part): (1) w_i^2 = E for all i; (2) w_i and w_j commute for |i - j| > 1; (3) w_1 w_2 and w_3 w_4 have order 3, and w_2 w_3 has order 4. Therefore the images φ_1(w_i) satisfy the same conditions. Besides, we know that these images are congruent to the initial w_i modulo the radical J.
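These relations can be checked mechanically. A sketch, assuming the four 4 × 4 matrices as printed above (they are the reflections s_i acting on the coroot basis h_1, ..., h_4): the squares are the identity, non-adjacent generators commute, and the braid orders 3, 4, 3 of the F4 Coxeter diagram hold.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I4 = [[int(i == j) for j in range(4)] for i in range(4)]

w1 = [[-1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
w2 = [[1, 0, 0, 0], [1, -1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
w3 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 2, -1, 1], [0, 0, 0, 1]]
w4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, -1]]

def order(M, limit=10):
    P = M
    for k in range(1, limit + 1):
        if P == I4:
            return k
        P = mat_mul(P, M)
    raise ValueError("order exceeds limit")

assert all(mat_mul(w, w) == I4 for w in (w1, w2, w3, w4))   # condition (1)
assert mat_mul(w1, w3) == mat_mul(w3, w1)                   # condition (2)
assert mat_mul(w1, w4) == mat_mul(w4, w1)
assert mat_mul(w2, w4) == mat_mul(w4, w2)
assert order(mat_mul(w1, w2)) == 3                          # condition (3)
assert order(mat_mul(w3, w4)) == 3
assert order(mat_mul(w2, w3)) == 4   # double bond of F4
```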



Let us make the basis change with the matrix which is a product of the (mutually commuting) matrices

1 1/2 0 0        1 0 0 0
0  1  0 0        0 1 0 0
0  0  1 0   and  0 1 1 1
0  0  0 1        0 0 0 2

In this basis

w1 = diag[-1, 1, 1, 1],   w3 = diag[1, 1, -1, 1],

     1/2 1/4 -1/2 -1/2         1   0   0    0
w2 =  1  1/2   1    1     w4 = 0   1   0    0
     -1  1/2   0   -1          0 -1/2 1/2  3/2
      0   0    0    1          0  1/2 1/2 -1/2





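The effect of this basis change can be verified by direct conjugation. A sketch, assuming the matrices as printed (C1 carries the 1/2 in position (1,2), C2 the lower block; C = C1 C2, and each w_i is replaced by C^{-1} w_i C):

```python
from fractions import Fraction as F

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

h = F(1, 2)
w1 = [[-1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
w2 = [[1, 0, 0, 0], [1, -1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
w3 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 2, -1, 1], [0, 0, 0, 1]]
w4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, -1]]

C1 = [[1, h, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
C2 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 1, 1, 1], [0, 0, 0, 2]]
assert mat_mul(C1, C2) == mat_mul(C2, C1)   # the two factors commute
C = mat_mul(C1, C2)

# Inverse of C, checked below rather than computed symbolically.
Cinv = [[1, -h, 0, 0], [0, 1, 0, 0], [0, -1, 1, -h], [0, 0, 0, h]]
I4 = [[F(i == j) for j in range(4)] for i in range(4)]
assert mat_mul(C, Cinv) == I4

conj = lambda w: mat_mul(Cinv, mat_mul(w, C))
assert conj(w1) == [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
assert conj(w3) == [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]
assert conj(w2) == [[h, F(1, 4), -h, -h], [1, h, 1, 1],
                    [-1, h, 0, -1], [0, 0, 0, 1]]
assert conj(w4) == [[1, 0, 0, 0], [0, 1, 0, 0],
                    [0, -h, h, F(3, 2)], [0, h, h, -h]]
```

So w1 and w3 become diagonal with a single -1, exactly as used in the argument below.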
Consider now the images φ_1(w_i) in the changed basis. All these images are involutions, each of them has exactly one -1 in its diagonal form, and φ_1(w_1) and φ_1(w_3) commute. Hence we can choose a basis (equivalent to the previous one modulo J) in which φ_1(w_1) and φ_1(w_3) have diagonal form with one -1 in the corresponding places. Consider now where w_4 can move under this basis change. Since φ_1(w_4) commutes with φ_1(w_1), has order two, and is congruent to w_4 modulo the radical, we have

           1 0 0 0
φ_1(w_4) = 0 a b c
           0 d e f
           0 g h i



Use the facts that φ_1(w_4)^2 = E and that φ_1(w_3 w_4) has order 3. Then we obtain

ad + de + fg = 0,   ad - de + fg = -d,

therefore 2de = d, and since d ≡ -1/2 (mod J), we have e = 1/2. Moreover,

ag + dh + gi = 0,   ag - dh + gi = g,

consequently 2g(a + i) = g, i.e., a + i = 1/2. Make now a basis change with the matrix



1 0 0    0
0 1 0 (a-1)/g
0 0 1    0
0 0 0    1

This change does not move the elements φ_1(w_1) and φ_1(w_3), and φ_1(w_4) now has the form

1 0  0    0
0 1  b    c
0 d 1/2   f
0 g  h  -1/2





Using the above conditions, we obtain the equation bg + eh + hi = 0, consequently bg = 0, i.e., b = 0. In this case from a^2 + bd + cg = 1 it follows that c = 0. All the other conditions give the system

fg = -(3/2)d,   dh = -g/2,   fh = -1/4.

Clearly, with a diagonal basis change (which does not move φ_1(w_1) and φ_1(w_3)) we can come to a basis where φ_1(w_4) has the same form as w_4 had after our first basis change. Making now the inverse basis change, we obtain that φ_1(w_1), φ_1(w_3) and φ_1(w_4) have the same form as w_1, w_3, w_4, respectively. Look now at φ_1(w_2). Since φ_1(w_2) commutes with φ_1(w_4), we have

            a   b   c    0
φ_1(w_2) =  d   e   f    0
            g   i   h    0
           g/2 i/2  k  h - 2k

Since (h - 2k)^2 = 1, we have h - 2k = 1. Now, similarly to the consideration of φ_1(w_4), we use the conditions on φ_1(w_2). After a suitable diagonal change we get φ_1(w_i) = w_i on the last basis part. Therefore we can now pass from the isomorphism φ_1 under consideration to an isomorphism φ_2 with all the properties of φ_1 and such that φ_2(w_i) = w_i for all i = 1, ..., 4. We suppose now that an isomorphism φ_2 with all these properties is given.

3. Images of x_{α_i}(1) and diagonal matrices

Let us write out the matrices w_i, i = 1, ..., 4:

w1 = -e_{α1,-α1} - e_{-α1,α1} + e_{α2,α5} + e_{-α2,-α5} - e_{α5,α2} - e_{-α5,-α2} + e_{α3,α3} + e_{-α3,-α3}
   + e_{α4,α4} + e_{-α4,-α4} + e_{α6,α8} + e_{-α6,-α8} - e_{α8,α6} - e_{-α8,-α6} + e_{α7,α7} + e_{-α7,-α7}
   + e_{α9,α11} + e_{-α9,-α11} - e_{α11,α9} - e_{-α11,-α9} + e_{α10,α12} + e_{-α10,-α12} - e_{α12,α10} - e_{-α12,-α10}
   + e_{α13,α15} + e_{-α13,-α15} - e_{α15,α13} - e_{-α15,-α13} + e_{α14,α14} + e_{-α14,-α14}
   + e_{α16,α18} + e_{-α16,-α18} - e_{α18,α16} - e_{-α18,-α16} + e_{α17,α17} + e_{-α17,-α17}
   + e_{α19,α19} + e_{-α19,-α19} + e_{α20,α20} + e_{-α20,-α20} + e_{α21,α21} + e_{-α21,-α21}
   + e_{α22,α22} + e_{-α22,-α22} + e_{α23,α24} + e_{-α23,-α24} - e_{α24,α23} - e_{-α24,-α23}
   - e_{h1,h1} + e_{h1,h2} + e_{h2,h2} + e_{h3,h3} + e_{h4,h4};

w2 = -e_{α2,-α2} - e_{-α2,α2} + e_{α1,α5} + e_{-α1,-α5} - e_{α5,α1} - e_{-α5,-α1} - e_{α3,α6} - e_{-α3,-α6}
   + e_{α6,α3} + e_{-α6,-α3} + e_{α4,α4} + e_{-α4,-α4} + e_{α7,α9} + e_{-α7,-α9} - e_{α9,α7} - e_{-α9,-α7}
   + e_{α8,α8} + e_{-α8,-α8} + e_{α10,α10} + e_{-α10,-α10} + e_{α11,α11} + e_{-α11,-α11}
   + e_{α12,α14} + e_{-α12,-α14} - e_{α14,α12} - e_{-α14,-α12} + e_{α13,α13} + e_{-α13,-α13}
   + e_{α15,α17} + e_{-α15,-α17} - e_{α17,α15} - e_{-α17,-α15} + e_{α16,α16} + e_{-α16,-α16}
   + e_{α18,α20} + e_{-α18,-α20} - e_{α20,α18} - e_{-α20,-α18} + e_{α19,α19} + e_{-α19,-α19}
   + e_{α21,α21} + e_{-α21,-α21} + e_{α22,α23} + e_{-α22,-α23} - e_{α23,α22} - e_{-α23,-α22}
   + e_{α24,α24} + e_{-α24,-α24} + e_{h1,h1} + e_{h2,h1} - e_{h2,h2} + e_{h2,h3} + e_{h3,h3} + e_{h4,h4};

w3 = e_{α1,α1} + e_{-α1,-α1} + e_{α2,α10} + e_{-α2,-α10} + e_{α10,α2} + e_{-α10,-α2} - e_{α3,-α3} - e_{-α3,α3}
   + e_{α4,α7} + e_{-α4,-α7} - e_{α7,α4} - e_{-α7,-α4} + e_{α5,α12} + e_{-α5,-α12} + e_{α12,α5} + e_{-α12,-α5}
   - e_{α6,α6} - e_{-α6,-α6} - e_{α8,α8} - e_{-α8,-α8} + e_{α9,α13} + e_{-α9,-α13} - e_{α13,α9} - e_{-α13,-α9}
   + e_{α11,α15} + e_{-α11,-α15} - e_{α15,α11} - e_{-α15,-α11} + e_{α14,α14} + e_{-α14,-α14}
   + e_{α16,α16} + e_{-α16,-α16} + e_{α17,α19} + e_{-α17,-α19} - e_{α19,α17} - e_{-α19,-α17}
   + e_{α18,α18} + e_{-α18,-α18} + e_{α20,α22} + e_{-α20,-α22} + e_{α22,α20} + e_{-α22,-α20}
   - e_{α21,α21} - e_{-α21,-α21} + e_{α23,α23} + e_{-α23,-α23} + e_{α24,α24} + e_{-α24,-α24}
   + e_{h1,h1} + e_{h2,h2} + 2e_{h3,h2} - e_{h3,h3} + e_{h3,h4} + e_{h4,h4};

w 4 = e 1 ,-1 + e -1 ,1 + e 2 ,-2 + e -2 ,2 - e 3 ,7 - e -3 ,-7 + e 7 ,3 + e -7 ,

3

- e

4

,-

4

- e -4 ,4 + e 5 ,5 + e -
8

5

,-
11

5

+ e 6 ,9 + e -
8

6

,-

9

- e 9 ,6 - e -
,-
16

9

,-

6

+ e 8 ,11 + e - + e -
16

,-

11

- e 11 ,8 - e -
12

,-

+ e 10 ,16 + e -
18

10

+ e 16 ,

10

,-

10

+ e 12 ,18 + e -
15

,-

18

+ e 18 ,12 + e -
17

,-

12

--e 13 ,13 - e -
14

13

,-

13

- e 15 ,15 - e - + e 20 ,14 + e - + e - +e
22

,- ,-

15

- e 17 ,17 - e - + e 19 ,21 + e -
23

,- ,-

17

+ e 14 ,20 + e - - e 21 ,19 - e -
24

,- ,-

20

20

14

19

21

21

19

+ e 22 ,

22

,-

22

+ e 23 ,23 + e -
2

,-
3

23

+ e 24 ,24 + e -
h4 ,h
4

,-

24

h1 ,h

1

+e

h2 ,h

+e

h3 ,h

3

+e

h4 ,h

-e

.

2 Besides that, x1 (t ) = E + tX 1 + t 2 X 1 /2, where

X 1 = 2e 1 ,h1 - e 1 ,h2 - eh1 ,-1 + e 5 ,2 - e -2 ,-5 + e 8 ,6 - e -6 ,-

8

+ e 11 ,9 - e -

9

,-

11

+ e 12 ,10 - e -

10

,-
23

12

+ e 15 ,13 - e -
24

13

,-

15

+ e 18 ,16 - e -
2 x3 (t ) = E + tX 3 + t 2 X 3 /2, where

16

,-

18

+ e 24 ,23 - e -

,-

;

X 3 =-2e 3 ,h2 + 2e 3 ,h3 - e 3 ,h4 - eh3 ,-3 + e 7 ,4 - e -4 ,-

7

+ e 13 ,9 - e -

9

,-

13

+ e 15 ,11 - e -

11

,-

15

+ e 19 ,17 - e - - 2e 8 ,5 + e -
5

17

,-
8

19

- 2e 6 ,2 + e -

2

,-

6

- e 10 ,6 + 2e -

6

,-
20

10

,-

- e 12 ,8 + 2e -8 ,12 - 2e 21 ,20 + e -

,-

21

- e 22 ,21 + 2e -

21

,-

22

.

We are interested in images of xi (t ). Let 2 (x1 (1)) = x1 = ( y i , j ). Since x1 commutes with all hi (-1), i = 1, 3, 4, and also with w 3 , w 4 , and w 14 , then by direct calculus we obtain:


Author's personal copy
2278 E.I. Bunina / Journal of Algebra 323 (2010) 2270­2289

1. The matrix x1 can be decomposed into following eight diagonal blocks:

B 1 ={ v 1 , v -1 , v

14

,v

-14 , v 20 , v -20 , v 22 , v -22 , V 1 , V 2 , V 3 , V 4
10

}; };

B 2 ={ v 2 , v -2 , v 5 , v -5 , v B 3 ={ v 3 , v -3 , v B 4 ={ v 4 , v -4 , v
21 17

,v

-10 , v 16 , v -16 , v 18 , v -18 , v 23 , v -23 , v 24 , v -24

,v ,v

-21 -17

}; };

B 5 ={ v 6 , v -6 , v 8 , v -8 }; B 6 ={ v 7 , v -7 , v B 7 ={ v 9 , v -9 , v B 8 ={ v
13 19 11

,v ,v

-19 -11

}; }; }.

,v

-13 , v 15 , v -15

2. On the block B 1 the matrix x1 has the form

y y2 - y3 y3 - y3 y3 - y3 y3 -2 y 4 y4 1 y5 y6 - y7 y7 - y7 y7 - y7 y7 -2 y 8 y8 y9 y 10 y 11 y 12 - y 13 y 13 - y 13 y 13 -2 y 14 + 2 y 15 y 14 y 11 y 13 - y 13 y 13 - y 13 2 y 14 - 2 y 15 - y 14 + 2 y - y 9 - y 10 y 12 y9 y 10 - y 13 y 13 y 11 y 12 - y 13 y 13 2(- y 14 + y 15 ) y 14 - y - y y 13 - y 13 y 12 y 11 y 13 - y 13 2( y 14 - y 15 ) - y 14 + 2 y 10 9 y9 y 10 - y 13 y 13 - y 13 y 13 y 11 y 12 2(- y 14 + y 15 ) - y 14 + 2 y - y 9 - y 10 y 13 - y 13 y 13 - y 13 y 12 y 11 2( y 14 - y 15 ) - y 14 y 16 y 17 - y 18 y 18 - y 18 y 18 - y 18 y 18 y 19 - 2 y 20 y 20 0 0 0 0 0 0 0 0 0 y 20 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0

0 0 0
15

0

0

15 15

-y -y
y y

15 15

15 15

0 0 y
20

- y 15 y 15 y 15 y 15 . 0 0 0 0 0
y
20

0

0

3. On the block B 2 it is
y 21 y 29 y 32 y 24 - y 25 - y 25 - y 35 - y 28 - y 25 - y 25 - y 35 - y 28 - y 28 - y 35
y y
25 25

y 22 y 30 y 31 y 23 - y 26 - y 33 - y 34 - y 27 - y 26 - y 33 - y 34 - y 27 - y 27 - y 34 y 33 y 26

-y -y

23 31

-y -y

24 32

y 30 y 22 y 27 y 34 - y 33 - y 26 y 27 y 34 - y 33 - y 26 - y 26 - y 33 - y 34 - y 27

y 29 y 21 y 28 y 35 - y 25 - y 25 y 28 y 35 - y 25 - y 25 - y 25 - y 25 - y 35 - y 28

- - - -

y 25 y 25 y 35 y 28 y 21 y 29 y 32 y 24 - y 25 - y 25 - y 35 - y 28 - y 28 - y 35 y 25 y 25

- - - -

y 26 y 33 y 34 y 27 y 22 y 30 y 31 y 23 - y 26 - y 33 - y 34 - y 27 - y 27 - y 34 y 33 y 26

y 27 y 34 - y 33 - y 26 - y 23 - y 31 y 30 y 22 y 27 y 34 - y 33 - y 26 - y 26 - y 33 - y 34 - y 27

y 28 y 35 - y 25 - y 25 - y 24 - y 32 y 29 y 21 y 28 y 35 - y 25 - y 25 - y 25 - y 25 - y 35 - y 28

- - - - - - - -

y 25 y 25 y 35 y 28 y 25 y 25 y 35 y 28 y 21 y 29 y 32 y 24 - y 28 - y 35 y 25 y 25

- - - - - - - -

y 26 y 33 y 34 y 27 y 26 y 33 y 34 y 27 y 22 y 30 y 31 y 23 - y 27 - y 34 y 33 y 26

y 27 y 34 - y 33 - y 26 y 27 y 34 - y 33 - y 26 - y 23 - y 31 y 30 y 22 - y 26 - y 33 - y 34 - y 27

y 28 y 35 - y 25 - y 25 y 28 y 35 - y 25 - y 25 - y 24 - y 32 y 29 y 21 - y 25 - y 25 - y 35 - y 28

y 28 y 35 - y 25 - y 25 y 28 y 35 - y 25 - y 25 y 28 y 35 - y 25 - y 25 y 21 y 29 y 32 y 24

y 27 y 34 - y 33 - y 26 y 27 y 34 - y 33 - y 26 y 27 y 34 - y 33 - y 26 y 22 y 30 y 31 y 23

y 26 y 33 y 34 y 27 y 26 y 33 y 34 y 27 y 26 y 33 y 34 y 27 - y 23 - y 31 y 30 y 22

y 25 y 28 y 35 y 28 y 25 y 25 y 35 y 28 y 25 y 25 y 35 y 28 - y 24 - y 32 y 29 y 21

.

4. On the blocks B 3 , B 4 , B 6 it has the form



y 36 y 37 - y 38 - y 38

y 37 y 36 - y 38 - y 38

y y y y

38 38 36 37

y y y y

38 38 37 36

.


Author's personal copy
E.I. Bunina / Journal of Algebra 323 (2010) 2270­2289 2279

5. Finally, on the blocks B 5 , B 7 , B 8 it is



y 39 y 43 - y 46 - y 42
Let now

y 40 y 44 - y 45 - y 41

y y y y

41 45 44 39

y y y y

42 46 43 40

.

also for w 13 we have w 13 x4 w -1 = x-1 = h3 (-1)x4 h3 (-1), then by direct calculation we obtain: 13 4 1. The matrix x4 can be decomposed into following eight diagonal blocks:

2 (x4 (1)) = x4 = (zi, j ). Since x4 commutes with all hi (-1), i = 1, 2, 4, and w 1 , w 2 , and

B 1 ={ v 4 , v -4 , V 1 , V 2 , V 3 , V 4 }; B 2 ={ v 1 , v -1 , v B 3 ={ v 2 , v -2 , v B 4 ={ v 5 , v -5 , v
14 10 12

,v ,v ,v

-14 , v 17 , v -17 , v 20 , v -20 , v 22 , v -22 -10 , v 13 , v -13 , v 16 , v -16 , v 24 , v -24 -12 , v 15 , v -15 , v 18 , v -18 , v 23 , v -23

}; }; };

B 5 ={ v 6 , v -6 , v 9 , v -9 }; B 6 ={ v 3 , v -3 , v 7 , v -7 }; B 7 ={ v 8 , v -8 , v B 8 ={ v
19 11

,v

-11

}; }.

,v

-19 , v 21 , v -21

2. On the first block the matrix x4 has the form



z1 z4 0 0 0 z8

z2 z5 0 0 0 z9

0 0 z7 0 0 0

0 0 0 z7 0 0

z3 z6 0 0 z7 z10

-2 z -2 z

3 6

.
10

0 0 0 z7 - 2 z

3. On the second, third and fourth blocks it is



z11 z12 - z17 z24 - z31 - z31 - z24 z17 - z16 z16

z12 z11 z17 - z24 z31 z31 z24 - z17 z16 - z16

-z

13

-z

14

z13 z18 z25 z32 - z37 z30 z23 z13 - z13

z14 z19 z26 z33 - z36 z29 z22 z14 - z14

z15 - z15 z20 z27 z34 z35 - z28 - z21 - z15 z15

z15 - z15 z21 z28 z35 z34 - z27 - z20 - z15 z15

z14 - z14 z22 z29 z36 - z33 z26 z19 - z14 z14

z13 - z13 z23 z30 z37 - z32 z25 z18 - z13 z13

-z

16

z16 z17 - z24 z31 z31 z24 - z17 z11 z12

z16 - z16 - z17 z24 - z31 - z31 - z24 z17 z12 z11

.

4. On all other blocks x4 has the form



z38 z42 - z45 - z41

z39 z43 - z44 - z40

z z z z

40 44 43 39

z z z z

41 45 42 38

.


Author's personal copy
2280 E.I. Bunina / Journal of Algebra 323 (2010) 2270­2289

Therefore, we have 85 variables y 1 ,..., y 40 , z1 ,..., z45 , where y 1 , y 6 , y 11 , y 20 , y 21 , y 30 , y 32 , y 36 , y 39 , y 44 , z1 , z5 , z7 , z11 , z18 , z26 , z28 , z30 , z38 , z34 , z43 , z45 are 1 modulo radical, y 2 , y 4 , y 17 , y 46 , z2 , z3 , z9 are -1 modulo radical, z32 is -2 modulo radical, all other variables are from radical. We apply step by step four basis changes, commuting with each other and with all matrices w i . These changes are represented by matrices C 1 , C 2 , C 3 , C 4 . Matrices C 1 and C 2 are block-diagonal, where first 24 blocks have the size 2 â 2, the last block is 4 â 4. On all 2 â 2 blocks, corresponding to short roots, the matrix C 1 is unit, on all 2 â 2 blocks, corresponding to long roots, it is

1

- y 16 / y

- y 16 / y
17

17

1

.

On the last block it is unit. Similarly, C 2 is unit on the blocks corresponding to long roots, and on the last block. On the blocks corresponding to the short roots, it is

1

- z8 / z

- z8 / z
9

9

1

.

Matrices C 3 and C 4 are diagonal, identical on the last 4 â 4 block, the matrix C 3 is identical on all places, corresponding to short root, and scalar with multiplier a on all places corresponding to long roots. In the contrary, the matrix C 4 , is identical on all places, corresponding to long roots, and is scalar with multiplier b on all places, corresponding to short roots. Since all these four matrices commutes with all w i , i = 1, 2, 3, 4, then after basis change with any of these matrices all conditions for elements x1 and x4 still hold. At the beginning we apply basis changes with the matrices C 1 and C 2 . After that new y 16 in the matrix x1 and z8 in the matrix x4 are equal to zero (for the convenience of notations we do not change names of variables). Then we choose a = -1/ y 17 (it is new y 17 ) and apply the third basis change. After it y 17 in the matrix x1 becomes to be -1. Clear that y 16 is still zero. Finally, apply the last basis change with b = -1/ z9 (where z9 is the last one, obtained after all previous changes). We have that y 16 , y 17 , z8 are not changed, and z9 is now -1. Now we can suppose that y 16 = 0, y 17 =-1, z8 = 0, z9 =-1, we have now just 81 variables. From the fact that x1 and x4 commute (cond. 1), it directly follows y 37 = y 38 = 0, y 36 = y 20 . From the condition h2 (-1)x1 h2 (-1)x1 = E (cond. 2, its position (52, 52)) follows that y 2 = 1, 20 consequently y 20 = 1. From the condition w 2 x1 w -1 x1 = x1 w 2 (1)x1 w 2 (1)-1 (cond. 3, the position (50, 10)) it follows 2 y 21 = 1, from its position (49, 10) it follows y 19 = 0. The condition w 2 w 3 w 2 x1 w -1 w 3 (1) w -1 x1 = x1 w 2 w 3 w 2 x1 w -1 w -1 w -1 (cond. 4, the position 2 2 2 3 2 (51, 52)) implies y 15 = 0. Again from cond. 3 (the position (18, 13)) we have y 46 ( y 45 + y 42 ) = 0, whence y 45 =- y 42 . From cond. 2 (the positions (11, 12) and (12, 11)) we obtain y 40 ( y 39 + y 44 ) = 0 and y 43 ( y 39 + y 44 ) = 0, therefore y 40 = y 43 = 0. 
After that in the same condition the position (12, 16) gives y 44 = y 39 . The position (12, 16) of cond. 3 now gives us y 46 ( y 39 - 1) = 0 y 39 = 1. In the condition h3 (-1)x4 h3 (-1)x4 = E (cond. 5) the position (8, 7) gives z4 = 0, the position (7, 7) gives z1 = 1; (51, 51) gives z7 = 1; In the condition w 3 x4 x-1 x4 = x4 w 3 x4 x-1 (cond. 6) the position (51, 5) gives z41 = 0, the position 3 3 (51, 6) gives z40 = 0, the position (52, 7) gives z39 = 0, the position (51, 8) gives z10 = 0, the position (52, 8) gives z38 = 1. Again from cond. 5 (positions (52, 52), (52, 8), (7, 8)) we obtain z6 = 0, z5 = 1, z2 = z3 . Returning to cond. 6, from (13, 51) we have z43 = 1, from (5, 51) we have z44 = 0, from (5, 14) we have z42 = 0, from (12, 17) we have z35 = 0, from (12, 18) we have z34 = 1, from (12, 19) -- z37 =- z31 , from (12, 20) -- z36 = z31 , from (9, 15) -- z20 =- z15 , and from (10, 15) -- z27 = z15 . The position (11, 22) of cond. 1 now gives us y 42 = 0, and the position (11, 11) of cond. 2 gives y 41 = 0.


Author's personal copy
E.I. Bunina / Journal of Algebra 323 (2010) 2270­2289 2281

Considering x1+2 = 2 (x1 +2 (1)) = w 2 x1 w -1 , x2 = (x2 (1)) = w 1 x1+2 w 1 and cond. 7: x1 x2 = 2 x1+2 x2 x1 (the position (6, 16)), we obtain y 46 =-1. Similarly, considering x3+4 = 2 (x3 +4 (1)) = w 3 x4 w -1 , x3 = (x3 (1)) = w 4 x3+4 w -1 , and 3 4 cond. 8: x3 x4 = x3+4 x4 x3 (applying positions (51, 14), (13, 52), (12, 11), (29, 9), (15, 35), (15, 36), (16, 36), (12, 19), (12, 20), (11, 25), (12, 26), (10, 30), (47, 11), (1, 2), (1, 1), (4, 4), (3, 4), (3, 18), (3, 17), (4, 17), (4, 3), (3, 3), (18, 3)), we obtain z45 = 1, z3 =-1, z31 = 0, z32 =-2, z14 = 0, z13 = 0, z30 = 1, z25 = 0, z26 = 1, z15 = 0, z28 = 1, z24 = 0, z16 = 0, z12 = 0, z11 = 1, z17 = 0, z19 = 0, z21 = 0, z22 = 0, z29 = 0, z23 = 0, z18 = 1, z33 = 0, respectively. Therefore we obtain that x4 = x4 (1). Directly from the first condition we now have y 3 = y 7 = y 27 = y 25 = y 34 = y 26 = y 33 = y 28 = y 35 = y 22 = y 24 = y 29 = y 31 = y 12 = y 13 = y 9 = y 10 = y 23 = y 18 = y 14 = 0, y 30 = y 32 = y 11 = 1. Finally, from cond. 3 we get y 5 = 0, y 6 = 1, y 1 = 1, y 8 = 0, y 4 =-1, from cond. 2 we get y 2 =-1. Now x1 = x1 (1), it is what we needed. Since all long (and all short) roots are conjugate under the action of Weil group, it means that 2 (x (1)) = x (1) for all . Consider now the matrix dt = 2 (h4 (t )). Lemma 1. The matrix dt is h4 (s) for some s R . Proof. Since the matrix dt commutes with h (-1) for all lowing diagonal blocks:

, then dt is decomposed to the fol-

D1 = {v1, v-1, v14, v-14, v20, v-20, v22, v-22},
D2 = {v2, v-2, v10, v-10, v16, v-16, v24, v-24},
D3 = {v3, v-3}, D4 = {v4, v-4},
D5 = {v5, v-5, v12, v-12, v18, v-18, v23, v-23},
D6 = {v6, v-6}, D7 = {v7, v-7}, D8 = {v8, v-8}, D9 = {v9, v-9},
D10 = {v11, v-11}, D11 = {v13, v-13}, D12 = {v15, v-15},
D13 = {v17, v-17}, D14 = {v19, v-19}, D15 = {v21, v-21},
D16 = {V1, V2, V3, V4}.

Using the fact that dt commutes with w1, w2, and x1, we obtain that on the blocks D1, D2, D5 the matrix dt has the form

t1  0   0   0   0   0   0   0
0   t1  0   0   0   0   0   0
0   0   t8  0   t11 0   0   0
0   0   0   t10 0   t9  0   0
0   0   t9  0   t10 0   0   0
0   0   0   t11 0   t8  0   0
0   0   0   0   0   0   t1+2t13  0
0   0   0   0   0   0   0   t1 ;



on the blocks D3, D6, D8, D14 it is diag[t2, t3]; on the blocks D7, D9, D10, D15 it is diag[t3, t2]; on the block D4 it is

t4  t5
t6  t7 ;

on the blocks D11, D12, D13 it has the form diag[t12, t12]; and on the last block it is

t1  0   0   0
0   t1  0   0
0   0   t1  t13
0   0   0   t1-2t13 .
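The computations that follow repeatedly solve equations of this kind in a local ring with 1/2: for instance, (1 - 2t)^2 = 1 rewrites as t(t - 1) = 0, and since t or t - 1 must be a unit, t is 0 or 1. A quick sanity check in the hypothetical toy local ring Z/9Z (where 2 is invertible, with inverse 5):

```python
# In the local ring Z/9Z (maximal ideal (3), 1/2 = 5) the equation
# (1 - 2t)^2 = 1 reduces to t*(t - 1) = 0; since t or t - 1 is a unit,
# the only solutions are t = 0 and t = 1.
n = 9
sols = [t for t in range(n) if (1 - 2 * t) ** 2 % n == 1]
assert sols == [0, 1]
```

The same mechanism lies behind conclusions such as "t7(t5 + t6) = 0, whence t6 = -t5": a unit factor is never a zero divisor.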

Using the condition w4 dt w4^{-1} dt = E, we obtain: from the position (1, 1) it follows that t1^2 = 1, consequently t1 = 1; from (52, 52) it follows that (1 - 2t13)^2 = 1, therefore t13 = 0; (5, 5) implies t3 = 1/t2; (7, 8) implies t7(t5 + t6) = 0, whence t6 = -t5; from (24, 36) we have t8(t9 + t11) = 0, therefore t11 = -t9; from (26, 26) we have t12^2 = 1, and then t12 = 1.

Now consider the condition w3 dt w3^{-1} = dt w4 w3 dt w3^{-1} w4^{-1}. Its position (13, 14) gives t5 = 0, the position (5, 5) gives t4 = 1/t2^2, (6, 6) gives t7 = t2^2; (3, 19) gives t9 = 0; (19, 19) gives t10 = 1/t8.

Finally, introduce φ2(h3(t)) = w4 w3 dt w3^{-1} w4^{-1}, φ2(h6(t)) = w2 φ2(h3(t)) w2^{-1}, φ2(h10(t)) = φ2(h6(t)) φ2(h3(t)). Since φ2(h10(t)) commutes with x8(1), we obtain (the position (9, 6)) that t8 = t2^2. Therefore, φ2(h4(t)) = h4(1/t2), and the lemma is proved. □

Clearly, this lemma also holds for the images of all h_α(t), α ∈ Φ.

4. Images of xi(t), proof of Theorem 1

We have shown that φ2(h_α(t)) = h_α(s), α ∈ Φ. Denote the mapping t → s by ρ: R* → R*. Note that for t ∈ R*

φ2(x1(t)) = φ2(h2(t^{-1}) x1(1) h2(t)) = h2(s^{-1}) x1(1) h2(s) = x1(s).

If t ∉ R*, then t ∈ J, i.e., t = 1 + t1, where t1 ∈ R*. Then φ2(x1(t)) = φ2(x1(1) x1(t1)) = x1(1) x1(ρ(t1)) = x1(1 + ρ(t1)). Therefore, if we extend the mapping ρ to the whole of R (by the formula ρ(t) := 1 + ρ(t - 1), t ∈ J), we obtain φ2(x1(t)) = x1(ρ(t)) for all t ∈ R. Clear that ρ is injective, additive, and also multiplicative on all invertible elements. Since every element of R is a sum of two invertible elements, we obtain that ρ is an isomorphism from the ring R onto some subring R' of it. Note that in this situation C G(R) C^{-1} = G(R') for some matrix C ∈ GL(V). Let us show that R' = R. Denote the matrix units by Eij.

Lemma 2. The Chevalley group G(R) generates the matrix ring Mn(R).

Proof. The matrix (x1(1) - 1)^2 has a unique nonzero element -2 · E12. Multiplying it by suitable diagonal matrices, we can obtain an arbitrary matrix of the form a · E12 (since -2 ∈ R* and R* generates R). The Weyl group acts transitively on all roots of the same length, i.e., for every long root αk there exists w ∈ W such that w(α1) = αk; then the matrix E12 · w has the form E_{1,2k}, and the matrix w^{-1} · E12 has the form E_{2k-1,2}. Besides, with the help of the Weyl group element moving the first root to the opposite one, we can get the matrix unit E_{2,1}.

Taking now different combinations of the obtained elements, we can get an arbitrary element Eij, 1 ≤ i, j ≤ 48, where the indices i, j correspond to the numbers of long roots. The matrix (x4(1) - 1)^2 is -2E_{7,8} + 2E_{20,32} + 2E_{24,36} + 2E_{28,40} + 2E_{31,19} + 2E_{35,23} + 2E_{39,27}. All matrix units in this sum, except the first one, are already obtained, therefore we can subtract them and get E_{7,8}. Similarly to the long roots, using the fact that all short roots are also conjugate under the action of the Weyl group, we obtain all Eij, 1 ≤ i, j ≤ 48, where the indices i, j correspond to the short roots.
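The mechanism of this proof of Lemma 2 -- subtracting the identity from a unipotent to obtain a matrix unit, then multiplying matrix units to reach the others -- can be seen in miniature for SL2 over the toy local ring Z/9Z (a sketch for illustration only, not the 52-dimensional computation of the text):

```python
# Toy version of Lemma 2: unipotents of SL2(Z/9Z) already yield all
# matrix units of M2(Z/9Z), hence generate the full matrix ring.
n = 9

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % n
                       for j in range(2)) for i in range(2))

def sub(A, B):
    return tuple(tuple((a - b) % n for a, b in zip(ra, rb))
                 for ra, rb in zip(A, B))

E = ((1, 0), (0, 1))
e12 = sub(((1, 1), (0, 1)), E)   # x_+(1) - E
e21 = sub(((1, 0), (1, 1)), E)   # x_-(1) - E
assert mul(e12, e21) == ((1, 0), (0, 0))   # e11 = e12 * e21
assert mul(e21, e12) == ((0, 0), (0, 1))   # e22 = e21 * e12
```

In the text the same idea needs (x_α(1) - 1)^2 rather than x_α(1) - 1, because in the 52-dimensional adjoint representation the first power still contains several nonzero entries.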



Now subtract from the matrix x1(1) - 1 suitable matrix units and obtain the matrix E_{49,2} - 2E_{1,49} + E_{1,50}. Multiplying it (from the right) by E_{2,i}, 1 ≤ i ≤ 48, where i corresponds to a long root, we obtain all E_{49,i}, 1 ≤ i ≤ 48, for i corresponding to the long roots. Multiplying these last elements from the left by w2, we obtain E_{50,i}, 1 ≤ i ≤ 48, for i corresponding to the long roots; then, multiplying them from the left by w3, we obtain all E_{51,i}, 1 ≤ i ≤ 48, for i corresponding to the long roots, and, similarly, E_{52,i}. Therefore, now we have all E_{i,j}, 49 ≤ i ≤ 52, 1 ≤ j ≤ 48, where j corresponds to the long roots. Then

A = 1/8 (h1(-1) + E) ... (h4(-1) + E) = E_{49,49} + E_{50,50} + E_{51,51} + E_{52,52},
B = A (w1 + ... + w4) A + 2A = E_{49,50} + E_{50,49} + E_{50,51} + 2E_{51,50} + E_{51,52} + E_{52,51},
C = B^2 - A = E_{49,51} + 2E_{50,50} + E_{50,52} + 2E_{51,49} + 2E_{51,51} + 2E_{52,50},
C^2 - B^2 = 2E_{52,50}.

So we have E_{52,50} and then all E_{i,j}, 48 < i, j ≤ 52, and therefore all E_{i,j}, 1 ≤ i ≤ 48, 48 < j ≤ 52, where i corresponds to the long roots. Then, taking the matrix x4(t) and multiplying it from the left and from the right by suitable matrix units E_{i,i}, we can obtain E_{i,j}, where i corresponds to a long root and j to a short one. After that it becomes clear how to get all the matrix units E_{i,j}, 1 ≤ i, j ≤ 48, with the help of the Weyl group. Finally, as above, we can obtain all E_{i,j}, 1 ≤ i ≤ 48, 48 < j ≤ 52, where i corresponds to the short roots, and so all matrix units. □

Lemma 3. If for some C ∈ GLn(R) we have C G(R) C^{-1} = G(R'), where R' is a subring of R, then R' = R.

Proof. Suppose that R' is a proper subring of R. Then C Mn(R) C^{-1} = Mn(R'), since the group G(R) generates the whole ring Mn(R) (the previous lemma), and the group G(R') = C G(R) C^{-1} generates the ring Mn(R'). This is impossible, since C ∈ GLn(R). □

Proof of Theorem 1. We have just proved that ρ is an automorphism of the ring R.
Consequently, the composition of the initial automorphism and some basis change with a matrix C ∈ GLn(R) (mapping G(R) into itself) is a ring automorphism. This proves Theorem 1. □

5. Theorem about normalizers and main theorem

To prove the main theorem of this paper (see Theorem 3 at the end of this section), we need the following important fact (which is of independent interest):

Theorem 2. Every automorphism-conjugation of a Chevalley group G(R) of type F4 over a local ring R with 1/2 is an inner automorphism.

Proof. Suppose that we have some matrix C = (c_{i,j}) ∈ GL52(R) such that

C · G · C^{-1} = G.
If J is the radical of R, then Mn(J) is the radical of the matrix ring Mn(R), therefore

C · Mn(J) · C^{-1} = Mn(J),
consequently,

C · (E + Mn(J)) · C^{-1} = E + Mn(J),
i.e.,

C · G(R, J) · C^{-1} = G(R, J),
since G(R, J) = G ∩ (E + Mn(J)).
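The displayed chain uses only that Mn(J) is the radical of Mn(R); in particular, E + Mn(J) really is a group. A small check in the hypothetical toy local ring Z/9Z with J = (3), for 2×2 matrices:

```python
from itertools import product

n, J = 9, (0, 3, 6)   # Z/9Z is local with radical J = (3); 1/2 = 5

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % n
                       for j in range(2)) for i in range(2))

E = ((1, 0), (0, 1))
group = {(((1 + a) % n, b), (c, (1 + d) % n))
         for a, b, c, d in product(J, repeat=4)}

# E + M2(J) is closed under multiplication, and every element is invertible
# (for X = E + A with A over J one has A^2 = 0 mod 9, so X^{-1} = E - A):
assert all(mul(X, Y) in group for X in group for Y in group)
assert all(any(mul(X, Y) == E for Y in group) for X in group)
```

This is only a sketch of the general fact; in the text the same closure and invertibility hold because the entries of the perturbation lie in the radical.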



Thus, the image C̄ of the matrix C under the factorization of R by J gives us an automorphism-conjugation of the Chevalley group G(k), where k = R/J is the residue field of R. But over a field every automorphism-conjugation of a Chevalley group of type F4 is inner (see [24]), therefore the conjugation by C̄ (denote it by i_{C̄}) is

i_{C̄} = i_{ḡ},

where ḡ ∈ G(k). Since over a field our Chevalley group (of type F4) coincides with its elementary subgroup, every element of it is a product of unipotents x_α(t), and the matrix ḡ can be decomposed into a product x_{i1}(Y1) ... x_{iN}(YN), Y1, ..., YN ∈ k. Since every element Y1, ..., YN is a residue class in R, we can choose (arbitrarily) elements y1 ∈ Y1, ..., yN ∈ YN, and the element

g = x_{i1}(y1) ... x_{iN}(yN)

satisfies g ∈ G(R), and the image of g modulo J is ḡ. Consider the matrix C' = g^{-1} d^{-1} C. This matrix also normalizes the group G(R), and C̄' = E. Therefore, from the description of the normalizer of G(R) we come to the description of all matrices of this normalizer that are equivalent to the unit matrix modulo J. Therefore we can suppose that our initial matrix C is equivalent to the unit matrix modulo J. Our aim is to show that C ∈ G(R).

Firstly we prove one technical lemma that we will need later.

Lemma 4. Let X = ε t1(s1) ... t4(s4) x1(t1) ... x24(t24) x_{-1}(u1) ... x_{-24}(u24) ∈ G(R, J). Then the matrix X has 53 coefficients (precisely described in the proof of the lemma) that uniquely define all ε, s1, ..., s4, t1, ..., t24, u1, ..., u24.

Proof. Consider the sequence of roots:

β1 = α1,
β2 = α5 = α1 + α2,
β3 = α8 = α1 + α2 + α3,
β4 = α12 = α1 + α2 + 2α3,
β5 = α15 = α1 + α2 + 2α3 + α4,
β6 = α17 = α1 + 2α2 + 2α3 + α4,
β7 = α19 = α1 + 2α2 + 3α3 + α4,
β8 = α21 = α1 + 2α2 + 3α3 + 2α4,
β9 = α22 = α1 + 2α2 + 4α3 + 2α4,
β10 = α23 = α1 + 3α2 + 4α3 + 2α4,
β11 = α24 = 2α1 + 3α2 + 4α3 + 2α4.
All roots of F4, except α14 and α18, are differences of two distinct roots of this sequence (or are members of it). Besides, β1 is a simple root, β11 is the maximal root of the system, and every root of the sequence is obtained from the previous one by adding some simple root.
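The root-system facts used throughout this paper are easy to verify mechanically. The sketch below (standard coordinates for F4; an illustration, not part of the paper's argument) enumerates the 48 roots, checks that the Bourbaki simple roots reproduce the maximal root 2α1 + 3α2 + 4α3 + 2α4 = β11, and confirms the Weyl-group transitivity on roots of each length (two orbits of 24) used earlier:

```python
from fractions import Fraction as F
from itertools import product

half = F(1, 2)
roots = set()
for i, s in product(range(4), (1, -1)):          # +-e_i (short roots)
    v = [F(0)] * 4; v[i] = F(s); roots.add(tuple(v))
for i in range(4):                               # +-e_i +- e_j, i < j (long)
    for j in range(i + 1, 4):
        for si, sj in product((1, -1), repeat=2):
            v = [F(0)] * 4; v[i], v[j] = F(si), F(sj); roots.add(tuple(v))
for signs in product((1, -1), repeat=4):         # (+-e1+-e2+-e3+-e4)/2 (short)
    roots.add(tuple(half * s for s in signs))
assert len(roots) == 48                          # F4 has 48 roots

# Bourbaki simple roots of F4:
a1 = (F(0), F(1), F(-1), F(0))
a2 = (F(0), F(0), F(1), F(-1))
a3 = (F(0), F(0), F(0), F(1))
a4 = (half, -half, -half, -half)
simples = (a1, a2, a3, a4)

def comb(c1, c2, c3, c4):
    return tuple(c1*x + c2*y + c3*z + c4*w for x, y, z, w in zip(a1, a2, a3, a4))

maximal = comb(2, 3, 4, 2)                       # 2a1 + 3a2 + 4a3 + 2a4
assert maximal == (F(1), F(1), F(0), F(0)) and maximal in roots

def dot(u, v): return sum(x * y for x, y in zip(u, v))

def reflect(v, a):                               # simple reflection s_a(v)
    c = F(2) * dot(v, a) / dot(a, a)
    return tuple(x - c * y for x, y in zip(v, a))

def orbit(start):                                # Weyl orbit via simple reflections
    seen, todo = {start}, [start]
    while todo:
        v = todo.pop()
        for a in simples:
            w = reflect(v, a)
            if w not in seen:
                seen.add(w); todo.append(w)
    return seen

long_orbit, short_orbit = orbit(maximal), orbit(a3)
assert len(long_orbit) == len(short_orbit) == 24
assert long_orbit | short_orbit == roots and not (long_orbit & short_orbit)
```

The specific numbering α1, ..., α24 of the positive roots used in the paper is not reproduced here, so the statement about α14 and α18 is not checked by this sketch.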



Consider in the matrix X some position (λ, μ). To find the element at this position we need to describe all sequences of roots γ1, ..., γp satisfying the following properties:

1. All the partial sums λ + γ1, λ + γ1 + γ2, ..., λ + γ1 + ... + γi, ... are weights, and λ + γ1 + ... + γp = μ.
2. In the initial numerated sequence α1, ..., α24, -α1, ..., -α24 the roots γ1, ..., γp are placed strictly from right to left.

Finally, at the position (λ, μ) of the matrix X there is the sum, over all sequences with these two properties, of the products ±c_{γ1} · c_{γ2} ... c_{γp} (each factor c_{γi} being the parameter t or u of the corresponding unipotent factor of X), multiplied by d_λ, the eigenvalue of the semisimple factor ε t1(s1) ... t4(s4) on the weight space corresponding to λ. If λ = μ, we must add 1 to the sum. We will find the elements ε, s1, ..., s4, t1, ..., t24, u1, ..., u24 step by step.

Firstly we consider in the matrix X the position (-β11, -β11). We cannot add a negative root to -β11 and obtain a root, and if in a sequence γ1, ..., γp the first root is positive, then all the other roots must be positive. Thus, this position contains the element 1 · d_{-β11}, so we know d_{-β11}. By the same arguments, if we consider the position (-β11, -β10) = (-α24, -α23), the only suitable sequence is γ1 = β11 - β10 = α1. Since this position contains d_{-β11} t1 and we already know d_{-β11}, we can find t1. Considering the positions (-β10, -β10) and (-β10, -β11), we see by similar reasons that they contain d_{-β10}(1 ± u1 t1) and ±d_{-β10} u1, so we find d_{-β10} and u1.

Now we come to the second step. As written above, in the matrix X at the position (-β10, -β9) there is d_{-β10}(±t2 ± u1 t5); at the position (-β9, -β10) there is d_{-β9}(±u2 ± u5 t1); at the position (-β11, -β9) there is ±d_{-β11} t5 (the second summand is absent, since x1 stands earlier than x2); at the position (-β9, -β11) there is d_{-β9}(±u5 ± u2 u1); finally, at the position (-β9, -β9) there is d_{-β9}(1 ± u5 t5 ± u2 t2). From the position (-β11, -β9) we find t5, then from the position (-β10, -β9) we find t2, and from the other three positions together we find u2, u5, d_{-β9}. Therefore, now we know t1, t2, t5, u1, u2, u5, d_{-β9}, d_{-β10}, d_{-β11}.

On the third step we consider the positions (-β9, -β8) with d_{-β9}(±t3 ± u2 t6 ± u5 t8), (-β8, -β9) with d_{-β8}(±u3 ± t2 u6 ± t5 u8), (-β10, -β8) with d_{-β10}(±t6 ± u1 t8), (-β8, -β10) with d_{-β8}(±u6 ± u2 u3 ± t1 u8), (-β11, -β8) with d_{-β11}(±t8 ± t5 t3), (-β8, -β11) with d_{-β8}(±u8 ± u3 u2 u1 ± u6 u1), and (-β8, -β8) with d_{-β8}(1 ± u3 t3 ± u5 t5 ± u8 t8 ± u8 t5 t3). From these seven equations with seven unknowns (all of them from the radical) we can find all the variables t3, u3, t6, u6, t8, u8 and d_{-β8}.

Similarly, on the next step we consider the positions (-β8, -β7), (-β7, -β8), (-β9, -β7), (-β7, -β9), (-β10, -β7), (-β7, -β10), (-β11, -β7), (-β7, -β11), and (-β7, -β7), and find t4, u4, t7, u7, t9, u9, t11, u11, d_{-β7}. Now we know d_{-β7}, d_{-β8}, d_{-β9}, d_{-β10} and d_{-β11}, i.e., s4/s3, ε/s4, s2/s3, s1/s2 and ε/s1. So we know all si, i = 1, ..., 4, and ε, and, consequently, all d_{-βi}.

Suppose now that we know all the elements ti, uj for all indices corresponding to the roots of the form βp - βq, 11 ≥ p, q > s. Consider the positions (-β11, -βs), (-βs, -β11), (-β10, -βs), (-βs, -β10), ..., (-β_{s+1}, -βs), (-βs, -β_{s+1}) in the matrix X. Clear that at every position (-βi, -βs), 11 ≥ i > s, there is a sum of tp, where p is the number of the root βi - βs (if it is a root), and of products of different elements ta, ub in which only one factor is not yet known; all the other factors are known and lie in the radical; and all this sum is multiplied by the known element d_{-βi}. The same situation occurs at the positions (-βs, -βi), 11 ≥ i > s, but with up in place of tp. Therefore, we have exactly as many (non-homogeneous) linear equations as there are roots of the form ±(βi - βs), with the same number of variables; in every equation exactly one variable has an invertible coefficient, the other coefficients are from the radical, and for distinct equations these variables are different. Clear that such a system has a solution, and it is unique. Consequently, we have made the induction step, and now we know the elements ti, uj for all indices corresponding to the roots βp - βq, 11 ≥ p, q ≥ s.

On the last step we know the elements ti, uj for all indices corresponding to the roots βp - βq, 11 ≥ p, q ≥ 1. Consider now in X the positions (-β11, h_{β11}), (h_{β11}, -β11), (-β10, h_{β10}), (h_{β10}, -β10), ..., (-β1, h_{β1}), (h_{β1}, -β1). Similarly to the previous arguments we can find all the t and u corresponding to the roots ±β1, ..., ±β11.

We have not yet found the coefficients for two pairs of roots: ±α14 and ±α18. Note that α14 + α18 = α24. Consider in X the positions (-α24, -α14), (-α14, -α24), (-α24, -α18), (-α18, -α24). At these positions there are sums of t18 (respectively, u18, t14, u14) and of products of elements ti, uj corresponding to roots of smaller heights. Since for all heights smaller than the height of α14 we already know the t, u, we can directly find the remaining coefficients. Therefore, the lemma is completely proved. □
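The bookkeeping of Lemma 4 -- first reading the torus eigenvalue off a corner entry, then solving for the unipotent parameters one position at a time -- can be illustrated in a rank-1 toy case, SL2 over Z/9Z (a hypothetical miniature, with h, x_+, x_- standing in for the torus and unipotent factors):

```python
# Toy Lemma 4: for X = h(s) x_+(t) x_-(u) over Z/9Z, with s a unit and
# t, u in the radical J = (3), three entries of X recover s, t, u.
n = 9

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % n
                       for j in range(2)) for i in range(2))

def inv(a): return pow(a, -1, n)   # modular inverse (Python 3.8+)

def h(s):  return ((s % n, 0), (0, inv(s)))
def xp(t): return ((1, t % n), (0, 1))
def xm(u): return ((1, 0), (u % n, 1))

s, t, u = 4, 3, 6
X = mul(mul(h(s), xp(t)), xm(u))   # X = [[s + s*t*u, s*t], [u/s, 1/s]]

s_rec = inv(X[1][1])               # the (2,2) entry is 1/s
u_rec = X[1][0] * s_rec % n        # the (2,1) entry is u/s
t_rec = X[0][1] * inv(s_rec) % n   # the (1,2) entry is s*t
assert (s_rec, t_rec, u_rec) == (s, t, u)
```

The lemma performs the same kind of triangular elimination, but with 53 positions and with the radical membership of the t, u guaranteeing that each new equation has exactly one unknown with an invertible coefficient.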

Now return to our main proof. Recall that we work with a matrix C, equivalent to the unit matrix modulo the radical and normalizing the Chevalley group G(R). For every root α we have

C x_α(1) C^{-1} = x_α(1) · g_α,    g_α ∈ G(R, J).    (1)

Every g ∈ G(R, J) can be decomposed into a product

t1(1 + a1) ... t4(1 + a4) x1(b1) ... x24(b24) x_{-1}(c1) ... x_{-24}(c24),    (2)

where a1, ..., a4, b1, ..., b24, c1, ..., c24 ∈ J (see, for example, [2]). Let C = E + X = E + (x_{i,j}). Then for every root α we can write the matrix equation (1) with variables x_{i,j}, a1, ..., a4, b1, ..., b24, c1, ..., c24, each of them from the radical.

Let us change these equations. We consider the matrix C and "imagine" that it is a matrix from Lemma 4 (i.e., that it lies in G(R)). Then by its concrete 53 positions we can "define" all the coefficients ε, s1, ..., s4, t1, ..., t24, u1, ..., u24 in the decomposition of this matrix from Lemma 4. As a result we obtain a matrix D ∈ G(R), every matrix coefficient of which is some (known) function of the coefficients of C. Change now Eq. (1) to the equations

D^{-1} C x_α(1) C^{-1} D = x_α(1) · g_α,    g_α ∈ G(R, J).    (3)

We again have matrix equations, but with variables y_{i,j}, a'1, ..., a'4, b'1, ..., b'24, c'1, ..., c'24, each of them still from the radical, and every y_{p,q} is some known function of (all) the x_{i,j}. The matrix D^{-1} C will be denoted by C'. We want to show that a solution exists only when all the primed variables are equal to zero. Some x_{i,j} will then also be equal to zero, and the others cancel in the equations. Since the equations are very complicated, we consider the linearized system. It is sufficient to show that all the variables of the linearized system (let it be a system of q variables) form a system of q linear equations whose determinant is invertible in R. In other words, from the matrix equalities we will show that all the variables occurring in them are equal to zero. Clear that, linearizing the product Y^{-1}(E + X), we obtain some matrix E + (z_{i,j}) with all the positions described in Lemma 4 equal to zero. To find the final form of the linearized system, we write it as follows:

(E + Z) x_α(1) = x_α(1) (E + a1 T1 + a1^2 T1^2/2 + ...) ... (E + a4 T4 + a4^2 T4^2/2 + ...)
· (E + b1 X1 + b1^2 X1^2/2 + ...) ... (E + c24 X_{-24} + c24^2 X_{-24}^2/2 + ...) (E + Z),

where X_α is the corresponding Lie algebra element in the adjoint representation, and the matrix Ti is diagonal, having on its diagonal ⟨αi, αk⟩ at the place corresponding to vk and zeros at the places corresponding to the vectors Vj. Then the linearized system has the form

Z x_α(1) - x_α(1)(Z + a1 T1 + ... + a4 T4 + b1 X1 + ... + c24 X_{-24}) = 0.



This equation can be written for every α (naturally, with other aj, bj, cj), and it suffices to write it only for the generating roots α1, ..., α4, -α1, ..., -α4:

Z x_{α1}(1) - x_{α1}(1)(Z + a_{1,1} T1 + ... + a_{4,1} T4 + b_{1,1} X1 + b_{2,1} X2 + ... + b_{24,1} X24 + c_{1,1} X_{-1} + ... + c_{24,1} X_{-24}) = 0;
...
Z x_{α4}(1) - x_{α4}(1)(Z + a_{1,4} T1 + ... + a_{4,4} T4 + b_{1,4} X1 + ... + b_{24,4} X24 + c_{1,4} X_{-1} + ... + c_{24,4} X_{-24}) = 0;
...
Z x_{-α1}(1) - x_{-α1}(1)(Z + a_{1,5} T1 + ... + a_{4,5} T4 + b_{1,5} X1 + ... + b_{24,5} X24 + c_{1,5} X_{-1} + ... + c_{24,5} X_{-24}) = 0;
...
Z x_{-α4}(1) - x_{-α4}(1)(Z + a_{1,8} T1 + ... + a_{4,8} T4 + b_{1,8} X1 + ... + b_{24,8} X24 + c_{1,8} X_{-1} + ... + c_{24,8} X_{-24}) = 0.

The matrix T1 is

diag[2, -2, -1, 1, 0, 0, 0, 0, 1, -1, -1, 1, 0, 0, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 0, 0, 1, -1, -1, 1, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, -1, 1, 1, -1, 0, 0, 0, 0];

T2 is w1 w2 T1 w2^{-1} w1^{-1}; T3 is

diag[0, 0, -2, 2, 2, -2, -1, 1, -2, 2, 0, 0, 1, -1, 0, 0, -1, 1, 2, -2, -1, 1, 2, -2, 1, -1, 0, 0, 1, -1, 0, 0, -1, 1, 0, 0, 1, -1, -2, 2, 0, 0, 2, -2, 0, 0, 0, 0, 0, 0, 0, 0];
The matrices X1, X3 were written above. Besides them, X_{-1} = w1 X1 w1^{-1}, X_{-3} = w3 X3 w3^{-1}. The other matrices X_α are obtained as follows: X_{±5} = w2 X_{±1} w2^{-1}, X_{±2} = w1 X_{±5} w1^{-1}, X_{±10} = w3 X_{±2} w3^{-1}, X_{±12} = w1 X_{±10} w1^{-1}, X_{±14} = w2 X_{±12} w2^{-1}, X_{±16} = w4 X_{±10} w4^{-1}, X_{±18} = w1 X_{±16} w1^{-1}, X_{±20} = w2 X_{±18} w2^{-1}, X_{±22} = w3 X_{±20} w3^{-1}, X_{±23} = w2 X_{±22} w2^{-1}, X_{±24} = w1 X_{±23} w1^{-1}, X_{±7} = w4 X_{±3} w4^{-1}, X_{±4} = w3 X_{±7} w3^{-1}, X_{±6} = w2 X_{±3} w2^{-1}, X_{±8} = w1 X_{±6} w1^{-1}, X_{±9} = w4 X_{±6} w4^{-1}, X_{±11} = w1 X_{±9} w1^{-1}, X_{±13} = w3 X_{±9} w3^{-1}, X_{±15} = w1 X_{±13} w1^{-1}, X_{±17} = w2 X_{±15} w2^{-1}, X_{±19} = w3 X_{±17} w3^{-1}, X_{±21} = w4 X_{±19} w4^{-1}; the matrix T4 is w3 w4 T3 w4^{-1} w3^{-1}.

From Lemma 4 we obtain that the following positions of Z are zeros: (48, 48), (48, 46), (46, 46), (46, 48), (46, 44), (44, 44), (44, 46), (44, 42), (42, 42), (42, 44), (42, 38), (38, 38), (38, 42), (48, 44), (44, 48), (46, 42), (42, 46), (44, 38), (38, 44), (48, 42), (42, 48), (46, 38), (38, 46), (24, 2), (2, 24), (48, 38), (38, 48), (24, 49), (49, 24), (46, 34), (34, 46), (48, 36), (36, 48), (48, 34), (34, 48), (44, 24), (24, 44), (48, 30), (30, 48), (48, 28), (28, 48), (38, 51), (51, 38), (48, 24), (24, 48), (48, 16), (16, 48), (48, 10), (10, 48), (48, 2), (2, 48), (48, 49), (49, 48).

Suppose that we have written down the obtained homogeneous linear system of equations. Recall that our aim is to show that all the values z_{i,j}, a_{s,t}, b_{s,t}, c_{s,t} are equal to zero.

Consider the first condition. It implies a_{4,1} = 0 (pos. (42, 42)); a_{1,1} = 0 (pos. (48, 48)); a_{3,1} = 0 (pos. (38, 38)); a_{2,1} = 0 (pos. (39, 39)). Therefore, T1, T2, T3, T4 do not enter this condition. Further, c_{1,1} = 0 (pos. (3, 9)); b_{2,1} = 0 (pos. (3, 51)); c_{2,1} = 0 (pos. (46, 44)); b_{3,1} = 0 (pos. (5, 51)); c_{3,1} = 0 (pos. (6, 51)); b_{4,1} = 0 (pos. (7, 51)); c_{4,1} = 0 (pos. (8, 51)); b_{5,1} = 0 (pos. (44, 48)); c_{5,1} = 0 (pos. (10, 51)); b_{6,1} = 0 (pos. (3, 6)); c_{6,1} = 0 (pos. (46, 42)); b_{7,1} = 0 (pos. (13, 51)); c_{7,1} = 0 (pos. (14, 51)); b_{8,1} = 0 (pos. (42, 48)); c_{8,1} = 0 (pos. (16, 52)); b_{9,1} = 0 (pos. (17, 51)); c_{9,1} = 0 (pos. (46, 38)); b_{10,1} = 0 (pos. (19, 51)); b_{11,1} = 0 (pos. (38, 48)); c_{11,1} = 0 (pos. (22, 51)); c_{12,1} = 0



(pos. (24, 51)); b_{13,1} = 0 (pos. (25, 51)); c_{13,1} = 0 (pos. (46, 34)); b_{14,1} = 0 (pos. (27, 52)); c_{14,1} = 0 (pos. (28, 51)); b_{15,1} = 0 (pos. (34, 48)); c_{15,1} = 0 (pos. (30, 51)); b_{16,1} = 0 (pos. (31, 52)); c_{16,1} = 0 (pos. (46, 28)); b_{17,1} = 0 (pos. (33, 51)); c_{17,1} = 0 (pos. (34, 51)); b_{18,1} = 0 (pos. (20, 44)); c_{18,1} = 0 (pos. (36, 52)); b_{19,1} = 0 (pos. (37, 51)); c_{19,1} = 0 (pos. (38, 51)); b_{20,1} = 0 (pos. (39, 51)); c_{20,1} = 0 (pos. (40, 51)); b_{21,1} = 0 (pos. (41, 52)); c_{21,1} = 0 (pos. (42, 52)); b_{22,1} = 0 (pos. (43, 51)); c_{22,1} = 0 (pos. (44, 51)); b_{23,1} = 0 (pos. (3, 44)); c_{24,1} = 0 (pos. (10, 43)). Consequently, the right side of the condition contains only X12, X24, X_{-10}, X_{-23}; the condition itself is simplified, and many elements of Z are equal to zero. Firstly, these are the elements at the positions (i, j), i = 2, 3, 5, 6, 7, 8, 10, 11, 13, 14, 16, 17, 19, 22, 24, 25, 27, 28, 30, 31, 33, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 48, 50, 51, 52, j = 1, 4, 9, 12, 15, 18, 20, 21, 23, 26, 29, 32, 35, 46, 47, 49 (except z_{6,15} = c_{10,1}, z_{5,12} = b_{12,1}, z_{7,29} = c_{10,1}, z_{8,26} = b_{12,1}, z_{24,49} = -c_{10,1}, z_{28,35} = c_{23,1}, z_{27,32} = b_{24,1}, z_{33,26} = -b_{24,1}, z_{34,29} = -c_{23,1}, z_{37,18} = b_{24,1}, z_{38,21} = c_{23,1}, z_{38,18} = c_{10,1}, z_{39,47} = -c_{10,1}, z_{39,20} = b_{24,1}, z_{40,23} = c_{23,1}, z_{41,12} = -b_{24,1}, z_{42,15} = -c_{23,1}, z_{43,4} = b_{24,1}, z_{44,9} = c_{23,1}, z_{45,49} = -b_{24,1}). When we set these elements equal to zero, we see that b_{12,1} = 0 (pos. (19, 2)), c_{10,1} = 0 (pos. (44, 36)), b_{24,1} = 0 (pos. (45, 2)), c_{23,1} = 0 (pos. (48, 2)), i.e., the condition now reads x1(1) Z = Z x1(1). In a similar way all our conditions finally take the form x_{±αp}(1) Z = Z x_{±αp}(1), p = 1, ..., 4. Since the centralizer of these eight matrices consists of the scalar matrices, and the matrix Z has the zero element z_{52,52}, we obtain Z = 0, which is what we needed. Theorem 2 is proved. □

From Theorems 1 and 2 the main theorem of the paper follows directly:

Theorem 3.
Let G(R) be a Chevalley group with root system F4, where R is a local ring with 1/2. Then every automorphism of G(R) is standard, i.e., it is a composition of ring and inner automorphisms. This composition is unique.

Proof. We need only to prove the uniqueness. Suppose that for some automorphism φ ∈ Aut(G(R)) we have i_{g1} ρ1 = i_{g2} ρ2, where g1, g2 ∈ G(R) and ρ1, ρ2 are ring automorphisms. Then i_{g2^{-1} g1} = ρ2 ρ1^{-1}, i.e., some ring automorphism ρ is inner: i_g = ρ. Since any ring automorphism is identical on all x_α(1), α ∈ Φ, the element g commutes with all x_α(1), α ∈ Φ. So by [3] g belongs to the center of G(R), i.e., i_g is identical. Consequently, i_{g1} = i_{g2} and ρ1 = ρ2. □

Corollary 1. The group Aut G(R) is a semi-direct product of G(R) and Aut R.

Acknowledgments

The author is thankful to N.A. Vavilov, A.A. Klyachko, A.V. Mikhalev for valuable advice, remarks and discussions.

References
[1] E. Abe, Automorphisms of Chevalley groups over commutative rings, Algebra i Analiz 5 (2) (1993) 74-90.
[2] E. Abe, Chevalley groups over local rings, Tohoku Math. J. 21 (3) (1969) 474-494.
[3] E. Abe, J. Hurley, Centers of Chevalley groups over commutative rings, Comm. Algebra 16 (1) (1988) 57-74.
[4] A. Borel, J. Tits, Homomorphismes "abstraits" de groupes algébriques simples, Ann. Math. 73 (1973) 499-571.
[5] E.I. Bunina, Automorphisms of elementary adjoint Chevalley groups of types Al, Dl, El over local rings, Algebra Logic 48 (4) (2009) 250-267, arXiv:math/0702046.
[6] E.I. Bunina, Automorphisms of adjoint Chevalley groups of types B2 and G2 over local rings, J. Math. Sci. 155 (6) (2008) 795-814.
[7] E.I. Bunina, Automorphisms and normalizers of Chevalley groups of types Al, Dl, El over local rings with 1/2, Fundam. Prikl. Mat. 15 (1) (2009), arXiv:0907.5595.
[8] N. Bourbaki, Groupes et Algèbres de Lie, Hermann, 1968.
[9] R.W. Carter, Simple Groups of Lie Type, 2nd ed., Wiley, London, 1989.
[10] R.W. Carter, Yu. Chen, Automorphisms of affine Kac-Moody groups and related Chevalley groups over rings, J. Algebra 155 (1993) 44-94.
[11] Yu. Chen, Isomorphic Chevalley groups over integral domains, Rend. Semin. Mat. Univ. Padova 92 (1994) 231-237.



[12] Yu. Chen, On representations of elementary subgroups of Chevalley groups over algebras, Proc. Amer. Math. Soc. 123 (8) (1995) 2357-2361.
[13] Yu. Chen, Automorphisms of simple Chevalley groups over Q-algebras, Tohoku Math. J. 348 (1995) 81-97.
[14] Yu. Chen, Isomorphisms of adjoint Chevalley groups over integral domains, Trans. Amer. Math. Soc. 348 (2) (1996) 1-19.
[15] Yu. Chen, Isomorphisms of Chevalley groups over algebras, J. Algebra 226 (2000) 719-741.
[16] I.Z. Golubchik, A.V. Mikhalev, Isomorphisms of unitary groups over associative rings, Zap. Nauchn. Sem. LOMI 132 (1983) 97-109 (in Russian).
[17] I.Z. Golubchik, A.V. Mikhalev, Isomorphisms of the general linear group over an associative ring, Vestnik Moskov. Univ. Ser. Mat. 3 (1983) 61-72 (in Russian).
[18] J.F. Humphreys, On the automorphisms of infinite Chevalley groups, Canad. J. Math. 21 (1969) 908-911.
[19] J.E. Humphreys, Introduction to Lie Algebras and Representation Theory, Springer-Verlag, New York, 1978.
[20] Fuan Li, Zunxian Li, Automorphisms of SL3(R), GL3(R), Contemp. Math. 82 (1984) 47-52.
[21] Anton A. Klyachko, Automorphisms and isomorphisms of Chevalley groups and algebras, arXiv:0708.2256v3, 2007.
[22] V.M. Petechuk, Automorphisms of matrix groups over commutative rings, Mat. Sb. 45 (1983) 527-542.
[23] V.M. Petechuk, Automorphisms of groups SLn, GLn over some local rings, Math. Notes 28 (2) (1980) 187-206.
[24] R. Steinberg, Lectures on Chevalley Groups, Yale University, 1967.
[25] R. Steinberg, Automorphisms of finite linear groups, Canad. J. Math. 121 (1960) 606-615.
[26] N.A. Vavilov, E.B. Plotkin, Chevalley groups over commutative rings. I. Elementary calculations, Acta Appl. Math. 45 (1996) 73-115.
[27] W.C. Waterhouse, Introduction to Affine Group Schemes, Springer-Verlag, New York, 1979.
[28] E.I. Zelmanov, Isomorphisms of general linear groups over associative rings, Siberian Math. J. 26 (4) (1985) 49-67 (in Russian).