Numerical Methods and Programming, 2005, Vol. 6 (http://num-meth.srcc.msu.su)
UDC 538.975;621.382.13:535
STABILIZATION OF COMPUTATIONAL ALGORITHMS FOR THE
CHARACTERIZATION OF THIN FILM COATINGS
A. V. Tikhonravov 1 and M. K. Trubetskov 1
Development of stable computational algorithms for the on-line characterization of thin film optical
coatings is a key to the success of their application in many challenging technological areas.
This paper presents a general idea that enables one to develop computationally efficient and stable
characterization algorithms for practically all modern production environments used for optical coating
manufacturing. The efficiency of one such algorithm is demonstrated using a new research
methodology called computational manufacturing.
1. Introduction. Computational methods play an increasingly important role in thin film optics, which is
a research area with many technological applications. Thin film optical coatings are core elements of nearly all
modern opto-electronic devices [1], and the demand for various types of these elements is constantly growing.
Production of high­quality optical coatings for modern challenging applications requires the computational
design of coatings with complicated structures and the successful practical implementation of the theoretical
designs obtained.
Computational design of optical coatings is a complex multiextremal optimization problem. In recent
years, tremendous progress has been made in the development of optimization techniques for solving this
problem [2 -- 5]. As a result, the main attention of researchers has now shifted to the successful manufacturing of
coatings with complicated design structures. Optical coatings are usually produced in special vacuum chambers
by successive deposition of thin films of various dielectric materials [6]. These films form the layers of an
optical coating. The thicknesses of the coating layers are specified by the theoretical design of the coating, and their
reliable control is one of the main problems for all coating deposition techniques. Successful control of layer
thicknesses requires accurate monitoring of their values in time.
There are two major approaches to monitoring the thicknesses of coating layers during their deposition.
The first one is called quartz crystal monitoring. This approach enables measuring the deposition rates of thin
film materials in time. The deposition rate is defined as the rate of increase of the thin film thickness. It is usually
measured in angstroms (Å) per second or in nanometers (nm) per second. Recall that 1 Å = 10^{-10} m, while
1 nm = 10^{-9} m. Typical deposition rates vary from several Å/sec to several nm/sec, depending on the
deposition process. When quartz crystal monitoring is used, layer thicknesses are monitored by integrating the
measured deposition rates, as sketched in the listing below.
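As a purely illustrative aside (not part of any monitoring software described in this paper), the following minimal Python sketch shows how a layer thickness estimate can be accumulated from sampled deposition-rate readings by trapezoidal integration; the sampling interval and the rate values are hypothetical.

    import numpy as np

    # Hypothetical quartz-crystal readings: deposition rate in Angstrom/sec, sampled every 0.5 s.
    dt = 0.5                                              # sampling interval, seconds (assumed)
    rates = np.array([4.1, 3.9, 4.3, 4.0, 3.8, 4.2])      # measured deposition rates, Angstrom/sec

    # The layer thickness is the time integral of the deposition rate
    # (approximated here by the trapezoidal rule).
    thickness = 0.5 * dt * np.sum(rates[1:] + rates[:-1])
    print(f"accumulated thickness: {thickness:.2f} Angstrom")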
Another approach to thickness monitoring is called optical monitoring. In fact, this is not a single approach
but a wide set of different approaches whose common feature is measuring the optical response of a deposited
coating inside a deposition chamber. Usually, the reflectance R or transmittance T of a deposited coating is measured
either at a single monitoring wavelength λ_0 (single wavelength optical monitoring) or at a set of wavelengths in
a broad spectral band (broadband optical monitoring). Thicknesses of deposited layers are determined on the
basis of R or T measurement data each time measurements are performed. Obviously, this determination
is a typical inverse recognition problem.
Currently, there exists a large variety of different optical monitoring schemes and an even larger variety of
different strategies for optical data acquisition. Accordingly, there is a large variety of computational algorithms
for solving the above-mentioned inverse recognition problem. The general term "optical characterization" is
applied to the process of determining optical coating parameters with the aid of optical measurement
data [1]. In the following, we shall often use the term "characterization algorithm" for an algorithm that
determines optical coating parameters, in particular, coating layer thicknesses.
Control of optical coating manufacturing is based on the results provided by the characterization algorithm
used to process on-line monitoring data. Deposition of a coating layer is terminated when, according to these
results, the layer thickness reaches the value prescribed by the theoretical coating design.
1 Research Computing Center of Moscow State University, 119992, Moscow, Russian Federation; e-mail:
tikh@srcc.msu.su, trub@srcc.msu.su
© Research Computing Center of Moscow State University

Errors in characterization results cause errors in the layer thicknesses of a manufactured optical coating. As a result, its properties differ
from the properties predicted by the theoretical coating design.
Errors in layer thicknesses are the main reason for the degradation of the optical properties of a manufactured
optical coating. Some coatings are especially sensitive to these errors. Complicated coatings for modern applications
may have many dozens or sometimes hundreds of layers, and even small relative errors in the thicknesses of
their layers may be fatal for these coatings. In this respect, high accuracy of on-line characterization algorithms
is extremely important.
Quite often, inaccurate on-line characterization results are connected with the practical instability of the
on-line characterization algorithm being used. We employ the term "practical instability" to designate
situations in which errors in layer thicknesses become many times higher than their average value and in which
successive on-line determinations of a growing layer thickness give entirely different or physically absurd results.
The main goal of this paper is to present a simple, yet general idea for avoiding the practical instability of
characterization algorithms and, accordingly, for increasing the accuracy of on­line characterization results.
This idea and its practical implementation are discussed in Section 3 of this paper.
Unfortunately, it is very difficult to provide detailed practical testing of any new computational algorithm
for the on-line characterization of optical coatings. First of all, it is too time-consuming, because a single run
of a deposition chamber for a manufacturing experiment may take many hours. Secondly, it is too expensive,
because such a run may cost many thousands of dollars. In this connection, a new research area that we have
named computational manufacturing [7] acquires special significance. Investigations in this area were started
several years ago [8, 9], but the peculiarities of its research methods and its main research goals were outlined
only recently [7]. Computational manufacturing can be used, in particular, to test characterization algorithms.
Because the basic ideas of computational manufacturing are not yet widely known, we discuss them in Section 2
in connection with the main goal of this paper.
Final conclusions are given in Section 4, where we also discuss new horizons for further applications of the
general idea presented here.
2. Computational manufacturing as a tool for testing on-line characterization algorithms.
Computational manufacturing occupies the same place between theoretical design and practical manufacturing
as computational physics occupies between theoretical and experimental physics. Its practical importance is
connected with the fact that computational manufacturing experiments are much cheaper and faster than real
deposition experiments. This fact is important, in particular, for testing on­line characterization algorithms,
because multiple experiments with different coatings and under different deposition and monitoring conditions
can easily be performed.
[Blocks in Figure 1: Simulation of coating deposition; Simulation of optical monitoring; Control software; On-line characterization algorithm]
Fig. 1. General structure of the software for computational manufacturing: arrows indicate the directions
of information flow
A general scheme of experiments on computational manufacturing is shown in Figure 1. All blocks presented
in this figure are realized as separate software modules. The two left blocks simulate coating deposition in a real
deposition chamber and data acquisition in a real monitoring device. The control software simulates the processes
in the control unit of a real deposition plant; in particular, it supplies signals for starting and terminating the
layer depositions and for acquiring optical monitoring data. It is the intellectual center of the computational
manufacturing software, because it makes decisions about terminating the layer depositions on the basis of an
analysis of the on-line monitoring data. This analysis is performed using the on-line characterization algorithm.
The above-presented scheme of experiments on computational manufacturing has already been implemented
within the OptiLayer software [10] for the case of broadband optical monitoring. Below we use some results
obtained with the aid of this software to demonstrate the main steps of computational manufacturing exper­
iments. We also present a number of results illustrating the situation with the practical instability of on­line
characterization algorithms.

Fig. 2. Theoretical transmittance of the 42-layer hot mirror
For our computational experiments we use a theoretical design obtained by the OptiLayer software for
the so-called hot mirror design problem. The goal of this problem is to design a multilayer optical coating
that transmits almost all light in the visible spectral region from 400 nm to 700 nm and reflects almost all
infrared radiation in the spectral region from 700 nm to 1200 nm. The design thus obtained has 42 layers of two
alternating thin film materials with refractive indices equal to 2.35 and 1.45 (typical values for titanium dioxide
and silicon dioxide, which are widely used as thin film materials). These materials are called the high and low index
materials, respectively. The spectral transmittance of the theoretical design obtained is shown by the solid curve in Figure 2.
Fig. 3. Simulated deposition rate of one of the thin film materials
Computational manufacturing experiments require a simulation of the principal factors causing production
errors. One of the main factors related to the coating deposition process is the instability of the deposition rates of
the thin film materials. The time dependencies of these rates can be represented as random processes with rather small
correlation times. Figure 3 illustrates the simulated deposition rate of the high index material with a mean rate
of 4 Å/sec, rate fluctuations of 1 Å/sec, and a correlation time of 3 sec. The simulated deposition rate of
the low index material has a mean rate of 8 Å/sec, rate fluctuations of 2 Å/sec, and a correlation time of
3 sec. These rates are used in the computational experiments discussed below. The specified parameters are
close to those of modern deposition processes; a minimal sketch of such a rate simulation is given below.
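The paper does not specify the exact form of the random process used in the simulation module. As one possible illustration, the following Python sketch generates a stationary Gaussian process with a prescribed mean, standard deviation, and exponential correlation time (an AR(1) discretization of an Ornstein-Uhlenbeck process); the sampling step of 0.5 s is an assumption.

    import numpy as np

    def simulate_rate(mean, sigma, tau, dt, n_steps, rng):
        """Simulate a deposition rate as an AR(1) process with
        exponential correlation time tau (all times in seconds)."""
        phi = np.exp(-dt / tau)                       # one-step correlation coefficient
        noise_std = sigma * np.sqrt(1.0 - phi**2)     # keeps the stationary std equal to sigma
        rate = np.empty(n_steps)
        rate[0] = mean + sigma * rng.standard_normal()
        for i in range(1, n_steps):
            rate[i] = mean + phi * (rate[i - 1] - mean) + noise_std * rng.standard_normal()
        return rate

    rng = np.random.default_rng(0)
    # High index material: mean 4 A/sec, fluctuations 1 A/sec, correlation time 3 sec.
    high_rate = simulate_rate(mean=4.0, sigma=1.0, tau=3.0, dt=0.5, n_steps=2000, rng=rng)
    # Low index material: mean 8 A/sec, fluctuations 2 A/sec, correlation time 3 sec.
    low_rate = simulate_rate(mean=8.0, sigma=2.0, tau=3.0, dt=0.5, n_steps=2000, rng=rng)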
The total thickness of all layers of the 42-layer design is more than 4000 nm. With the above
rate values, the net time required for depositing such a coating is close to 2.5 hours. In practice, the production time
is much longer, because considerable time is spent on preparing the deposition chamber, there are time
intervals between layer depositions, etc. Computational experiments are performed in an internal time scale in
which time runs a hundred times faster than in reality, so that results can be obtained in a few minutes.
Fig. 4. One of the arrays of simulated measurement data acquired during the deposition of the ninth layer of
the hot mirror (solid curve) and theoretical transmittance at the end of this layer deposition (dashed curve)
Modern broadband optical monitoring devices enable measuring the coating transmittance/reflectance at
hundreds of spectral points in a few milliseconds. In the computational experiments presented here, we simulate
optical monitoring of the coating transmittance at 501 spectral points equally distributed in the wavelength region
from 400 nm to 900 nm. There are different types of errors associated with transmittance measurements. In the
examples considered below, we simulate random errors in the measurement data, which means that the measured
transmittance value at the spectral point λ_j differs from the true transmittance value at this point by a random
value δT_j. Random errors in the transmittance data are normally distributed, and their standard deviation is called
the level of random errors. The solid curve in Figure 4 presents one of the arrays of transmittance data acquired
during the computational deposition of the ninth layer of the hot mirror. This curve connects 501 measured
transmittance values. The level of random errors in Figure 4 is equal to 1%.
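For illustration only, the following sketch shows how one array of such noisy measurement data could be simulated; here true_transmittance is a hypothetical function returning the exact transmittance of the current layer structure at the given wavelengths, and is not part of the software described in this paper.

    import numpy as np

    def measure_transmittance(true_transmittance, wavelengths, error_level, rng):
        """Simulate one array of measured transmittance data: exact values plus
        normally distributed errors with standard deviation error_level
        (e.g. 0.01 for a 1 % error level)."""
        t_true = true_transmittance(wavelengths)          # exact values in [0, 1]
        return t_true + error_level * rng.standard_normal(t_true.shape)

    rng = np.random.default_rng(1)
    wavelengths = np.linspace(400.0, 900.0, 501)          # 501 spectral points, nm
    # t_meas = measure_transmittance(true_transmittance, wavelengths, 0.01, rng)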
By {T^meas(t, λ_j)} we denote the array of measured transmittance data acquired by the optical monitoring
device at the time instant t. The control software prescribes how often transmittance measurements
should be performed. In practice, the time intervals between measurement data acquisitions may vary from fractions
of a second to several seconds. The main purpose of the control software is to find the appropriate time instants
for terminating layer depositions. For this purpose, the arrays of measured transmittance data are analyzed
immediately after their acquisition. It is clear that the analysis of measured data should be as fast as possible.
Accordingly, the on-line characterization algorithms used by the control software should work very fast.
All on-line characterization algorithms are based on the comparison of measured transmittance data
with theoretically calculated transmittance data for an optical coating with growing layer thicknesses. By
{d_1^t, ..., d_m^t} we denote the thicknesses of the layers of a theoretical coating design. Here m is the total number of
coating layers. Suppose that k coating layers have already been deposited and the (k+1)th layer is currently
being deposited. The thicknesses of the deposited layers are not equal to the theoretical values d_1^t, ..., d_k^t because of
production errors associated with the previous layer depositions. Denote the actual thicknesses of these layers by
d_1^a, ..., d_k^a. The actual thicknesses of the deposited layers are not known precisely; instead, some estimates
for these thicknesses were obtained by the control software at the previous deposition steps. We denote these
estimated thickness values by d_1^e, ..., d_k^e. Let d be the growing thickness of the (k+1)th layer. An on-line
characterization algorithm should be able to determine this value at any time instant when measurements are
made.
Let us consider the basic scheme of the on-line characterization algorithm that was named the "sequential
algorithm" in one of our previous publications [11]. Denote by T(d_1^e, ..., d_k^e, d, λ) the theoretical transmittance
of the (k+1)-layer system with the fixed thicknesses d_1^e, ..., d_k^e of the first k layers and a variable thickness d of
the (k+1)th layer. This theoretical transmittance also depends on the incident light wavelength λ. Methods
for calculating the theoretical transmittance of any multilayer system are well known, and their description can
be found on the Internet [12].

Let us introduce the discrepancy function

F_t(d) = \sum_{j=1}^{J} \left[ \frac{T(d_1^e, \ldots, d_k^e, d, \lambda_j) - T^{meas}(t, \lambda_j)}{\Delta T_j} \right]^2 ,    (1)

where t is the time instant at which the array of measured transmittance data was acquired, ΔT_j are measurement
data tolerances which account for the measurement data accuracy at various spectral points, and the
summation is performed over all spectral points at which measurements were made (J is the total number of
data in the array of measured transmittance data).
A sequential characterization algorithm is basically a one-dimensional minimization of the
discrepancy function with respect to the unknown thickness of the (k+1)th layer. There can be numerous
versions of the sequential algorithm connected with different minimization approaches or with modifications of the
discrepancy function. For example, some preliminary integration or smoothing of experimental data can be
performed when arrays of measurement data are acquired at short time intervals (fractions of a second).
A minimal sketch of this scheme is given below.
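The following Python sketch illustrates the sequential scheme under stated assumptions: transmittance is a hypothetical function computing the theoretical transmittance T(d_1^e, ..., d_k^e, d, λ_j) of the multilayer system (its implementation, for example via characteristic matrices [12], is not shown), SciPy's bounded scalar minimizer is used as one possible choice, and the search interval for d is a guess. This is not the actual OptiReOpt/OptiLayer code.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def discrepancy(d, d_est, t_meas, wavelengths, tolerances, transmittance):
        """Discrepancy function (1): sum of squared, tolerance-weighted differences
        between theoretical and measured transmittance."""
        t_theor = transmittance(d_est, d, wavelengths)   # hypothetical multilayer model
        return np.sum(((t_theor - t_meas) / tolerances) ** 2)

    def sequential_step(d_est, t_meas, wavelengths, tolerances, transmittance, d_max):
        """One step of the sequential algorithm: one-dimensional minimization of the
        discrepancy function with respect to the growing thickness d."""
        res = minimize_scalar(
            discrepancy,
            bounds=(0.0, d_max),                         # assumed search interval, nm
            args=(d_est, t_meas, wavelengths, tolerances, transmittance),
            method="bounded",
        )
        return res.x                                     # estimated current thickness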
A version of the sequential algorithm is realized in our OptiReOpt software that can be used for the on­line
determination of a growing layer thickness [13]. The same algorithm is used by the computational manufacturing
option of the OptiLayer software. Computational experiments whose results are shown in Figures 5 -- 7 were
performed using this characterization algorithm.
Fig. 5. Comparison of the transmittance of the computationally manufactured coating (dashed curve) with the
transmittance of the theoretical design (solid curve): 0.2% random errors in transmittance data, two-second
time intervals between data measurements
Random errors of modern on-line monitoring devices are usually lower than 1%. Figures 5 and 6 illustrate
some final results of one of the computational experiments with a 0.2% level of random errors in the measured
transmittance data. In these experiments, the time intervals between data measurements were set equal to
two seconds. In Figure 5 the dashed curve presents the transmittance of the computationally manufactured coating,
while the solid curve is the transmittance of the theoretical design (the same curve as in Figure 2). Figure 6
shows the relative errors in the layer thicknesses of the computationally manufactured coating. These errors are
defined as the relative deviations of the actual thickness values d_k^a from the theoretical thickness values d_k^t.
The computational experiment illustrated by Figures 5 and 6 should be considered successful.
Deviations of the coating transmittance from the theoretical transmittance are inevitable in practice, and the
level of deviations observed in Figure 5 is quite acceptable. Other computational experiments with the same
parameters of deposition rates and on-line measurements give qualitatively the same results as those depicted in
Figures 5 and 6.
Obviously, an increase in levels of error factors leads, in general, to a decrease in accuracy of thickness moni­
toring. One should expect the same consequence from increasing the time intervals between data measurements.
Multiple computational experiments confirm this expectation. An increase of production and measurement er­
rors also results in a loss of stability of computational manufacturing, which means that some experiments give
practically acceptable results, while other experiments demonstrate a total failure of manufacturing. This loss
of stability is usually connected with a failure of the characterization algorithm to determine the thicknesses of some
coating layers correctly. We refer to such situations as the practical instability of the on-line characterization
algorithm.

Fig. 6. Relative errors in layer thicknesses of the computationally manufactured coating with the
transmittance shown by the dashed curve in Figure 5
Fig. 7. Relative errors in thicknesses of the deposited hot mirror layers causing a failure of manufacturing:
manufacturing was interrupted after the deposition of 32 layers
Figure 7 illustrates a situation with the practical instability of the on-line characterization procedure. This
figure presents the results of one of the computational experiments with a 0.5% level of random errors in the
measured transmittance data and four-second time intervals between data measurements. The computational
experiment was interrupted after the deposition of 32 hot mirror layers because of excessively high errors in the
thicknesses of several deposited layers. Such errors entirely destroy the spectral performance of the hot mirror.
Multiple computational experiments with 0.5% transmittance errors and four-second time intervals
between data measurements show a failure of manufacturing in nearly one third of the experiments performed.
In all the cases studied, this failure is caused by the instability of the on-line characterization algorithm.
3. Stabilization of the on-line characterization algorithms. The instability of the on-line characterization
algorithms is connected not only with the errors in the measurement data T^meas(t, λ_j) in equation (1) but
also with the inaccuracies in the determined layer thicknesses d_1^e, ..., d_k^e of the previously deposited layers. It
was shown in our previous work [11] that more stable on-line characterization results can be obtained
by using algorithms that enable the redetermination of the thicknesses of previously deposited layers at each new
step of the on-line characterization procedure. In [11], such algorithms were named triangular algorithms.
They require introducing discrepancy functions that utilize not only the last array of measured transmittance
data, as in equation (1), but also all previously acquired measurement data.
Unfortunately, the triangular algorithms are much more time-consuming than the sequential ones. Their
computational speed decreases significantly with a growing number of coating layers and with decreasing
time intervals between data measurements. For these reasons, they are not suitable for the on-line control
of layer thicknesses, especially when complicated coatings with many layers are deposited.
In this paper we present a simple idea that can be used to improve the stability of characterization al­
gorithms in any modern production environment. This idea consists in stabilizing the discrepancy function
minimization by taking into account material deposition rates.
Optical monitoring is sometimes combined with quartz crystal monitoring based on measuring material
deposition rates. In such situations, on-line records of the time dependencies of the deposition rates are available in
parallel with the arrays of optical monitoring data. Some estimates for the deposition rates are also available
when quartz crystal monitoring is not used. Modern deposition processes are usually rather stable,
which means that the mean deposition rates are well reproduced from one deposition run to the next.
Hence, the deposition rates can be estimated from the results of previous deposition experiments,
quite often with an accuracy of a few percent. Such an estimate gives the value rt as a good approximation to
the growing layer thickness, where r is the deposition rate of the current layer and t is the time elapsed from the
beginning of its deposition.
Instead of the discrepancy function (1), we consider the new function

\Phi_t(d) = F_t(d) + \alpha \, \Omega(d - rt),    (2)

where Ω is a stabilizer and α is a control parameter which, as is traditional, will be referred to as a regularization
parameter.
The simplest form of the stabilizer is

\Omega(d - rt) = (d - rt)^2 .    (3)

Other even powers may be used instead of the power 2 in equation (3). If the deposition rate r
varies in time, then rt in equation (3) should be replaced by \int_0^t r(\tau)\, d\tau. The choice of the parameter α in
equation (2) is discussed below.
To improve the stability of characterization algorithms, we propose to minimize the function (2) instead
of the discrepancy function (1). The essence of this idea is schematically illustrated in Figure 8. Due to the
above-discussed inaccuracy of the discrepancy function (1), its minima can be located far from the true layer
thickness value at the time instant t. Adding a stabilizer to the discrepancy function (1) enables one to exclude
the certainly wrong solutions for d that are located far away from the estimate rt provided by the material
deposition rate. A minimal sketch of the stabilized minimization is given below.
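As a sketch of the stabilized scheme, reusing the hypothetical discrepancy function from the previous listing and assuming the quadratic stabilizer (3), the minimization of (2) could look as follows; the regularization parameter alpha is chosen as discussed below, and the argument names are illustrative rather than part of any published interface.

    from scipy.optimize import minimize_scalar

    def stabilized_step(d_est, t_meas, wavelengths, tolerances, transmittance,
                        rate, t_elapsed, alpha, d_max):
        """Minimize the stabilized function (2):
        Phi_t(d) = F_t(d) + alpha * (d - r*t)**2,
        where r*t is the thickness estimate from the deposition rate."""
        d_expected = rate * t_elapsed        # replace by the integral of r(t) if r varies in time

        def phi(d):
            f = discrepancy(d, d_est, t_meas, wavelengths, tolerances, transmittance)
            return f + alpha * (d - d_expected) ** 2

        res = minimize_scalar(phi, bounds=(0.0, d_max), method="bounded")
        return res.x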
Consider one of the possible approaches to choosing a value of the parameter α in equation (2). Suppose
that the (k+1)th coating layer is currently being deposited. The inaccuracy of the discrepancy function (1) is associated
with errors in the transmittance data and with inaccuracies of the estimated thickness values d_1^e, ..., d_k^e. Let us
estimate the inaccuracy of F_t(d) associated with the transmittance errors by the value

\delta^2 = \sum_{j=1}^{J} \frac{\langle \delta T_j \rangle^2}{(\Delta T_j)^2} ,    (4)

where \langle \delta T_j \rangle is the estimated root mean square (rms) level of errors in the transmittance data and ΔT_j are the
same tolerance values as in equation (1).
By μ_k^2 we denote the value of the discrepancy function at the end of the deposition of the kth layer. We use
this value as an estimate of the discrepancy function inaccuracy associated with the inaccuracies of d_1^e, ..., d_k^e.
The total inaccuracy of F_t(d) is estimated by δ^2 + μ_k^2. We will also use this last value as an estimate for the
minimal values of F_t(d) during the deposition of the (k+1)th layer (see Figure 8).

Fig. 8. Schematic illustration of the idea of discrepancy function stabilization: solid curve, the discrepancy
function at the time instant t; dashed curve, the stabilizer at the time instant t; squares, possible
solutions for d without stabilization

Suppose that we can estimate the deposition rate r with an accuracy of m%. This means that the simplest
scheme of monitoring the (k+1)th layer by the deposition time t = d_{k+1}^t / r enables one to obtain the thickness
of this layer with the same accuracy of m%. Let us choose a value of α such that
\alpha \, \Omega\!\left( \frac{m\%}{100\%} \, d_{k+1}^t \right) = \delta^2 + \mu_k^2 .    (5)
Recall that the sum on the right-hand side of equation (5) is an estimate for the minimal values of F_t(d).
Choosing α in accordance with equation (5) provides a fast increase of αΩ when d deviates from rt by more than
Δd = (m%/100%) d_{k+1}^t and thus prevents the existence of minima of Φ_t(d) far away from rt. A sketch of this
choice of α is given below.
The idea we propose was checked by computational experiments with the hot mirror. First, we performed
20 computational manufacturing experiments with 1% transmittance errors, four-second time intervals
between measurements, and without stabilization of the on-line characterization algorithm. More than 50%
of those experiments were unsuccessful because of high production errors connected with the instability of the
on-line characterization algorithm. After that, another series of 20 experiments with the algorithm stabilization
was performed. In these experiments we specified the same level of errors in the transmittance data and the same
time intervals between measurements as before.
The stabilizer was taken in the form of equation (3), and the deposition rates for both thin film materials
were specified with 5% errors relative to the mean rate values set in the software module simulating the
coating deposition (see Section 2). Such a specification models a practical situation, because material deposition
rates are never known precisely. In equation (5) for the regularization parameter α, we set m equal to 20%.
This means that rather large deviations of d from rt were allowed by the stabilizer. In spite of this, all 20
deposition experiments were successful, which confirms the efficiency of the proposed stabilization approach.
4. Conclusions. We believe that the idea presented in this paper has a promising future. It enables
the development of computationally efficient and stable characterization algorithms for practically all modern production
environments, because material deposition rates can usually be estimated with high accuracy. Algorithms
based on this idea can be very flexible. It is possible to apply the idea of stabilization only to those layers that
are not reliably controlled by optical means; such layers can be revealed with the use of computational
manufacturing experiments. For example, numerous computational experiments show that in the case of the
above-considered hot mirror such layers are layers 34, 39, and 40.
Because of the high stability of modern deposition processes, it has become possible to control the deposition
of some layers by measuring their deposition times. This approach is now often applied to the control of very
thin layers or of those layers whose thickness variations produce only insignificant variations of the transmittance and
reflectance. Monitoring of some coating layers by time can be included in the general algorithmic scheme
considered in this paper by specifying appropriate values of the parameter α for these layers.
The idea proposed is the most natural one for combining optical monitoring data and quartz crystal
monitoring data in a single characterization scheme. It can be used not only for the characterization of multilayer
optical coatings but also for the on-line and off-line characterization of coatings with variable refractive index
profiles (also referred to as rugate coatings) when optical monitoring data and quartz crystal monitoring data
are available simultaneously.
References
1. Kaiser N., Pulker H.K. Some fundamentals of optical thin film growth // Optical interference coatings / edited by
N. Kaiser and H.K. Pulker. Berlin: Springer­Verlag, 2003. 59--80.
2. Tikhonravov A.V., Trubetskov M.K., DeBell G. Application of the needle optimization technique to the design of
optical coatings // Appl. Opt. 1996. 35. 5493--5508.
3. Sullivan B.T., Dobrowolski J.A. Implementation of a numerical needle method for thin­film design // Appl. Opt.
1996. 35. 5484--5492.
4. Tikhonravov A.V., Trubetskov M.K., Amotchkina T.V., Kokarev M.A. Key role of the coating total optical thickness
in solving design problems // SPIE Proceedings. 2003. 5250. 312--321.
5. Tikhonravov A.V. Design of optical coatings // Optical interference coatings / edited by N. Kaiser and H.K. Pulker.
Berlin: Springer­Verlag, 2003. 81--104.
6. Macleod H.A. Thin film optical filters. New York: McGraw­Hill, 1986.
7. Tikhonravov A.V., Trubetskov M.K. Computational manufacturing as a bridge between design and production //
Submitted to Appl. Opt. 2005.
8. Tikhonravov A.V., Trubetskov M.K. Automated design and sensitivity analysis of wavelength­division multiplexing
filters // Appl. Opt. 2002. 41. 3176--3182.
9. Tikhonravov A.V., Trubetskov M.K., Thelen A., DeBell G. Thin film telecommunication filters: automated design
and pre­production analysis of WDM filters // Proceedings of 2002 IEEE/LEOS Workshop on Fibre and Optical
Passive Components. Piscataway, NJ: IEEE, 2002. 202--207.
10. Tikhonravov A.V., Trubetskov M.K. OptiLayer thin film software // http://www.optilayer.com
11. Tikhonravov A.V., Trubetskov M.K. On­line characterization and reoptimization of optical coatings // SPIE Pro­
ceedings. 2003. 5250. 406--413.
12. Furman Sh., Tikhonravov A.V. Basics of optics of multilayer systems // http://www.optilayer.com
13. Tikhonravov A.V., Trubetskov M.K. OptiReOpt software // http://www.optilayer.com
Received 14 April 2005