Maximum Likelihood Estimation and the Bayesian Information Criterion
Donald Richards Penn State University



The Method of Maximum Likelihood
R. A. Fisher (1912), "On an absolute criterion for fitting frequency curves," Messenger of Math. 41, 155-160

Fisher's first mathematical paper, written while a final-year undergraduate in mathematics and mathematical physics at Gonville and Caius College, Cambridge University

Fisher's paper started with a criticism of two methods of curve fitting: the method of least squares and the method of moments

It is not clear what motivated Fisher to study this subject; perhaps it was the influence of his tutor, F. J. M. Stratton, an astronomer



$X$: a random variable

$\theta$: a parameter

$f(x;\theta)$: a statistical model for $X$

$X_1, \ldots, X_n$: a random sample from $X$

We want to construct good estimators for $\theta$

The estimator, obviously, should depend on our choice of $f$



Protheroe et al., "Interpretation of cosmic ray composition - The path length distribution," ApJ, 247 (1981)

$X$: length of paths

Parameter: $\theta > 0$

Model: the exponential distribution,
$$f(x;\theta) = \theta^{-1} \exp(-x/\theta), \quad x > 0$$

Under this model, $E(X) = \theta$

Intuition suggests using $\bar{X}$ to estimate $\theta$

$\bar{X}$ is unbiased and consistent



LF for globular clusters in the Milky Way

$X$: the luminosity of a randomly chosen cluster

van den Bergh's Gaussian model,
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$

$\mu$: mean visual absolute magnitude

$\sigma$: standard deviation of visual absolute magnitude

$\bar{X}$ and $S^2$ are good estimators for $\mu$ and $\sigma^2$, respectively

We seek a method which produces good estimators automatically: No guessing allowed



Choose a globular cluster at random; what is the "chance" that the LF will be exactly -7.1 mag? exactly -7.2 mag?

For any continuous random variable $X$, $P(X = x) = 0$

Suppose $X \sim N(\mu = -6.9, \sigma^2 = 1.21)$, i.e., $X$ has probability density function
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$

then $P(X = -7.1) = 0$

However, ...



$$f(-7.1) = \frac{1}{1.1\sqrt{2\pi}} \exp\left(-\frac{(-7.1+6.9)^2}{2(1.1)^2}\right) = 0.37$$

Interpretation: In one simulation of the random variable $X$, the "likelihood" of observing the number -7.1 is 0.37

$$f(-7.2) = 0.28$$

In one simulation of $X$, the value $x = -7.1$ is 32% more likely to be observed than the value $x = -7.2$

$x = -6.9$ is the value with highest (or maximum) likelihood; the probability density function is maximized at that point

Fisher's brilliant idea: The method of maximum likelihood



Return to a general model $f(x;\theta)$

Random sample: $X_1, \ldots, X_n$

Recall that the $X_i$ are independent random variables

The joint probability density function of the sample is
$$f(x_1;\theta)\, f(x_2;\theta) \cdots f(x_n;\theta)$$

Here the variables are the $x$'s, while $\theta$ is fixed

Fisher's ingenious idea: Reverse the roles of the $x$'s and $\theta$

Regard the $x$'s as fixed and $\theta$ as the variable



The likelihood function is
$$L(\theta; X_1, \ldots, X_n) = f(X_1;\theta)\, f(X_2;\theta) \cdots f(X_n;\theta)$$

Simpler notation: $L(\theta)$

$\hat\theta$, the maximum likelihood estimator of $\theta$, is the value of $\theta$ where $L$ is maximized

$\hat\theta$ is a function of the $X$'s

Note: The MLE is not always unique.



Example: "... cosmic ray composition - The path length distribution ..."

$X$: length of paths

Parameter: $\theta > 0$

Model: the exponential distribution,
$$f(x;\theta) = \theta^{-1} \exp(-x/\theta), \quad x > 0$$

Random sample: $X_1, \ldots, X_n$

Likelihood function:
$$L(\theta) = f(X_1;\theta)\, f(X_2;\theta) \cdots f(X_n;\theta) = \theta^{-n} \exp(-n\bar{X}/\theta)$$



Maximize $L$ using calculus; it is equivalent, and simpler, to maximize $\ln L$:
$$\ln L(\theta) = -n\ln\theta - \frac{n\bar{X}}{\theta}, \qquad \frac{d}{d\theta}\ln L(\theta) = -\frac{n}{\theta} + \frac{n\bar{X}}{\theta^2} = 0 \;\Longrightarrow\; \theta = \bar{X}$$

$\ln L(\theta)$ is maximized at $\theta = \bar{X}$

Conclusion: The MLE of $\theta$ is $\hat\theta = \bar{X}$
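As a sanity check, the same maximization can be done numerically. A minimal Python sketch, assuming NumPy and SciPy are available; the simulated path lengths and the true value $\theta = 2.5$ are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.5, size=200)  # hypothetical path lengths, true theta = 2.5

# For the exponential model, ln L(theta) = -n ln(theta) - n*xbar/theta
def neg_log_lik(theta):
    return len(x) * np.log(theta) + x.sum() / theta

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
print(res.x, x.mean())  # the numerical maximizer agrees with the sample mean
```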



LF for globular clusters: $X \sim N(\mu, \sigma^2)$, with both $\mu, \sigma$ unknown
$$f(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$

A likelihood function of two variables,
$$L(\mu,\sigma^2) = f(X_1;\mu,\sigma^2)\cdots f(X_n;\mu,\sigma^2) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (X_i-\mu)^2\right)$$

Solve for $\mu$ and $\sigma^2$ the simultaneous equations:
$$\frac{\partial}{\partial\mu}\ln L = 0, \qquad \frac{\partial}{\partial(\sigma^2)}\ln L = 0$$

Check that $L$ is concave at the solution (Hessian matrix)



Conclusion: The MLEs are
$$\hat\mu = \bar{X}, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2$$

$\hat\mu$ is unbiased: $E(\hat\mu) = \mu$

$\hat\sigma^2$ is not unbiased: $E(\hat\sigma^2) = \frac{n-1}{n}\sigma^2 \neq \sigma^2$

For this reason, we use $S^2 = \frac{n}{n-1}\hat\sigma^2$ instead of $\hat\sigma^2$
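A small simulation makes the bias visible. A sketch assuming NumPy; the sample size, number of replications, and true parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, mu, sigma2 = 10, 100_000, 0.0, 4.0

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
mle_var = samples.var(axis=1, ddof=0)       # hat(sigma)^2: divisor n
unbiased_var = samples.var(axis=1, ddof=1)  # S^2: divisor n - 1

print(mle_var.mean())       # close to (n-1)/n * sigma^2 = 3.6
print(unbiased_var.mean())  # close to sigma^2 = 4.0
```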



Calculus cannot always be used to find MLEs

Example: "... cosmic ray composition ..."

Parameter: $\theta > 0$

Model:
$$f(x;\theta) = \begin{cases} \exp(-(x-\theta)), & x \ge \theta \\ 0, & x < \theta \end{cases}$$

Random sample: $X_1, \ldots, X_n$
$$L(\theta) = f(X_1;\theta)\cdots f(X_n;\theta) = \begin{cases} \exp\left(-\sum_{i=1}^n (X_i-\theta)\right), & \theta \le \text{all } X_i \\ 0, & \text{otherwise} \end{cases}$$

$L(\theta) = \exp(n\theta - \sum X_i)$ is increasing in $\theta$ up to the smallest observation and zero beyond it, so $\hat\theta = X_{(1)}$, the smallest observation in the sample
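A direct evaluation over a grid locates the maximum at $X_{(1)}$. A minimal sketch assuming NumPy, with hypothetical shifted-exponential data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(size=50) + 1.7  # hypothetical data, true theta = 1.7

def log_lik(theta):
    # ln L(theta) = n*theta - sum(x) when theta <= min(x); impossible otherwise
    return len(x) * theta - x.sum() if theta <= x.min() else -np.inf

grid = np.linspace(0.0, 2.0, 2001)
best = max(grid, key=log_lik)
print(best, x.min())  # the maximizer is the largest grid point not exceeding X_(1)
```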



General Properties of the MLE

$\hat\theta$ may not be unbiased. We often can remove this bias by multiplying $\hat\theta$ by a constant.

For many models, $\hat\theta$ is consistent.

The Invariance Property: For many nice functions $g$, if $\hat\theta$ is the MLE of $\theta$ then $g(\hat\theta)$ is the MLE of $g(\theta)$.

The Asymptotic Property: For large $n$, $\hat\theta$ has an approximate normal distribution with mean $\theta$ and variance $1/B$, where
$$B = nE\left[\left(\frac{\partial}{\partial\theta}\ln f(X;\theta)\right)^2\right]$$

The asymptotic property can be used to construct large-sample confidence intervals for $\theta$
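For the exponential model above, $\frac{\partial}{\partial\theta}\ln f(X;\theta) = -1/\theta + X/\theta^2$, so $B = n/\theta^2$ and the approximate variance of $\hat\theta$ is $\theta^2/n$. A sketch of the resulting large-sample 95% interval, assuming NumPy and hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.5, size=200)  # hypothetical data, true theta = 2.5

theta_hat = x.mean()              # MLE of theta
se = theta_hat / np.sqrt(len(x))  # sqrt(1/B), with theta replaced by its MLE
print(theta_hat - 1.96 * se, theta_hat + 1.96 * se)  # approximate 95% interval
```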



The method of maximum likelihood works well when intuition fails and no obvious estimator can be found.

When an obvious estimator exists, the method of ML often will find it.

The method can be applied to many statistical problems: regression analysis, analysis of variance, discriminant analysis, hypothesis testing, principal components, etc.



The ML Method for Linear Regression Analysis
Scatter plot data: $(x_1, y_1), \ldots, (x_n, y_n)$

Basic assumption: the $x_i$'s are non-random measurements; the $y_i$ are observations on $Y$, a random variable

Statistical model:
$$Y_i = \alpha + \beta x_i + \epsilon_i, \quad i = 1, \ldots, n$$

Errors $\epsilon_1, \ldots, \epsilon_n$: a random sample from $N(0, \sigma^2)$

Parameters: $\alpha, \beta, \sigma^2$

$Y_i \sim N(\alpha + \beta x_i, \sigma^2)$: the $Y_i$'s are independent

The $Y_i$ are not identically distributed; they have differing means



The likelihood function is the joint density of the observed data
$$L(\alpha, \beta, \sigma^2) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(Y_i - \alpha - \beta x_i)^2}{2\sigma^2}\right) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (Y_i - \alpha - \beta x_i)^2\right)$$

Use calculus to maximize $\ln L$ w.r.t. $\alpha, \beta, \sigma^2$

The ML estimators are:
$$\hat\beta = \frac{\sum_{i=1}^n (x_i - \bar{x})(Y_i - \bar{Y})}{\sum_{i=1}^n (x_i - \bar{x})^2}, \qquad \hat\alpha = \bar{Y} - \hat\beta\bar{x}, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (Y_i - \hat\alpha - \hat\beta x_i)^2$$
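Because the estimators have closed form, no iterative optimization is needed. A minimal sketch with hypothetical data, assuming NumPy (the true values $\alpha = 1$, $\beta = 0.5$, $\sigma = 1$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 50)                # fixed, non-random design points
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, 50)  # hypothetical responses

beta_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
alpha_hat = y.mean() - beta_hat * x.mean()
sigma2_hat = ((y - alpha_hat - beta_hat * x) ** 2).mean()  # ML estimator: divisor n

print(alpha_hat, beta_hat, sigma2_hat)
```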



The ML Method for Testing Hypotheses
$X \sim N(\mu, \sigma^2)$

Model:
$$f(x;\mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$

Random sample: $X_1, \ldots, X_n$

We wish to test $H_0: \mu = 3$ vs. $H_a: \mu \neq 3$

The space of all permissible values of the parameters:
$$\Omega = \{(\mu, \sigma) : -\infty < \mu < \infty,\ \sigma > 0\}$$

$H_0$ and $H_a$ represent restrictions on the parameters, so we are led to parameter subspaces
$$\Omega_0 = \{(\mu, \sigma) : \mu = 3,\ \sigma > 0\}, \qquad \Omega_a = \{(\mu, \sigma) : \mu \neq 3,\ \sigma > 0\}$$



Construct the likelihood function
$$L(\mu, \sigma^2) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (X_i - \mu)^2\right)$$

Maximize $L(\mu, \sigma^2)$ over $\Omega_0$ and then over $\Omega_a$

The likelihood ratio test statistic is
$$\lambda = \frac{\max_{\Omega_0} L(\mu, \sigma^2)}{\max_{\Omega_a} L(\mu, \sigma^2)} = \frac{\max_{\sigma>0} L(3, \sigma^2)}{\max_{\mu \neq 3,\ \sigma>0} L(\mu, \sigma^2)}$$

Fact: $0 \le \lambda \le 1$



$L(3, \sigma^2)$ is maximized over $\Omega_0$ at
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i - 3)^2$$

$$\max_{\Omega_0} L(\mu, \sigma^2) = L\left(3, \frac{1}{n}\sum_{i=1}^n (X_i - 3)^2\right) = \left(\frac{n}{2\pi e \sum_{i=1}^n (X_i - 3)^2}\right)^{n/2}$$



$L(\mu, \sigma^2)$ is maximized over $\Omega_a$ at
$$\hat\mu = \bar{X}, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2$$

$$\max_{\Omega_a} L(\mu, \sigma^2) = L\left(\bar{X}, \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2\right) = \left(\frac{n}{2\pi e \sum_{i=1}^n (X_i - \bar{X})^2}\right)^{n/2}$$



The likelihood ratio test statistic:
$$\lambda = \left(\frac{n}{2\pi e \sum_{i=1}^n (X_i - 3)^2}\right)^{n/2} \bigg/ \left(\frac{n}{2\pi e \sum_{i=1}^n (X_i - \bar{X})^2}\right)^{n/2}$$

so that
$$\lambda^{2/n} = \frac{\sum_{i=1}^n (X_i - \bar{X})^2}{\sum_{i=1}^n (X_i - 3)^2}$$

$\lambda$ is close to 1 iff $\bar{X}$ is close to 3

$\lambda$ is close to 0 iff $\bar{X}$ is far from 3

$\lambda$ is equivalent to a t-statistic

In this case, the ML method discovers the obvious test statistic
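The equivalence can be made explicit: since $\sum_{i=1}^n (X_i - 3)^2 = \sum_{i=1}^n (X_i - \bar{X})^2 + n(\bar{X} - 3)^2$, we get $\lambda^{2/n} = 1/\bigl(1 + t^2/(n-1)\bigr)$, where $t = (\bar{X} - 3)/(S/\sqrt{n})$. A quick numerical check, assuming NumPy and arbitrary simulated data:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(3.4, 1.0, 30)  # hypothetical sample; H0 puts the mean at 3

n = len(x)
lam_2n = ((x - x.mean()) ** 2).sum() / ((x - 3.0) ** 2).sum()  # lambda^(2/n)
t = (x.mean() - 3.0) / (x.std(ddof=1) / np.sqrt(n))            # one-sample t

print(lam_2n, 1.0 / (1.0 + t ** 2 / (n - 1)))  # the two values coincide
```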



The Bayesian Information Criterion
Suppose that we have two competing statistical models

We can fit these models using the methods of least squares, moments, maximum likelihood, ...

The choice of model cannot be assessed entirely by these methods

By increasing the number of parameters, we can always reduce the residual sums of squares

Polynomial regression: by increasing the number of terms, we can reduce the residual sum of squares

More complicated models generally will have lower residual errors



BIC: Standard approach to model fitting for large data sets

The BIC penalizes models with larger numbers of free parameters

Competing models: $f_1(x; \theta_1, \ldots, \theta_{m_1})$ and $f_2(x; \phi_1, \ldots, \phi_{m_2})$

Random sample: $X_1, \ldots, X_n$

Likelihood functions: $L_1(\theta_1, \ldots, \theta_{m_1})$ and $L_2(\phi_1, \ldots, \phi_{m_2})$

$$\Delta\mathrm{BIC} = 2\ln\frac{L_1(\theta_1, \ldots, \theta_{m_1})}{L_2(\phi_1, \ldots, \phi_{m_2})} - (m_1 - m_2)\ln n$$

The BIC balances an increase in the likelihood with the number of parameters used to achieve that increase



Calculate all MLEs $\hat\theta_i$ and $\hat\phi_i$ and the estimated BIC:
$$\Delta\mathrm{BIC} = 2\ln\frac{L_1(\hat\theta_1, \ldots, \hat\theta_{m_1})}{L_2(\hat\phi_1, \ldots, \hat\phi_{m_2})} - (m_1 - m_2)\ln n$$

General Rules:

$\Delta\mathrm{BIC} < 2$: Weak evidence that Model 1 is superior to Model 2

$2 \le \Delta\mathrm{BIC} \le 6$: Moderate evidence that Model 1 is superior

$6 < \Delta\mathrm{BIC} \le 10$: Strong evidence that Model 1 is superior

$\Delta\mathrm{BIC} > 10$: Very strong evidence that Model 1 is superior
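In practice the estimated $\Delta\mathrm{BIC}$ is computed from the two maximized log-likelihoods. A minimal sketch; the function name is my own, and the example values anticipate the Milky Way GCLF comparison below:

```python
import math

def delta_bic(loglik1, loglik2, m1, m2, n):
    """Estimated Delta-BIC comparing Model 1 (m1 parameters) with Model 2 (m2)."""
    return 2.0 * (loglik1 - loglik2) - (m1 - m2) * math.log(n)

# Maximized log-likelihoods from the GCLF comparison below:
print(delta_bic(-176.4, -173.0, m1=2, m2=3, n=100))  # approx -2.2
```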



Competing models for GCLF in the Galaxy
1. A Gaussian model (van den Bergh 1985, ApJ, 297):
$$f(x;\mu,\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$

2. A t-distribution model (Secker 1992, AJ 104):
$$g(x;\mu,\sigma,\delta) = \frac{\Gamma\left(\frac{\delta+1}{2}\right)}{\sqrt{\pi\delta}\,\sigma\,\Gamma\left(\frac{\delta}{2}\right)} \left(1 + \frac{(x-\mu)^2}{\delta\sigma^2}\right)^{-\frac{\delta+1}{2}}$$

$-\infty < \mu < \infty, \quad \sigma > 0, \quad \delta > 0$

In each model, $\mu$ is the mean and $\sigma^2$ is the variance

In Model 2, $\delta$ is a shape parameter



We use the data of Secker (1992), Table 1

We assume that the data constitute a random sample

ML calculations suggest that Model 1 is inferior to Model 2

Question: Is the increase in likelihood due to the larger number of parameters?

This question can be studied using the BIC

Test of hypothesis: $H_0$: Gaussian model vs. $H_a$: t-model





Model 1: Write down the likelihood function,
$$L_1(\mu, \sigma) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (X_i - \mu)^2\right)$$

$\hat\mu = \bar{X}$, the ML estimator

$\hat\sigma^2 = S^2$, a multiple of the ML estimator of $\sigma^2$

$$L_1(\bar{X}, S) = (2\pi S^2)^{-n/2} \exp(-(n-1)/2)$$

For the Milky Way data, $\bar{x} = -7.14$ and $s = 1.41$

Secker (1992, p. 1476): $\ln L_1(-7.14, 1.41) = -176.4$



Model 2: Write down the likelihood function
$$L_2(\mu, \sigma, \delta) = \prod_{i=1}^n \frac{\Gamma\left(\frac{\delta+1}{2}\right)}{\sqrt{\pi\delta}\,\sigma\,\Gamma\left(\frac{\delta}{2}\right)} \left(1 + \frac{(X_i-\mu)^2}{\delta\sigma^2}\right)^{-\frac{\delta+1}{2}}$$

Are the MLEs of $\mu, \sigma^2, \delta$ unique? No explicit formulas for them are known; we evaluate them numerically

Substitute the Milky Way data for the $X_i$'s in the formula for $L_2$, and maximize $L_2$ numerically

Secker (1992): $\hat\mu = -7.31$, $\hat\sigma = 1.03$, $\hat\delta = 3.55$

Secker (1992, p. 1476): $\ln L_2(-7.31, 1.03, 3.55) = -173.0$
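Since no closed-form MLEs exist here, one maximizes $\ln L_2$ numerically. A sketch using SciPy: Secker's Table 1 data are not reproduced in these notes, so the `mags` array below is simulated stand-in data, and the log-parametrization keeping $\sigma, \delta > 0$ is my own choice:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(6)
mags = stats.t.rvs(df=3.5, loc=-7.3, scale=1.0, size=100, random_state=rng)  # stand-in data

def neg_log_lik(params):
    mu, log_sigma, log_delta = params  # optimize on the log scale for positivity
    sigma, delta = np.exp(log_sigma), np.exp(log_delta)
    return -stats.t.logpdf(mags, df=delta, loc=mu, scale=sigma).sum()

res = optimize.minimize(neg_log_lik, x0=[mags.mean(), 0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat, delta_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
print(mu_hat, sigma_hat, delta_hat, -res.fun)  # MLEs and maximized log-likelihood
```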



Finally, calculate the estimated BIC, with $m_1 = 2$, $m_2 = 3$, $n = 100$:
$$\Delta\mathrm{BIC} = 2\ln\frac{L_1(-7.14, 1.41)}{L_2(-7.31, 1.03, 3.55)} - (m_1 - m_2)\ln n = 2(-176.4 + 173.0) + \ln 100 = -2.2$$

Apply the General Rules above to assess the strength of the evidence that Model 1 may be superior to Model 2.

Since $\Delta\mathrm{BIC} < 2$, we have weak evidence that the t-distribution model is superior to the Gaussian distribution model.

We fail to reject the null hypothesis that the GCLF follows the Gaussian model over the t-model



Concluding General Remarks on the BIC

The BIC procedure is consistent: If Model 1 is the true model then, as $n \to \infty$, the BIC will determine (with probability 1) that it is.

In typical significance tests, any null hypothesis is rejected if $n$ is sufficiently large. Thus, the factor $\ln n$ gives lower weight to the sample size.

Not all information criteria are consistent; e.g., the AIC is not consistent (Azencott and Dacunha-Castelle, 1986).

The BIC is not a panacea; some authors recommend that it be used in conjunction with other information criteria.



There are also difficulties with the BIC Findley (1991, Ann. Inst. Statist. Math.) studied the performance of the BIC for comparing two models with different numbers of parameters: "Suppose that the log-likelihood-ratio sequence of two models with different numbers of estimated parameters is bounded in probability. Then the BIC will, with asymptotic probability 1, select the model having fewer parameters."
