Model Selection
Astro193, March 9



Steps in Hypothesis Testing
1/ Set up two mutually exclusive hypotheses - two models:
   M0 - the null hypothesis, formulated to be rejected
   M1 - the alternative hypothesis, the research hypothesis
2/ Specify a priori the significance level α
3/ Choose a test which:
   - has the required power
   - approximates the conditions
   - provides what is needed to obtain the sampling distribution and the region of rejection, whose area is a fraction α of the total area in the sampling distribution
4/ Run the test: reject M0 if the test yields a value of the statistic whose probability of occurrence under M0 is < α (sketched below)
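A minimal Python sketch of steps 2-4, assuming the test statistic follows a χ² distribution under M0; the observed value and degrees of freedom are made up for illustration.

```python
# Minimal sketch of the rejection rule, assuming the test statistic
# follows a chi-square distribution under M0; t_obs and dof are made up.
from scipy import stats

alpha = 0.05                   # significance level, fixed a priori (step 2)
t_obs = 11.3                   # observed value of the test statistic
dof = 4                        # degrees of freedom of the sampling distribution

p = stats.chi2.sf(t_obs, dof)  # probability of a value this extreme under M0
if p < alpha:
    print(f"p = {p:.4f} < {alpha}: reject M0")
else:
    print(f"p = {p:.4f} >= {alpha}: cannot reject M0")
```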


Classical Test Statistics
· Likelihood Ratio Test
  Ratio of likelihood values:
  $\mathrm{LRT} = \dfrac{L(M_0, \hat\theta_0)}{L(M_1, \hat\theta_1)}$
  Asymptotically, $-2\ln \mathrm{LRT}$ follows a $\chi^2$ distribution with degrees of freedom equal to the difference in the number of free parameters.
· F-test
  For Gaussian data the statistic follows the F distribution (sketched after the validity conditions below).


Tests only valid if
· The models are nested
· The null values of the parameters are not on the boundary of the parameter space
· The asymptotic limit has been reached
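A hedged sketch of the F-test for an added model component. The slide's formula is not reproduced here, so one common form of the statistic is assumed (change in χ² per extra degree of freedom over the reduced χ² of the complex model); the χ² values and degrees of freedom are invented for illustration.

```python
# Sketch of an F-test for an added component, assuming Gaussian data and
# nested models; chi-square values and dof below are invented.
from scipy import stats

chi2_0, nu_0 = 210.0, 196   # simpler model M0: chi^2, degrees of freedom
chi2_1, nu_1 = 198.0, 193   # complex model M1 (3 extra free parameters)

# One common form: change in chi^2 per extra dof over the reduced chi^2 of M1
F = ((chi2_0 - chi2_1) / (nu_0 - nu_1)) / (chi2_1 / nu_1)
p = stats.f.sf(F, nu_0 - nu_1, nu_1)   # tail probability of the F distribution
print(f"F = {F:.2f}, p = {p:.4f}")
```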


Example: Addition of an Emission Line

[Figure: simulated null distribution of the χ² test statistic, comparing the nominal 5% significance level with the actual false-positive rate.]
Need to Calibrate Test Statistics


Monte Carlo Simulations
· Simulations to test for more complex models, e.g. addition of an emission line (see the sketch below)
· Steps:
  · Fit the observed data with both models, M0 and M1
  · Obtain distributions for the parameters
  · Assume the simpler model M0 for simulations
  · Simulate/sample data from the assumed simpler model
  · Fit the simulated data with both the simple and the complex model
  · Calculate the statistic for each fit
  · Build the probability density for the assumed comparison statistic, e.g. the LRT, and calculate the p-value
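A minimal Python sketch of this calibration loop under toy assumptions: M0 is a flat continuum, M1 adds a Gaussian emission line of fixed width, and the errors are Gaussian. All names and numbers are illustrative, not from the slides; for brevity the simulations are drawn from the best-fit M0 parameters rather than from their full distribution.

```python
# Toy Monte Carlo calibration of the LRT (here Delta chi-square); all
# models, names, and numbers are illustrative, not from the slides.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
x = np.linspace(1.0, 10.0, 200)     # energy grid (arbitrary units)
sigma = 0.05                        # assumed Gaussian measurement error

def m0(x, amp):                     # simple model: flat continuum
    return amp * np.ones_like(x)

def m1(x, amp, line, mu):           # complex model: continuum + fixed-width line
    return amp + line * np.exp(-0.5 * ((x - mu) / 0.2) ** 2)

def chi2(y, model):                 # Gaussian chi-square statistic
    return np.sum(((y - model) / sigma) ** 2)

def lrt(y):
    """Delta chi-square between the best-fit M0 and M1."""
    p0, _ = curve_fit(m0, x, y, p0=[1.0])
    p1, _ = curve_fit(m1, x, y, p0=[1.0, 0.1, 5.0], maxfev=5000)
    return chi2(y, m0(x, *p0)) - chi2(y, m1(x, *p1))

# "Observed" data: here itself simulated, with a weak emission line
y_obs = m1(x, 1.0, 0.08, 5.0) + rng.normal(0.0, sigma, x.size)
t_obs = lrt(y_obs)

# Calibrate: simulate many data sets from the best-fit simple model M0
amp0, _ = curve_fit(m0, x, y_obs, p0=[1.0])
t_sim = np.array([lrt(m0(x, *amp0) + rng.normal(0.0, sigma, x.size))
                  for _ in range(500)])

p_value = np.mean(t_sim >= t_obs)   # fraction of null simulations at or above t_obs
print(f"observed LRT = {t_obs:.2f}, p-value = {p_value:.3f}")
```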

Example: visualization of the simulated test-statistic distribution; here the more complex model is accepted (p-value = 1.6%)



Classical Model Selection
· χ² - goodness-of-fit test
· F-test
· Likelihood Ratio Tests
· AIC - Akaike Information Criterion
Given the maximum likelihood $L_{\max}$ for a set of models, the model with the largest value provides the best description of the data. We need to incorporate the number of model parameters; the model with the lowest AIC value is the best model:

$\mathrm{AIC} = 2K - 2\ln L_{\max}$

K - number of model parameters
N - number of data points

$-2\ln L_{\max}$ reduces to $\chi^2$ assuming Normality. For small samples a finite-sample correction is applied:

$\mathrm{AIC_c} = \mathrm{AIC} + \dfrac{2K(K+1)}{N - K - 1}$
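A minimal sketch of the AIC/AICc comparison; the maximum log-likelihoods, parameter counts, and sample size are placeholders.

```python
# AIC and AICc comparison; lnL values, K, and N are placeholders.

def aic(lnL_max, K):
    """Akaike Information Criterion: lower is better."""
    return 2 * K - 2 * lnL_max

def aicc(lnL_max, K, N):
    """AIC with the finite-sample correction."""
    return aic(lnL_max, K) + 2 * K * (K + 1) / (N - K - 1)

N = 100                                  # number of data points
for name, lnL, K in [("M0", -512.4, 2), ("M1", -508.1, 5)]:
    print(f"{name}: AIC = {aic(lnL, K):.1f}, AICc = {aicc(lnL, K, N):.1f}")
```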


Bayesian Model Selection
· Odds Ratio
  M2, M1 - models
  $O_{21} = \dfrac{p(M_2|D)}{p(M_1|D)}$
· Bayes Factors
  $B_{21} = \dfrac{p(D|M_2)}{p(D|M_1)}$
· BIC - Bayesian Information Criterion
  $\mathrm{BIC} = K\ln N - 2\ln L_{\max}$
· DIC - Deviance Information Criterion
  $\mathrm{DIC} = \overline{D(\theta)} + p_D$, where $D(\theta) = -2\ln L(\theta)$ is the deviance and $p_D \approx \frac{1}{2}\mathrm{Var}[D(\theta)]$ measures model complexity
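A sketch of BIC, and of DIC estimated from posterior draws of the deviance D(θ) = -2 ln L(θ); the draws here are fabricated stand-ins for MCMC output.

```python
# BIC from the maximum likelihood; DIC from posterior deviance draws.
# All numbers below are fabricated for illustration.
import numpy as np

def bic(lnL_max, K, N):
    """Bayesian Information Criterion: lower is better."""
    return K * np.log(N) - 2 * lnL_max

rng = np.random.default_rng(0)
D = rng.normal(1020.0, 3.0, 10_000)   # stand-in for D(theta) over posterior draws

p_D = 0.5 * np.var(D)                 # effective number of parameters (complexity)
dic = np.mean(D) + p_D                # DIC = mean deviance + complexity penalty
print(f"BIC = {bic(-508.1, 5, 100):.1f}, p_D = {p_D:.1f}, DIC = {dic:.1f}")
```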


Example

Kelly et al. 2014, ApJ, 788, 33


Bayesian Model Comparison
Bayes' theorem can also be applied to model comparison:
$p(M|D) = p(M)\,\dfrac{p(D|M)}{p(D)}$

p(M) is the prior probability for M; p(D) is an ignorable normalization constant; and p(D|M) is the average, or global, likelihood, also called the evidence:

$p(D|M) = \int d\theta\, p(\theta|M)\, p(D|M,\theta) = \int d\theta\, p(\theta|M)\, L(M,\theta)$

In other words, it is the (normalized) integral of the posterior distribution over all of parameter space. Note that this integral may be computed numerically.
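A sketch of computing the evidence numerically for a one-parameter model by direct integration over a grid; the data, model (a constant with known Gaussian errors), and flat prior are all assumed for illustration.

```python
# Evidence p(D|M) by grid integration for a one-parameter constant model;
# the data, error, and prior here are all assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1
y = rng.normal(1.0, sigma, 50)                 # "observed" data

theta = np.linspace(0.5, 1.5, 2001)            # grid over the one parameter
prior = np.ones_like(theta) / (theta[-1] - theta[0])   # flat, normalized prior

# Gaussian log-likelihood of the constant model mu = theta at each grid point
loglike = np.array([np.sum(-0.5 * ((y - t) / sigma) ** 2
                           - np.log(sigma * np.sqrt(2.0 * np.pi)))
                    for t in theta])

# p(D|M) = integral dtheta p(theta|M) L(M, theta)
evidence = np.trapz(prior * np.exp(loglike), theta)
print(f"p(D|M) = {evidence:.3e}")
```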


Bayesian Model Comparison
To compare two models, a Bayesian computes the odds, or odds ratio:

$O_{21} = \dfrac{p(M_2|D)}{p(M_1|D)} = \dfrac{p(M_2)}{p(M_1)}\,\dfrac{p(D|M_2)}{p(D|M_1)} = \dfrac{p(M_2)}{p(M_1)}\,B_{21}$

where $B_{21}$ is the Bayes factor. When there is no a priori preference for either model, a Bayes factor of one indicates that each model is equally likely to be correct, while $B_{21} \geq 10$ may be considered sufficient to accept the alternative model (although that number should be greater if the alternative model is controversial).
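A toy computation of the odds ratio from two evidence values; the evidences are placeholders, and equal prior model probabilities are assumed.

```python
# Odds ratio from two (placeholder) evidences with equal model priors.
Z1, Z2 = 1.3e19, 1.6e20        # p(D|M1), p(D|M2): made-up evidence values
prior_odds = 1.0               # p(M2)/p(M1): no a priori preference

B21 = Z2 / Z1                  # Bayes factor
O21 = prior_odds * B21         # posterior odds
verdict = "accept M2" if O21 > 10 else "insufficient evidence"
print(f"B21 = {B21:.1f}, O21 = {O21:.1f} -> {verdict}")
```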