[Q] systematic error
What is systematic error? Can it be modeled statistically? Is it random? Is it fixed? Is it a bias? Is it …?
Astronomers sometimes call it measurement error. Statisticians who hear that term will therefore dig into the literature on measurement error. Don’t be fooled by the name: statisticians’ measurement error generally corresponds to astronomers’ statistical error, not systematic error. From a statistics perspective, systematic error could be related to nuisance parameters, profile likelihoods, hierarchical Bayesian modeling, or a fixed bias (known empirically, or estimated by some other source – because it draws on external information, it might be considered from a meta-analysis viewpoint), although astronomers have their own sophisticated ways of treating this error. So far, unfortunately, there is no golden rule for including systematic error in the modeling process. Can you help me get a better sense of the definition of systematic error in statistical language? What would be your answer to the question at the top?
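To make the nuisance-parameter idea above concrete, here is a minimal sketch (all numbers and variable names invented for illustration, not from any real catalog): a systematic offset b is added to a Gaussian likelihood, constrained by a calibration term, and then profiled out.

```python
import numpy as np

# Toy data: n measurements of a constant mu, shifted by an unknown
# systematic offset b.  Calibration tells us (roughly) b ~ N(0, tau^2).
rng = np.random.default_rng(0)
mu_true, b_true, sigma, tau, n = 10.0, 0.5, 1.0, 0.7, 50
y = mu_true + b_true + rng.normal(0.0, sigma, n)

def neg2loglike(mu, b):
    """-2 log L: Gaussian data term plus the calibration term on b."""
    return np.sum((y - mu - b) ** 2) / sigma**2 + b**2 / tau**2

def profile_b(mu):
    """For fixed mu, -2 log L is quadratic in b, so it minimizes analytically."""
    return np.sum(y - mu) / sigma**2 / (n / sigma**2 + 1.0 / tau**2)

# Profile likelihood for mu on a grid: minimize over b at each mu.
grid = np.linspace(8.0, 12.0, 401)
prof = np.array([neg2loglike(m, profile_b(m)) for m in grid])
mu_hat = grid[np.argmin(prof)]

# mu and b are not separately identified by the data alone; the
# calibration term is what pins b down, and the resulting uncertainty
# on mu picks up roughly tau^2 on top of sigma^2 / n.
print(f"profiled estimate of mu: {mu_hat:.2f}")
```

The point of the sketch is that the systematic term does not average away with more data: the data only constrain mu + b, so the width of the profile in mu inflates by the calibration uncertainty tau.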
vlk:
Hyunsook, I am puzzled that you are asking this question. Is this not already answered in your HEAD2008 poster? Or in the forthcoming SPIE2008 paper?
btw, I don’t think it is correct to say that “Astronomers sometimes call it measurement error” — we don’t do that except by mistake. Measurement error invariably refers to the statistical uncertainty that attaches to the data. Systematic errors are attached to the calibration.
06-21-2008, 8:30 am
hlee:
I probably read too many astro-ph preprints. The phrase “systematic errors” occurred too often, in various places and not just in X-ray astronomy, which confused me. The only errors I knew before reading astro-ph were the so-called statistical errors, which have a stochastic nature and can be modeled. To distinguish non-statistical errors, I think astronomers use the term systematic error or measurement error. My understanding of measurement errors is the additional columns adjacent to the measurement columns in catalogs, which often come from calibration, not from repeated measures nor from Bayesian modeling.
06-24-2008, 6:56 pm
vlk:
“additional columns adjacent to measurement columns in catalogs which often come from calibration, not from repeated measures nor from Bayesian modeling”
These numbers are invariably derived from data, and therefore represent some kind of model parameter. Errors on the model parameters are most definitely affected by both statistical (aka measurement) and systematic errors. People may sometimes misuse the term “measurement error” to include the errors that describe such parameter uncertainties, but by and large, I think astronomers are quite clear on what is measurement error: it is the uncertainty in the measurement, i.e., a measure of how well you know the data, and thus is entirely statistical in nature. A proper understanding of the data will of course require a knowledge of the systematic effects that may affect the model that is being used to match to the data.
It may be that all the confusion in terminology is just a Bayesian vs Frequentist thing. When in doubt, think Bayesian!
07-01-2008, 2:05 pm
hlee:
Those errors next to the measurement columns should be treated with hyperparameters (Bayesian) or nuisance parameters (frequentist), not as the sigma of the normal distribution presumed to explain statistical errors – unless those measurement errors in the catalog coincide with the statistical uncertainty, such as the stochastic deviation of observations with respect to the model. I checked books and papers on measurement errors and asked some friends who specialize in measurement error models, but I was not able to get any example similar to the measurement error columns in astronomical catalogs. Both systematic error and measurement error are very cross-cultural things, but statistically they are not well justified.
Now I am curious: how does a Bayesian handle heteroscedasticity? We know measurement errors are not homogeneous.
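One minimal sketch of an answer (all numbers invented; this assumes the catalog error columns can be read as known per-point Gaussian standard deviations): give each datum its own variance in the likelihood. With a flat prior on a common mean mu and known sigma_i, the posterior is Gaussian with the inverse-variance-weighted mean, so heteroscedasticity poses no special problem.

```python
import numpy as np

# Toy catalog: each measurement y_i comes with its own reported error
# sigma_i (the "error column"), i.e. heteroscedastic measurement errors.
rng = np.random.default_rng(1)
sigma = rng.uniform(0.5, 3.0, size=20)   # per-point errors, all different
mu_true = 5.0
y = rng.normal(mu_true, sigma)           # y_i ~ N(mu, sigma_i^2)

# Flat prior on mu, known sigma_i  =>  Gaussian posterior for mu with
# inverse-variance weights: each datum simply carries its own variance.
w = 1.0 / sigma**2
mu_post = np.sum(w * y) / np.sum(w)      # posterior mean
sd_post = np.sqrt(1.0 / np.sum(w))       # posterior standard deviation
print(f"posterior: mu = {mu_post:.2f} +/- {sd_post:.2f}")
```

If the sigma_i themselves were uncertain, one would instead put a hyperprior on them (the hierarchical route mentioned earlier in the thread), but that is beyond this sketch.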
07-02-2008, 2:10 pm
hlee:
There is a page about systematic error in Wikipedia! It places it closer to bias than to uncertainty – a bias to be calibrated so that it can be removed. Someone has to add more.
07-02-2008, 8:19 pm