Original document: http://www.stsci.edu/stsci/meetings/irw/proceedings/stetsonp.dir/section3_3.html
Hanisch provided me with the ``right'' answers for the stars in the simulated cluster, so it's easy to determine the error of each magnitude determination: find the star in the truth file that falls nearest to the position of each star in my photometric output, and compute the difference between their magnitudes. The question of how to encapsulate these magnitude differences in an overall figure of merit for the reduction procedure deserves some thought.
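The matching step described above can be sketched in a few lines; the arrays here are hypothetical stand-ins for the truth file and the photometric output, with illustrative values:

```python
import numpy as np

# Hypothetical arrays standing in for the truth file and the photometric
# output: each row is (x, y, magnitude).  Values are illustrative only.
truth = np.array([[10.0, 12.0, 15.20],
                  [40.5, 33.1, 17.80],
                  [70.2, 55.9, 16.40]])
measured = np.array([[10.3, 11.8, 15.25],
                     [40.4, 33.0, 17.70]])

# For each measured star, find the truth star nearest in (x, y)
# and compute the difference between their magnitudes.
dx = measured[:, None, 0] - truth[None, :, 0]
dy = measured[:, None, 1] - truth[None, :, 1]
nearest = np.argmin(dx**2 + dy**2, axis=1)
dmag = measured[:, 2] - truth[nearest, 2]
print(dmag)  # one magnitude residual per measured star
```

A real reduction would of course match thousands of stars and guard against matches beyond some maximum separation, but the principle is the same.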
The root-mean-square residual is the theoretician's dream for parameterizing the width of a probability distribution. It can be shown from first principles that, subject to a few assumptions (such as that the total errors consist of an infinite number of infinitesimal contributions), the error distribution is driven in the limit to the Gaussian distribution characterized by two parameters: the mean (which I set identically to zero by forcing my magnitude zero-point to agree with that in the simulations), and the standard deviation. The root-mean-square deviation can be shown from first principles to be the most ``efficient'' estimator of the standard deviation.
That doesn't necessarily mean that the root-mean-square residual is the best figure of merit for our present purposes. In real life, photometric errors are not always made up of an infinite number of infinitesimal contributions: detector blemishes, cosmic rays, satellite trails, and any number of other problems can represent single, discrete, macroscopic error sources. Every observational astronomer knows that three-sigma residuals occur far more often than the Gaussian formula predicts. Even in the simulated images, two stars in adjacent pixels will probably be perceived by the software as a single, brighter star lying between the pixels. Then, when the output photometry is compared with the ``truth,'' the measured magnitude will be compared with one of the two input magnitudes, and a large photometric error will be inferred. Is this a fault of the photometric algorithm? Or of the subsequent cross-identification procedure? Or is it simply a fact of life that sometimes astronomers have to be satisfied with composite photometry for unresolved doubles? Since these occasional major mistakes are squared before they go into the root-mean-square estimator of the standard deviation, they overwhelm the majority of smaller residuals. In a case like the present one, all the root-mean-square residual will tell you is whether in this particular data set your largest screw-up is a six-sigma blunder or a seven-sigma howler - it won't tell you anything about the bulk of the data.
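The point about a single macroscopic blunder overwhelming the root-mean-square can be demonstrated numerically; this is my own illustrative sketch, not part of the original analysis:

```python
import numpy as np

# 999 residuals with a true scatter of 0.02 mag, plus one 1-mag blunder
# of the "unresolved double" kind described above.
rng = np.random.default_rng(42)
resid = rng.normal(0.0, 0.02, 999)
resid = np.append(resid, 1.0)  # a single discrete, macroscopic error

# The blunder is squared before entering the r.m.s., so it dominates;
# a percentile-based width still reflects the 0.02 mag core.
rms = np.sqrt(np.mean(resid**2))
core = np.percentile(resid, 69) - np.percentile(resid, 31)
print(rms, core)
```

With these numbers the r.m.s. comes out roughly twice the true scatter, while the 31%-69% spread is essentially unchanged by the blunder.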
Examining the formula for the Gaussian distribution, one can come up with several other ways of estimating the standard deviation:
For a Gaussian, for example, the mean absolute residual equals sqrt(2/pi) x sigma = 0.7979 sigma, whence sigma = 1.2533 (mean absolute residual).
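For a Gaussian sample, each such statistic recovers the same sigma. A quick numerical check (my own sketch; the scale factor 1.2533 is from the text, while 1.4826 for the median absolute residual is another standard Gaussian scale factor given here for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
resid = rng.normal(0.0, 0.10, 100_000)  # Gaussian residuals, sigma = 0.10

# Each of these estimators should recover sigma for a pure Gaussian:
s_rms = np.sqrt(np.mean(resid**2))            # root-mean-square
s_abs = 1.2533 * np.mean(np.abs(resid))       # from the mean absolute residual
s_med = 1.4826 * np.median(np.abs(resid))     # from the median absolute residual
s_pct = np.percentile(resid, 69) - np.percentile(resid, 31)  # percentile spread
print(s_rms, s_abs, s_med, s_pct)
```

All four agree to within sampling noise on idealized Gaussian data; they differ only in how they respond to non-Gaussian tails.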
Obviously, there exists an infinity of such statistics, each one capable of providing an estimate of the sigma of exactly the same Gaussian distribution that the root-mean-square deviation is intended to measure. These alternatives are less efficient than the root-mean-square when operating on the theoretician's idealized continuous distribution, but they may be preferable when it comes to dealing with real data. Often, when an astronomer is trying to determine the location of the principal sequences of some star cluster in the color-magnitude diagram, the goal is to obtain the sharpest possible sequence for the bulk of the stars, regardless of the few oddballs that may be scattered by photometric blunders to remote regions of the CMD. The five alternative estimators that I have listed above, beginning with the r.m.s., proceed in order from those that are most influenced by the far tail of the error distribution to those that measure primarily the width of the core of that distribution. Lest I later be accused of fraud, let me say right now that for the remainder of this paper I use the last estimator: the difference between the 31st and 69th weighted percentiles.
What do I mean by a weighted percentile? Imagine that for each observation I take a stick of wood and cut it to a length proportional to the weight of that observation (weight = the inverse square of the standard error as estimated from the readout noise, the photon noise, and the goodness of the profile fit). On each stick I write the value of that observation. Then I sort the sticks in increasing order of the values written on them and arrange them into a continuous bar. Finally, I point at positions 31% and 69% of the way along the total length of the bar and read off the two observational values that I'm pointing at. The difference between them is my estimate of sigma.
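The stick procedure translates directly into code; the helper below is my own illustrative implementation of the idea, with made-up residuals and standard errors:

```python
import numpy as np

def weighted_percentile(values, weights, frac):
    """Value at the point `frac` of the way along the total weight:
    sort by value, stack the weights end to end like sticks, and read
    off where the cumulative weight first reaches frac * total."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, frac * cum[-1])]

# Hypothetical residuals with weights = 1 / (standard error)**2;
# the last point is a low-weight blunder.
resid = np.array([-0.05, -0.02, 0.00, 0.01, 0.03, 0.40])
stderr = np.array([0.02, 0.01, 0.01, 0.01, 0.02, 0.10])
w = 1.0 / stderr**2

spread = (weighted_percentile(resid, w, 0.69)
          - weighted_percentile(resid, w, 0.31))
print(spread)  # width estimate from the 31%-69% weighted spread
```

Note that the 0.40 mag blunder, carrying little weight, has no effect at all on the result, which is exactly the behavior argued for above.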
This figure of merit is obviously dominated by the highest-weight observations, i.e., the brightest stars. I think that is legitimate. The purpose of this exercise is to test the quality of the photometric software. The errors of the faintest stars are dominated by their Poisson and read noise, not by any failures of the algorithm. If I were to generate an unweighted estimate of sigma from the sample as a whole, I could practically make my results come out as good as I want: all I'd have to do is manage to find fewer and fewer of the faint stars, and the sigma generated from the stars that I did find would get better and better.
The figure of merit used here is reasonable, in that it tests the quality of the photometric algorithm by seeing what it can do with the best data rather than measuring the amount of synthetic noise added to the many faint stars; and it is impartial, in that it is difficult for me to produce a better or poorer figure of merit by changing which stars I want to consider ``recovered'' as opposed to ``lost.'' In short, it measures how well the algorithm does with a typical, well-exposed star.