After interpolating any meteorological values that exist, you may have to subtract dark current and/or sky values. If there are no data for dark current, the reduction program will complain but continue. Sky values may have already been subtracted (e.g., in CCD measurements or other data marked as RAWMAG instead of SIGNAL).
DARK CURRENT: Dark current is usually a strong function of detector temperature. If you have regulated the detector temperature, then only a weak time dependence might be expected --- perhaps only a small difference from one night to the next, or a weak linear drift during each night. You will have the choice of the model to be used in interpolating dark values.
If the detector temperature is measured, you should look for a temperature dependence that is the same for all nights. The program will show you all the dark data, with a separate symbol for each night, as a function of temperature. If all nights seem to show the same behavior, you can then fit a smoothing function of temperature to the dark values. You can choose a constant, a simple exponential (i.e., a straight line in a plot of log(DARK) vs. temperature), or the sum of two exponential terms.
Although in principle these ought to be of the form exp(-E/kT), the range of temperature available is usually insufficient to distinguish between this and a simple exp(a + bT) term. Furthermore, though the temperatures ought to be absolute temperatures in Kelvins, you may have only some arbitrary numbers available, which might even assume negative values. In this case, an attempt to fit the correct physical form would blow up, but the simple exponential term might still give reasonable results. So the simpler form is actually used.
If the plot of log(DARK) vs. temperature bends up at the ends, or at least at the right end, you should be able to get a good two-term fit. If it looks linear, you can just fit a single line. You also have the option of adopting a single mean value.
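As an illustration of these fitting options, here is a minimal sketch in Python with made-up numbers (it is not the actual MIDAS/PEPSYS code): a single mean value, a single exponential fitted as a straight line in log(dark) vs. temperature, and a two-term exponential fit.

    # Illustrative sketch of the dark-current fitting options; all data are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    temp = np.linspace(-25.0, 5.0, 30)                     # arbitrary temperature units
    dark = (np.exp(0.2 + 0.05 * temp)
            + np.exp(-3.0 + 0.25 * temp)) * rng.normal(1.0, 0.03, temp.size)

    # Option 1: a single mean value.
    dark_const = dark.mean()

    # Option 2: a simple exponential, i.e. a straight line in log(dark) vs. temperature.
    b1, a1 = np.polyfit(temp, np.log(dark), 1)
    one_term = np.exp(a1 + b1 * temp)

    # Option 3: the sum of two exponential terms, useful when the log plot
    # bends upward at the warm end.
    def two_term(T, a, b, c, d):
        return np.exp(a + b * T) + np.exp(c + d * T)

    p0 = (a1, b1, a1 - 2.0, 2.0 * b1)                      # start from the one-term fit
    params, _ = curve_fit(two_term, temp, dark, p0=p0, maxfev=20000)

    print(dark_const, params)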
On the other hand, if the data are not consistent from night to night, or show a temperature dependence that is different from the expected form, or if you have no temperature information at all, you may have to interpolate the dark data simply as a function of time. As with the weather data, you have a choice of polygon, linear, or constant fits. Remember that a polygon fit uses every datum, right or wrong, and so is not robust.
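The three time-interpolation choices could be sketched as follows (again an illustrative Python fragment, assuming you have the dark measurements for one night and the times at which corrections are needed):

    # Sketch of the polygon, linear, and constant time-interpolation options.
    import numpy as np

    t_dark = np.array([0.5, 2.0, 4.5, 6.0, 8.5])      # hours since start of night
    dark   = np.array([1.8, 1.7, 1.9, 2.4, 2.0])      # counts/s
    t_obs  = np.linspace(0.0, 9.0, 19)                # times needing a dark value

    # "Polygon" fit: straight segments joining every datum.  It uses each
    # point, right or wrong, so one bad reading propagates into the result.
    dark_polygon = np.interp(t_obs, t_dark, dark)

    # Linear fit: one straight line through the whole night.
    slope, intercept = np.polyfit(t_dark, dark, 1)
    dark_linear = intercept + slope * t_obs

    # Constant fit: a single value for the night.
    dark_constant = np.full_like(t_obs, dark.mean())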
After removing a temperature-dependent fit, you will see the remaining residuals plotted as a function of time. This provides a double check on the adequacy of dark subtraction.
SKY SUBTRACTION: Sky data must be treated separately for each night and passband. Here, your options are more numerous. You can choose the usual linear or constant fits; but those are likely to be a poor representation of sky brightness.
More conventional choices are to use either the preceding or following sky for each star observation, or the ``nearest'' sky (in which both time and position separations are used to decide what ``nearest'' means). Linear interpolation between successive sky measurements (i.e., a polygon fit) is also an option. These choices, while conventional, are not robust. They are sensitive to gross errors in sky data, such as star observations that have been marked as sky by mistake.
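A sketch of what ``nearest''-sky selection might look like is given below. The relative weighting of time and angular separation used here (one degree of separation counted as one minute of time) is purely illustrative; the manual does not specify the program's actual metric.

    # Illustrative "nearest"-sky selection combining time and position separations.
    import numpy as np

    def angular_sep_deg(ra1, dec1, ra2, dec2):
        """Great-circle separation in degrees (inputs in degrees)."""
        ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
        cos_sep = (np.sin(dec1) * np.sin(dec2)
                   + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
        return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

    def nearest_sky(star, sky_list, deg_per_minute=1.0):
        """Return the sky measurement 'closest' to a star observation.

        star     -- dict with 't' (minutes), 'ra', 'dec' (degrees)
        sky_list -- list of dicts with the same keys plus 'value'
        """
        def distance(sky):
            dt = abs(sky['t'] - star['t'])                        # minutes
            dpos = angular_sep_deg(star['ra'], star['dec'],
                                   sky['ra'], sky['dec'])          # degrees
            return dt + dpos / deg_per_minute
        return min(sky_list, key=distance)

    star = {'t': 120.0, 'ra': 85.0, 'dec': -5.0}
    skies = [{'t': 110.0, 'ra': 84.8, 'dec': -5.1, 'value': 3.2},
             {'t': 131.0, 'ra': 85.1, 'dec': -4.9, 'value': 3.5}]
    print(nearest_sky(star, skies)['value'])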
One might argue that bad sky data will stand out in the plots discussed below, and that careful users will remove them and re-reduce their data. One might also argue that really bad sky values will cause the stellar data to be discarded or down-weighted in the later reductions, so that a robust fit at this stage is not absolutely necessary. However, such arguments are not completely convincing. Therefore a more elaborate sky subtraction option is available, which tries to model the sky brightness, discriminating against outlying points in a robust regression.
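The fragment below sketches one way such a robust fit could be done, using only airmass as the independent variable and a soft-L1 loss to down-weight outliers. The program's actual sky model is more elaborate (it also uses lunar elongation and twilight information), so this is only an illustration of the idea.

    # Illustrative robust sky fit that resists gross errors (e.g. a star
    # observation marked as sky by mistake).  All data are synthetic.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    airmass = rng.uniform(1.0, 2.5, 40)
    sky = 2.0 + 1.5 * (airmass - 1.0) + rng.normal(0.0, 0.1, 40)
    sky[5] += 30.0                        # one gross error: a star measured as "sky"

    def residuals(p, x, y):
        return (p[0] + p[1] * (x - 1.0)) - y

    fit = least_squares(residuals, x0=[1.0, 1.0], args=(airmass, sky),
                        loss='soft_l1', f_scale=0.3)

    model = fit.x[0] + fit.x[1] * (airmass - 1.0)
    resid = sky - model
    print(fit.x, np.abs(resid).max())     # the outlier stands out in the residuals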
To help you choose the best method, the program displays three plots of sky brightness against different independent variables: time, airmass, and lunar elongation. In the time plot, the times of moonrise and moonset are marked, and twilight data are marked t; the figure below shows an example. In the other two plots, points with the Moon above the horizon are marked with the letter M, points with the Moon below the horizon are marked by a minus sign, and twilight data are marked t. In these and other plots, the characters ^ on the top line or v on the lower edge indicate points outside the plotting area; and $ indicates multiple overlapping points. You can re-display the plots if you want to look at them again before deciding which sky-subtraction method to use.
Figure: Plot of sky brightness as a function of time
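If you want to examine the same relationships outside the program, a rough equivalent of the three diagnostic plots can be produced along these lines; the variable names and synthetic data are purely illustrative, and the program's own character-plot conventions (M, minus sign, t) are replaced by ordinary plot symbols.

    # Illustrative versions of the sky-vs-time, sky-vs-airmass, and
    # sky-vs-lunar-elongation plots, using synthetic data.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    n = 50
    t_hr = np.sort(rng.uniform(0.0, 9.0, n))
    airmass = rng.uniform(1.0, 2.2, n)
    elong = rng.uniform(20.0, 160.0, n)          # lunar elongation, degrees
    moon_up = t_hr > 4.0                          # pretend the Moon rises mid-night
    sky = 2.0 + 0.5 * (airmass - 1.0) + 1.5 * moon_up + rng.normal(0, 0.1, n)

    fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))
    axes[0].plot(t_hr, sky, 'k.')
    axes[0].axvline(4.0, ls='--', label='moonrise')
    axes[0].set_xlabel('time (hours)')
    axes[0].set_ylabel('sky brightness')
    axes[0].legend()

    for ax, x, label in ((axes[1], airmass, 'airmass'),
                         (axes[2], elong, 'lunar elongation (deg)')):
        ax.plot(x[moon_up], sky[moon_up], 'r^', label='Moon up')
        ax.plot(x[~moon_up], sky[~moon_up], 'bv', label='Moon down')
        ax.set_xlabel(label)
    axes[1].legend()
    plt.tight_layout()
    plt.show()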
Note that no one method is best for all circumstances. While modelling the sky should work well under good conditions, there are certainly cases in which it will fail.
For example, when using DC or charge-integration equipment, an observer commonly uses the same gain setting for both (star+sky) and sky alone. This is perfectly appropriate, as it makes any electrical zero-point error cancel out in taking the difference. But often the limited precision available --- for example, a 3-digit digital voltmeter --- means that the sky brightness is measured with a precision of barely one significant figure when bright stars are observed. If a bright star reading is 782 and the sky alone is 3, one does not have much information to use in modelling the sky.
Another case where one does better to subtract individual sky readings is observations made during auroral activity. While one would prefer not to use such data, because of the rapidity of sky variations, they must sometimes be used. Here again, subtraction of the nearest sky reading is better than using a model, because the rapid fluctuations are not modelled. Likewise, when terrestrial light sources around the horizon make the sky brightness change rapidly with azimuth and/or time, no simple sky model would be adequate.
If it is necessary to make measurements of some objects through two or more different focal-plane diaphragms, these measurements cannot be combined directly. Ordinarily, all observations to be reduced together should be measured through the same aperture, because the instrumental system changes in an unpredictable way with aperture size. Even the sky measurements are not exactly proportional to the diaphragm area. However, it may be possible to reduce program objects observed with a non-standard aperture as if they were measured through the standard one, and then apply a suitable transformation after the fact. This means that a sufficient number of calibration measurements of stars having a considerable range in color must be taken, using both aperture sizes, to determine the transformation between the two instrumental systems. In such cases, individual sky readings taken through the same apertures must be used in the reductions. The reduction program will complain if you try to intermix data taken through different diaphragms, and data taken with a peculiar aperture will be rejected if there are no corresponding sky measurements.
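If such an aperture transformation is needed, it might be determined along the following lines. This is an illustrative sketch only: a linear colour dependence is assumed, and all numbers are made up; inspect the residuals of the fit to see whether a more elaborate form is required.

    # Illustrative aperture-to-aperture transformation derived from calibration
    # stars measured through both diaphragms.
    import numpy as np

    # Instrumental magnitudes through each aperture, and colours, for the
    # calibration stars (synthetic values).
    mag_std_ap = np.array([ 8.12,  9.03, 10.45, 11.20, 12.31,  9.87])
    mag_odd_ap = np.array([ 8.15,  9.08, 10.47, 11.26, 12.33,  9.91])
    colour     = np.array([ 0.10,  0.55,  0.32,  1.20,  0.85,  1.45])   # e.g. B-V

    # Fit  (mag_std - mag_odd) = a + b * colour  over the calibration stars.
    delta = mag_std_ap - mag_odd_ap
    b, a = np.polyfit(colour, delta, 1)

    # Apply the transformation to a programme star measured only through the
    # non-standard aperture (its colour must be known, or iterated).
    prog_mag_odd, prog_colour = 13.42, 0.66
    prog_mag_std = prog_mag_odd + a + b * prog_colour
    print(a, b, prog_mag_std)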
Finally, when very faint stars are observed (as in setting up secondary standards for use with a CCD), so that the sky is a large fraction of the star measurement, it may be necessary to subtract individual sky readings simply because the model used is not sufficiently accurate. The model is reasonably good, but is not good enough to produce estimates free of systematic error.
In any case, the plots of sky vs. time, airmass, and lunar elongation should prove useful in assessing the quality of the sky data, and in choosing the best subtraction strategy. Furthermore, the residuals from the sky model may be useful in identifying bad sky measures that should be removed; so it is a good idea to run the sky model, even if you decide not to subtract its calculated values from the star data.