Tiny Tim Frequently Asked Questions


Can I use Tiny Tim models in programs like DAOPHOT?

This is actually the most frequently asked question. Most profile-fitting photometry programs rely on their own non-physical (analytic) PSF models, so they cannot readily be adapted to use externally supplied model PSFs. So far I haven't heard of any success stories.


Tiny Tim doesn't know about the linear ramp or methane quad filters, or some other filter. Is there some way around this?

Some filters are not included because they are "weird" or cover a set of separate wavelength ranges, like the linear ramps. However, you can generate purely monochromatic PSFs at any given wavelength, which will do nicely for these narrow filters. When Tiny Tim asks for the filter name, just enter "none" or "mono"; it will then ask for the wavelength to use (in nanometers). As of version 4.2, you can also provide your own filter weighting curve. See the manual for details.


How can I adjust the focus of the Tiny Tim models?

The focus parameter (Z4) is an entry in the parameter file produced by tiny1. It is given in waves of defocus at 547 nm (for historical reasons). One micron of motion in the HST secondary mirror results in a 0.011 wave difference in focus. If the secondary mirror is moved closer to the primary, the value is negative. You can change the number to whatever you wish. Typically, no more than 5 microns of secondary motion is seen (for the current cameras). Note that the WFPC2 PC and WFC models have a field dependent focus term, which can be turned off by setting the appropriate flag in the parameter file.
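
As a worked example of the numbers above, here is a minimal Python sketch (not part of Tiny Tim; the function name is made up for illustration, and only the 0.011 waves-per-micron figure and the sign convention come from this answer):

    # Convert HST secondary mirror despace (microns) into the Z4 focus entry
    # (waves of defocus at 547 nm) edited in the tiny1 parameter file.
    # Sign convention from above: motion toward the primary is negative.
    WAVES_PER_MICRON = 0.011

    def z4_from_despace(despace_microns):
        """Return waves of defocus at 547 nm for a given secondary despace."""
        return WAVES_PER_MICRON * despace_microns

    # A typical worst case quoted above: 5 microns toward the primary.
    print(z4_from_despace(-5.0))   # -0.055 waves of defocus at 547 nm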


Can Tiny Tim model the NICMOS coronographic mode?

No. That would seriously complicate the code.


When modelling WFPC2 or FOC+COSTAR, why doesn't Tiny Tim ask for the date of the observation, like it does for WF/PC-1 or FOC?

The date of observation was used on the aberrated cameras to determine the correct focus position due to desorption and mirror moves. Back then, desorption was significant and required frequent compensation by moving the HST secondary mirror. However, it leveled off enough by the time the repaired cameras were working that it was no longer important. The mirror is now adjusted roughly every six months. Between times of adjustment, the uncertainties in focus caused by breathing and pointing-dependent effects are greater than defocus caused by desorption.


When I subtract a Tiny Tim model from an observed WFPC2 PSF, there are large residuals in the diffraction spikes. What can be done?

The diffraction spikes consist of alternating bright and dark bands extending out from the star. The positions and intensities of these bands are extremely sensitive to a number of factors, including the position of the star, the focus of the telescope, the object color, and so on. As a result, the diffraction spikes in Tiny Tim models are poor matches to the observed ones, and PSFs taken from other observations suffer from the same problem, so there is no real way around this. In any case, you should never use the diffraction spikes to normalize a PSF when subtracting, whether it is an observed or a model PSF.


What's the deal with the pixel scattering function in WFPC2?

The pixels of a CCD are not separate bins, isolated from their neighboring pixels. They are defined in one direction by the electric fields of the electrodes, which are implanted on the surface of the silicon layer, and in the other direction by channel stops. Neither of these "barriers" is completely impervious. When a photon strikes within a pixel, it is converted to an electron, which is attracted to the electrode. However, the electric field in some parts of the pixel is weak enough that some electrons can move into an adjacent pixel. At shorter wavelengths, photons can also be scattered by the electrode structure before they are converted to electrons. Some of the photons are simply scattered away and never detected, and some electrons may be absorbed before reaching the electrodes. Either way, some charge leaks into other pixels while a lesser amount is lost completely, effectively blurring the image. This is called charge diffusion. Photons striking near the center of a pixel have a better chance of being detected in that pixel, while those striking near the edges have a greater chance of being detected in an adjacent pixel. This means that there is essentially a subpixel response function (which I call the pixel scattering function).

In front-side illuminated CCDs like those used in WFPC2, the electrode structure is on the visible side. At short wavelengths (< 450 nm) photons hitting near the pixel edges are scattered by the electrodes. At long wavelengths (> 800 nm) the photons are not converted until they penetrate far down into the silicon layer, away from the well-defined electric fields of the electrodes near the surface. This means that more electrons are likely to migrate to other pixels. Between these wavelength ranges, photons are less likely to be scattered by the electrodes and are converted closer to the surface, where stronger electric fields exist. Thus, blurring is expected to be worse at short and long wavelengths, and optimal around visual wavelengths. In back-side illuminated CCDs (like those in WF/PC-1 and the upcoming Advanced Camera) the electrodes are not exposed, so short wavelength light is not scattered. At longer wavelengths, photons are converted deep in the silicon layer, near the electrodes.

CCDs like those used in WFPC2 have been tested in the lab at JPL in order to characterize the subpixel response. This was done by placing a grid of pinholes over the CCD and illuminating it with a flat light source. The pinholes were much smaller than a pixel and were stepped to sample many phases across the pixels. For each pinhole position, the distribution of signal in the surrounding pixels was measured, and from these measurements the response function was determined. Unfortunately, electronics problems affected the signal, significantly reducing the accuracy of the experiment.

These tests, along with some estimates from on-orbit images, showed that the response function is best at around 500 nm, where the peak throughput is about 75% in the pixel center. At shorter and longer wavelengths it decreases to about 50%.

Tiny Tim convolves normally sampled PSFs with a 3 by 3 pixel kernel to simulate the blurring caused by these effects. This kernel has a peak of 75%, so it is just about right for visual wavelengths. The same kernel is used at the other wavelengths too, so those PSFs can be expected to be a little too sharp. In reality, the correct way to do this would be to convolve a subsampled response function with a subsampled PSF and then resample the result, accounting for the correct positioning of the PSF. However, the response function is not known well enough to do this.
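
For illustration, here is a minimal Python sketch of this blurring step. Only the 75% central value comes from the text above; the off-center weights are an assumption that simply spreads the remaining 25% evenly over the eight neighbors, so the actual Tiny Tim kernel may differ:

    import numpy as np
    from scipy.signal import convolve2d

    # 3 x 3 pixel scattering kernel: 75% of the charge stays in the central
    # pixel; the remaining 25% is split evenly among the neighbors (assumed).
    kernel = np.full((3, 3), 0.25 / 8.0)
    kernel[1, 1] = 0.75

    def blur_psf(psf):
        """Convolve a normally sampled PSF with the pixel scattering kernel."""
        return convolve2d(psf, kernel, mode="same", boundary="fill")

    # psf = ...          # a normally sampled PSF image (2-D numpy array)
    # blurred = blur_psf(psf)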