
Fixsen, D. J., Hanisch, R. J., Mather, J. C., Nieto-Santisteban, M. A., Offenberg, J. D., Sengupta, R., & Stockman, H. S. 2000, in ASP Conf. Ser., Vol. 216, Astronomical Data Analysis Software and Systems IX, eds. N. Manset, C. Veillet, D. Crabtree (San Francisco: ASP), 539

Cosmic Ray Rejection and Data Compression for NGST

D. J. Fixsen1, R. J. Hanisch2, J. C. Mather3, M. A. Nieto-Santisteban4, J. D. Offenberg5, R. Sengupta6, H. S. Stockman7

Abstract:

We present an algorithm that sifts through multiple non-destructive reads of an image to find and reject cosmic ray events and other glitches. The resulting image is then compressed, first with a lossy compression algorithm and then with a lossless one. The final compression ratio is of order $4n$ (where $n$ is the number of reads) for simulated data. This degree of data compression is required to fit the NGST data into the anticipated downlink bandwidth. The computational requirements are modest, suggesting that the key limitation may be the bus from the A-to-D converter to the computer rather than the computation itself.

1. Description of Algorithm

The algorithms introduced in Offenberg et al. (1999) and Nieto-Santisteban et al. (1999) have been optimized to reduce computing requirements and improve performance. The process uses uniformly sampled non-destructive reads. Uniform sampling reduces the readout bandwidth and smooths the detector thermal load. The pixels are processed independently, both to simplify the program and to guarantee that errors are not correlated across pixels.

First, saturated data are marked and excluded. Next, for each pixel, the set of reads (64 in our test case) is fit to a straight line. The interval with the largest deviation from the line (in either direction) is compared with the expected noise. If it deviates by more than $4.5\sigma$ (the optimum threshold for the test case), the interval is removed from the fit and the process is repeated. Most of the processing time is spent in this cosmic ray rejection.
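
As an illustration of this step, the following sketch fits a count rate from the read-to-read intervals and iteratively rejects the worst outlier. It is a simplified stand-in for the optimized kernel listed in Section 4; the function and parameter names (fitSlopeRejectCRs, readNoise, kSigma) are ours, not the flight code's.

  #include <algorithm>
  #include <cmath>
  #include <vector>

  // Simplified sketch of the interval-clipping step described above.  A
  // cosmic ray appears as one anomalously large read-to-read jump; refit
  // the mean rate and reject the worst interval until every surviving
  // jump is within kSigma of its expected noise.
  float fitSlopeRejectCRs(const std::vector<float>& reads,
                          float readNoise,       // rms read noise, in counts
                          float kSigma = 4.5f,   // rejection threshold
                          int* nRejected = 0) {
    const int nInt = static_cast<int>(reads.size()) - 1;  // # of intervals
    if (nInt < 1) return 0.0f;
    std::vector<bool> used(nInt, true);
    int nUsed = nInt;
    float rate = 0.0f;
    while (true) {
      float sum = 0.0f;                // mean rate from surviving intervals
      for (int i = 0; i < nInt; ++i)
        if (used[i]) sum += reads[i + 1] - reads[i];
      rate = sum / nUsed;
      int worst = -1;                  // interval deviating most from rate
      float worstDev = 0.0f;
      for (int i = 0; i < nInt; ++i) {
        if (!used[i]) continue;
        float dev = std::fabs(reads[i + 1] - reads[i] - rate);
        if (dev > worstDev) { worstDev = dev; worst = i; }
      }
      // Expected interval noise: Poisson term plus read noise of two reads.
      float sigma = std::sqrt(std::max(rate, 0.0f) + 2.0f*readNoise*readNoise);
      if (worst < 0 || worstDev <= kSigma*sigma || nUsed <= 2) break;
      used[worst] = false;             // drop the cosmic-ray interval
      --nUsed;
    }
    if (nRejected) *nRejected = nInt - nUsed;
    return rate;                       // estimated counts per read interval
  }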

Next, a weighted fit is applied to the remaining data. The optimum fit depends on the signal. High-signal uncertainties are dominated by photon (electron) counting noise, and the optimum fit weights the endpoints. Low-signal uncertainties are dominated by readout noise, and the optimum fit is uniform weighting. We calculate the weights for these and six intermediate signal/noise ratios and choose the best weighting scheme for the signal. By precomputing the weight table for all 8 signal/noise levels (under 1 sec) and all possible segment lengths, we save time in the weighted fit.
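
The payoff of the precomputation can be seen in a toy version of the table: build a weight vector once for each signal/noise level and segment length, so that each pixel fit reduces to a lookup and a dot product. The names (buildWeightTable, fitSegment) and the linear blend between unweighted least-squares and endpoint weighting are illustrative assumptions; the actual tables are derived from the covariance of the ramp.

  #include <vector>

  // Toy weight table: wt[level][length] is a weight vector whose dot
  // product with the reads of one segment estimates the slope.
  using WeightTable = std::vector<std::vector<std::vector<float>>>;

  WeightTable buildWeightTable(int maxLen, int nLevels = 8) {
    WeightTable wt(nLevels);
    for (int k = 0; k < nLevels; ++k) {
      float f = float(k) / (nLevels - 1); // 0 = read-noise, 1 = photon limit
      wt[k].resize(maxLen + 1);
      for (int len = 2; len <= maxLen; ++len) {
        std::vector<float> w(len);
        float mid = 0.5f * (len - 1);
        float norm = 0.0f;              // sum of (i - mid)^2 for OLS weights
        for (int i = 0; i < len; ++i) norm += (i - mid) * i;
        for (int i = 0; i < len; ++i) {
          float ols = (i - mid) / norm; // unweighted least-squares slope
          float end = 0.0f;             // endpoint estimator
          if (i == 0)       end = -1.0f / (len - 1);
          if (i == len - 1) end =  1.0f / (len - 1);
          w[i] = (1.0f - f) * ols + f * end;
        }
        wt[k][len] = w;
      }
    }
    return wt;
  }

  // Per pixel, the fit is then just one lookup and one dot product:
  float fitSegment(const float* reads, int len, const WeightTable& wt, int k) {
    float s = 0.0f;
    for (int i = 0; i < len; ++i) s += reads[i] * wt[k][len][i];
    return s;                           // estimated counts per read interval
  }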

After the fit, we reduce the dynamic range and equalize the noise across pixels by taking the square root of the slope plus an offset that compensates for the readout noise. Finally, an adjustable scaling retains $n_b$ bits of noise after conversion to an integer. Thus $N$ (64) 16-bit reads are converted to a single 8-bit byte. This is further compressed without loss (see Nieto-Santisteban et al. 1999) to approximately 4 bits per pixel (if we keep 2 bits of noise).
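
A minimal sketch of this scaling step, assuming the offset equals the read-noise variance and that a scale factor of $2^{n_b+1}$ retains $n_b$ bits of the (roughly constant) post-square-root noise; the names are illustrative:

  #include <algorithm>
  #include <cmath>
  #include <cstdint>

  // Square-root scaling: with the read-noise variance added under the
  // square root, the noise of the transformed value is roughly constant
  // (~1/2 in square-root units), so a scale of 2^(nb+1) keeps about nb
  // bits of noise after rounding to an 8-bit integer.
  std::uint8_t sqrtCompress(float slope,        // fitted counts per interval
                            float readVariance, // offset: read-noise variance
                            int nb) {           // noise bits to retain
    if (slope <= 0.0f) return 0;                // clip non-positive signals
    float scale = std::ldexp(1.0f, nb + 1);     // 2^(nb+1)
    float v = scale * std::sqrt(slope + readVariance) + 0.5f;  // round
    return static_cast<std::uint8_t>(std::min(v, 255.0f));     // clamp
  }

For example, assuming a 16-bit full well and 64 reads, the maximum slope of about 1024 counts per interval maps to roughly $8\sqrt{1024} \approx 256$ for $n_b = 2$, just filling the 8-bit range.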

The final results are robust. Even integration times long enough that most of the pixels are affected by cosmic rays can be effectively cleaned, allowing longer integrations than are practical with Fowler sampling.

2. Effectiveness

Figure 1: Left: final read of a 10000 s integration. Right: the processed image.

Figure 1 demonstrates the effectiveness of cosmic ray rejection. The cosmic rays in the raw image (or, equivalently, in a Fowler-sampled image) make long integrations ($>1000$ sec) undesirable with NGST. If we take non-destructive samples every 30 seconds during the integration, it is possible to obtain a clean image over long integration times, with high photometric reliability. The images shown in Figure 1 are stretched to the same grey scale and assume a cosmic ray rate of 4 events/sec/cm$^2$, the low-end estimate for cosmic ray rates.

3. Photometry

The lower pair of plots in Figure 2 compares photometry for Fowler sampling and for uniform sampling with on-board processing, using five 2000-second integrations. The resulting images are processed by throwing out the outliers and running IRAF DAOPHOT on the final image. The Fowler-sampled data are still contaminated with cosmic rays, as expected, since the ideal integration time for Fowler sampling is shorter than 2000 seconds; with on-board deglitching, the ideal integration time for uniform sampling is longer. The upper pair of plots in Figure 2 compares the optimum twenty 500-sec Fowler-sampled images with a single 10000-sec uniformly sampled image after deglitching. In all cases, a total of 320 samples were taken over 10000 seconds of observation. The uniformly sampled image is of higher quality even though its downlink volume is only 1/40 as large (after data compression).

Figure 2: Error comparison of uniform and Fowler sampling.

4. Cosmic Ray Rejection Program


// This listing is the kernel of the on-board program.  The symbols M (max
// breakpoints per pixel), VP and Kp (read-noise variance and the squared
// 4.5-sigma threshold), K and SVals[] (signal/noise level count and level
// boundaries), WT (the precomputed weight tables), F (offset), and np
// (noise-bit scale) are defined elsewhere in the surrounding program.
void cr_rej(                 // Reject cosmic rays and do linear fit of data.
    float **values,          // Input:  data cube (nr x nc pixels, N reads)
    int nr, int nc, int N,   // Input:  data cube dimensions
    int Full,                // Input:  count at full well (saturation)
    image *data, image *err) // Output: image and cosmic-ray count
{
  register int t, b, k; int p, T[M];  // T[] holds segment breakpoints
  register float s, x, y; float z, *R, *S, *U, *W;
  const int n = N - 1;                // index of the last read (reads 0..n)
  for (p = 0; p < nr*nc; p++) {       // all pixels
    R = values[p];
    if (R[1] > Full) { s = 0; b = N; }          // bad pixel
    else {
      t = n; b = 0;                   // trim saturated reads at the end
      while (R[t--] > Full) ;
      while ((T[b++] = ++t) < n) ;
      s = (R[*T] - *R) / *T;          // average signal per read interval
      while (1) {                     // iterate until no CR remains
        x = b = 0; W = R;
        do { S = R + T[b++];          // scan all segments for the worst
          while (W != S)              // squared interval deviation
            if ((y *= y = s + *W - *++W) > x) { U = W; x = y; }
        } while (W++ < R + n);        // full list
        if (x < (s + VP)*Kp) break;   // no more cosmic rays
        s += (s - *U + *--U)/(n - b); // remove the CR jump from the signal
        t = U - R;
        while ((T[b] = T[b-1]) > t && --b) ;
        T[b] = t;                     // file the new breakpoint, sorted
      }
      for (k = K; k > 0 && s < SVals[--k]; ) ;  // pick signal/noise level
      W = WT[k][*T];
      if (b < 3)                      // 0 or 1 CR: a single dot product
        for (U = W + N, s = F; W != U; s += *R++ * *W++) ;
      else {                          // sum the weighted segments
        for (s = y = t = b = 0; t < n; ) {
          U = W + T[b] - t + 1;
          z = WT[k][N-2-T[b]+t][N] + W[N];      // row weights
          y += W[N];
          for (x = 0; W < U; x += *R++ * *W++) ;  // sum this segment
          s += x*z; t = T[b++] + 1; W = WT[k][T[b] - t];
        }
        s = s/y + F;                  // weighted mean signal
      }
    }
    err->setval(p, --b);              // record number of CRs
    data->setval(p, (s > 0) ? int(np*sqrt(s)) : 0); // sqrt-scaled output
  }
}
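
For context, a call to the kernel might look like the following; the image class, the cube layout, and the full-well value are assumptions, since the listing above omits the surrounding program.

  // Hypothetical driver: fit and compress every pixel of one exposure.
  void process_exposure(float **cube,          // cube[pixel][read]
                        int nr, int nc, int N, // rows, columns, reads
                        image *out, image *crmap) {
    const int fullWell = 60000;                // illustrative saturation count
    cr_rej(cube, nr, nc, N, fullWell, out, crmap);
  }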

5. Timing Measurements

As Table 1 indicates even though the processing (including CR rejection) of this algorithm takes longer than the processing of Fowler sampled data, the total time is dominated by IO and in particular input. If the input speed can be raised to the data rate the remaining processing for either the Fowler sampling or this algorithm is accommodated on a modest computer.

Table 1: Uniform sampling with CR rejection vs. Fowler sampling.

                       CR Processed             Fowler Processed
  Total input:         10.7 GB                  10.7 GB
  Max data rate:       10 MB/sec                100 MB/sec
  Input time:          5170 sec (162 op/pix)    5170 sec (162 op/pix)
  Process time:        1150 sec (36 op/pix)     200 sec (6 op/pix)
    CR identification: 725 sec (23 op/pix)      -
    Weighted fit:      420 sec (13 op/pix)      -
  Compression time:    70 sec (2 op/pix)        -
  Output time:         100 sec (3 op/pix)       250 sec (8 op/pix)
  Total output:        50 MB                    128 MB
  Total time:          6440 sec (203 op/pix)    5600 sec (176 op/pix)

6. Acknowledgements

These studies are supported by the NASA Remote Exploration and Experimentation Project (REE), which is administered at the Jet Propulsion Laboratory under Dr. Robert Ferraro, Project Manager.

References

Consultative Committee for Space Data Systems 1997, Lossless Data Compression, CCSDS Blue Book 121.0-B-1

The NGST Study Team, 1997, The Next Generation Space Telescope: Visiting a Time When Galaxies Were Young, ed. H.S. Stockman. Available at http://oposite.stsci.edu/ngst/initial-study

Im, M., & Stockman, H. S. 1998, in ASP Conf. Ser., Vol. 133, Science with the NGST, eds. E. P. Smith & A. Koratkar (San Francisco: ASP), 263

Nieto-Santisteban, M. A. et al. 1999, in ASP Conf. Ser., Vol. 172, Astronomical Data Analysis Software and Systems VIII, ed. D. M. Mehringer, R. L. Plante, & D. A. Roberts (San Francisco: ASP), 137

Offenberg, J. D. et al. 1999, in ASP Conf. Ser., Vol. 172, Astronomical Data Analysis Software and Systems VIII, ed. D. M. Mehringer, R. L. Plante, & D. A. Roberts (San Francisco: ASP), 141

Stockman, H. S. et al. 1998, Cosmic Ray Rejection and Image Processing Aboard the Next Generation Space Telescope, NGST Workshop (in press). Available at http://ngst.gsfc.nasa.gov/public/doc_172_2/index.html



Footnotes

1. D. J. Fixsen: Raytheon ITSS
2. R. J. Hanisch: Space Telescope Science Institute
3. J. C. Mather: NASA Goddard Space Flight Center
4. M. A. Nieto-Santisteban: Space Telescope Science Institute
5. J. D. Offenberg: Raytheon ITSS
6. R. Sengupta: Raytheon ITSS
7. H. S. Stockman: Space Telescope Science Institute

© Copyright 2000 Astronomical Society of the Pacific, 390 Ashton Avenue, San Francisco, California 94112, USA