The Astrophysical Journal, 652:610-628, 2006 November 20
© 2006. The American Astronomical Society. All rights reserved. Printed in U.S.A.

BAYESIAN ESTIMATION OF HARDNESS RATIOS: MODELING AND COMPUTATIONS
Taeyoung Park,1 Vinay L. Kashyap,2 Aneta Siemiginowska,2 David A. van Dyk,3 Andreas Zezas,2 Craig Heinke,4 and Bradford J. Wargelin2
Received 2005 December 24; accepted 2006 June 14
ABSTRACT

A commonly used measure to summarize the nature of a photon spectrum is the so-called hardness ratio, which compares the numbers of counts observed in different passbands. The hardness ratio is especially useful to distinguish between and categorize weak sources as a proxy for detailed spectral fitting. However, in this regime classical methods of error propagation fail, and the estimates of spectral hardness become unreliable. Here we develop a rigorous statistical treatment of hardness ratios that properly deals with detected photons as independent Poisson random variables and correctly deals with the non-Gaussian nature of the error propagation. The method is Bayesian in nature and thus can be generalized to carry out a multitude of source-population-based analyses. We verify our method with simulation studies and compare it with the classical method. We apply this method to real-world examples, such as the identification of candidate quiescent low-mass X-ray binaries in globular clusters and tracking the time evolution of a flare on a low-mass star.

Subject headings: methods: statistical -- stars: flares -- X-rays: binaries
Online material: color figures

1. INTRODUCTION

The shape of the spectrum of an astronomical source is highly informative as to the physical processes at the source. However, often detailed spectral fitting is not feasible due to various constraints, such as the need to analyze a large and homogeneous sample for population studies, or there being insufficient counts to carry out a useful fit. In such cases, a hardness ratio, which requires the measurement of accumulated counts in two or more broad passbands, becomes a useful measure to quantify and characterize the source spectrum.5 A hardness ratio is defined as either the ratio of the counts in two bands called the ``soft'' and ``hard'' bands or a monotonic function of this ratio. We consider three types of hardness ratios,

    simple ratio,          R = S / H;
    color,                 C = log10( S / H );
    fractional difference, HR = (H - S) / (H + S);          (1)

where S and H are the source counts in the two bands, called the soft and hard passbands. The simple formulae above are modified for the existence of background counts and instrumental effective areas.

1 Department of Statistics, Harvard University, One Oxford Street, Cambridge, MA 02138; tpark@stat.harvard.edu.
2 Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138; kashyap@head.cfa.harvard.edu, aneta@head.cfa.harvard.edu, azezas@head.cfa.harvard.edu, bwargelin@head.cfa.harvard.edu.
3 Department of Statistics, University of California, 364 ICS Building One, Irvine, CA 92697-1250; dvd@ics.uci.edu.
4 Northwestern University, 2131 Tech Drive, Evanston, IL 60208; cheinke@northwestern.edu.
5 In the context of Chandra data, the passbands often used are called ``soft'' (0.3-0.9 keV), ``medium'' (0.9-2.5 keV), and ``hard'' (2.5-8.0 keV) bands. Sometimes the soft and medium bands are merged into a single band, also called the soft band (e.g., 0.5-2 keV). In all cases, note that the energies refer to the nominal detector PHA or PI channel boundaries and not to the intrinsic photon energies. The choice of energy or PI channel range is flexible and situational.

Spectral colors in optical astronomy, defined by the standard optical filters (e.g., UBVRIJK; U - B, R - I, etc.), are well known and constitute the main observables in astronomy. They have been widely applied to characterize populations of sources and their evolution. The hardness ratio concept was adopted in X-ray astronomy for early X-ray detectors (mainly proportional counter detectors), which had only a limited energy resolution. The first application of the X-ray hardness ratio was presented in the X-ray variability studies with SAS 3 observations by Bradt et al. (1976). Zamorani et al. (1981) investigated the X-ray hardness ratio for a sample of quasars observed with the Einstein X-ray observatory. They calculated each source intensity in two energy bands (1.2-3 and 0.5-1.2 keV) and discussed a possible evolution of the hardness ratio (the ratio of both intensities) with redshift for their X-ray sample of 27 quasars. Similar ratios have been used in surveys of stars (Vaiana et al. 1981), galaxies (e.g., Kim et al. 1992), and X-ray binaries (Tuohy et al. 1978) studied with Einstein and earlier observatories. In the case of X-ray binaries in particular, they were used to define two different classes of sources (Z-track and atoll sources; Hasinger & van der Klis 1989) depending on their time evolution on HR versus HR diagrams. Since then the concept of the X-ray hardness ratio has been developed and broadly applied in a number of cases, most recently in the analysis of large data samples from the Chandra X-Ray Observatory (Weisskopf et al. 2000) and XMM-Newton (Jansen et al. 2001).

Advanced X-ray missions such as Chandra and XMM-Newton allow for the detection of many very faint sources in deep X-ray surveys (see review by Brandt & Hasinger 2005). For example, in a typical observation of a galaxy, several tens and in some cases even hundreds of sources are detected, most of which have fewer than 50 counts, and many have only a few counts. Similar types of sources are detected in the ChaMP (Chandra Multiwavelength Project) serendipitous survey (Kim et al. 2006). They provide the best quality data to date for studying source populations and their evolution. The most interesting sources are the sources with the smallest numbers of detected counts, because they have never been observed before.


Furthermore, the number of faint sources increases with improved sensitivity limits; i.e., there are more faint sources in deeper observations. Because these faint sources have only a few counts, hardness ratios are commonly used to study properties of their X-ray emission and absorption (e.g., Alexander et al. 2001; Brandt et al. 2001a, 2001b; Giacconi et al. 2001; Silverman et al. 2005). With long observations, the background counts increase, so the background contribution becomes significant and background subtraction fails to work. Still, the background contribution must be taken into account to evaluate the source counts. The sensitivity of X-ray telescopes is usually energy dependent, and thus the classical hardness ratio method can only be applied to observations made with the same instrument.

Usually the measurements carry both detection errors and background contamination. In a typical hardness ratio calculation the background is subtracted from the data, and only the net counts are used. This background subtraction is not a good solution, especially in the low-counts regime (see van Dyk et al. 2001). In general, the Gaussian assumption present in the classical method (see §§ 1.1 and 3) is not appropriate for faint sources in the presence of a significant background. Therefore, the classical approach to calculating the hardness ratio and its errors is inappropriate for low counts. Instead, adopting a Poisson distribution as we do here (see below), hardness ratios can be reliably computed for both the low- and high-counts cases. The Bayesian approach allows us to include information about the background, as well as differences in collecting area, effective areas, and exposure times between the source and background regions.

Our Bayesian model-based approach follows a pattern of ever more sophisticated statistical methods that are being developed and applied to solve outstanding quantitative challenges in empirical astronomy and astrophysics. Bayesian model-based methods for high-energy high-resolution spectral analysis can be found, for example, in Kashyap & Drake (1998), van Dyk et al. (2001), van Dyk & Hans (2002), Protassov et al. (2002), van Dyk & Kang (2004), Gillessen & Harney (2005), and Park et al. (2006). More generally, the monographs on statistical challenges in modern astronomy (Feigelson & Babu 1992, 2003; Babu & Feigelson 1997) and the special issue of Statistical Science devoted to astronomy (May 2004) illustrate the wide breadth of statistical methods applied to an equal diversity of problems in astronomy and astrophysics. Here, we discuss the classical method and present the fully Bayesian method for calculating the hardness ratio for counts data.6

6 A Fortran- and C-based program that calculates the hardness ratios following the methods described in this paper is available for download from http://hea-www.harvard.edu/AstroStat/BEHR/.

1.1. The Classical Method

A conventional hardness ratio is calculated as set out in equations (1), where S and H are the ``soft'' and ``hard'' counts accumulated in two nonoverlapping passbands. In general, an observation will be contaminated by background counts, and this must be accounted for in the estimate of the hardness ratios. The background is usually estimated from an annular region surrounding the source of interest, or from a suitable representative region on the detector that is reliably devoid of sources.
The differences in the exposure time and aperture area of source and background observations are summarized by a known constant r, for which the expected background counts in the source exposure area are adjusted. With the background counts in the soft band (B_S) and the hard band (B_H) collected in an area of r times the source region, the hardness ratio is generalized to

    R = (S - B_S/r) / (H - B_H/r);
    C = log10[ (S - B_S/r) / (H - B_H/r) ];
    HR = [ (H - B_H/r) - (S - B_S/r) ] / [ (H - B_H/r) + (S - B_S/r) ];          (2)

The adjusted counts in the background exposure area are directly subtracted from those in the source exposure area. The above equations can be further modified to account for variations of the detector effective areas by including them in the constant r, in which case the constants for the two bands will be different.7 Errors are propagated assuming a Gaussian regime, i.e.,

    σ_R = R [ (σ_S² + σ_BS²/r²) / (S - B_S/r)² + (σ_H² + σ_BH²/r²) / (H - B_H/r)² ]^(1/2);
    σ_C = [1 / ln(10)] [ (σ_S² + σ_BS²/r²) / (S - B_S/r)² + (σ_H² + σ_BH²/r²) / (H - B_H/r)² ]^(1/2);
    σ_HR = 2 [ (H - B_H/r)² (σ_S² + σ_BS²/r²) + (S - B_S/r)² (σ_H² + σ_BH²/r²) ]^(1/2)
           × [ (H - B_H/r) + (S - B_S/r) ]^(-2);          (3)

where σ_S, σ_H, σ_BS, and σ_BH are typically approximated with the Gehrels prescription (Gehrels 1986),

    σ_X ≈ (X + 0.75)^(1/2) + 1;          (4)

where X is the observed counts and the deviation from X^(1/2) is due to identifying the 68% Gaussian (1σ) deviation with the 16th-84th percentile range of the Poisson distribution.

In addition to the approximation of a Poisson with a faux Gaussian distribution, classical error propagation also fails on a fundamental level to properly describe the variance in the hardness ratios. Because R is strictly positive, its probability distribution is skewed, and its width cannot be described simply by σ_R. The fractional difference HR is better behaved, but the range limits of [-1, +1] cause the coverage rates (see § 3) to be incorrect. The color C is the best behaved, but the error range will be asymmetrical when the errors are large. Moreover, it has been demonstrated (van Dyk et al. 2001) that using background-subtracted source counts (which is the case in the method outlined above) gives a biased estimate of the source intensity, especially for very faint sources. Furthermore, although we can account for gross differences in the detector sensitivity between observations, this is not done in a statistically correct manner, but instead simply by rescaling the background-subtracted counts in each band using additional precomputed correction factors. Finally, the classical hardness ratio method cannot be applied when a source is not detected in one of the two bands (which is quite common, since CCD detectors are more sensitive in the soft band).

7 Gross differences in detector sensitivity, e.g., between different observations or for sources at different off-axis angles, can be accounted for with additional multiplicative factors modifying the background-subtracted counts. This is, however, not statistically meaningful and can lead to unreliable estimates and error bars in the small counts regime. Instead, we apply such scaling factors directly to the source intensities (see eq. [5]).
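For reference, the classical estimates of equations (2)-(4) can be written out in a few lines of code. The sketch below is an illustrative Python translation of those formulae (it is not the BEHR program referenced in footnote 6); the function name and its arguments are our own choices.

```python
import numpy as np

def classical_hardness_ratios(S, H, B_S, B_H, r):
    """Classical (Gaussian) hardness ratios and errors, eqs. (2)-(4).

    S, H     : counts in the soft and hard source regions
    B_S, B_H : counts in the soft and hard background regions
    r        : background-to-source area (exposure) ratio
    """
    gehrels = lambda X: np.sqrt(X + 0.75) + 1.0          # eq. (4)
    sS, sH, sBS, sBH = map(gehrels, (S, H, B_S, B_H))

    soft = S - B_S / r                                   # net soft counts
    hard = H - B_H / r                                   # net hard counts
    vs = sS**2 + sBS**2 / r**2                           # variance of net soft counts
    vh = sH**2 + sBH**2 / r**2                           # variance of net hard counts

    R = soft / hard
    C = np.log10(soft / hard)
    HR = (hard - soft) / (hard + soft)

    sigma_R = R * np.sqrt(vs / soft**2 + vh / hard**2)
    sigma_C = np.sqrt(vs / soft**2 + vh / hard**2) / np.log(10.0)
    sigma_HR = 2.0 * np.sqrt(hard**2 * vs + soft**2 * vh) / (hard + soft)**2
    return (R, sigma_R), (C, sigma_C), (HR, sigma_HR)
```

As discussed above, these expressions break down whenever a background-subtracted count is zero or negative, which is precisely the low-counts regime addressed by the Bayesian treatment below.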



An alternative method based on the median (or some quantile) of the distribution of the energies of the detected counts in a source was suggested by Hong et al. (2004). This method can be very powerful for classifying faint sources, but is not suited for comparisons of spectral properties between observations with different instruments. In this method, because the spectral parameter grids are highly nonlinear and multivalued in quantile-quantile space, it is also necessary that the spectral shape of the source being analyzed be known in order to interpret the numerical values.

We have therefore developed fully Bayesian approaches to appropriately compute hardness ratios and their errors, by properly accounting for the Poissonian nature of the observations. The value of our method is heightened in several astrophysically meaningful cases. In § 2 we describe the model structure we adopt. In § 3 we carry out simulations to compare our method with the classical method. In § 4 we outline various applications of our method; we demonstrate its use in identifying candidate quiescent low-mass X-ray binary sources in a globular cluster and in studying the evolution of a stellar X-ray flare. In § 5 we discuss some nuances of our method, such as the adopted prior distributions, as well as the advantages and limitations. A summary of the method and its advantages is presented in § 6. A detailed mathematical derivation of the posterior probability distributions of the hardness ratios and the computational methods we use to calculate them are described in Appendix A, and a detailed comparison of the effects of priors is described in Appendix B.

2. MODELING THE HARDNESS RATIOS

2.1. The Poisson Case

Hardness ratios are often used to describe faint sources with very few counts; it is not uncommon for either or both of the soft and hard counts to be small enough that the source is undetected in that band. The classical method (§ 1.1) generally fails to produce reliable estimates (or upper and lower limits) in such cases, and indeed, it is sometimes not even possible to apply it, such as when the background is large and simple subtraction results in a negative estimate for the source counts. Thus, instead of using Gaussian assumptions on the detected counts, we directly model them as an inhomogeneous Poisson process. Since the observed counts are the sum of counts due to the source and the background, and because the sum of two independent Poisson random variables is a Poisson random variable with the sum of the two Poisson intensities, we may write8

    S ~ Poisson( e_S (λ_S + ξ_S) );   H ~ Poisson( e_H (λ_H + ξ_H) );          (5)

where S and H are independent data points, λ_S and λ_H are the expected soft and hard source counts intensities, ξ_S and ξ_H are the expected soft and hard background counts intensities, and e_S and e_H are correction factors that take into account variations in effective areas, exposure times, and other instrumental effects.9

The observed background counts are also modeled as independent Poisson random variables,

    B_S ~ Poisson( r e_S ξ_S );   B_H ~ Poisson( r e_H ξ_H );          (6)

where ξ is scaled by a known correction factor r, to account for differences in source and background areas and sensitivities.

We implicitly assume throughout that the observed data are discrete photon events, or counts. The intensities λ and ξ may have other units, determined by how the instrument efficiency is supplied. Thus, if e_S is given in counts s cm² photon⁻¹, then λ_S will have units of photons s⁻¹ cm⁻². Alternately, e_S could describe the exposure time in seconds, in which case λ_S will have units of counts s⁻¹. In the case in which λ_S and λ_H themselves have units of counts, e_S and e_H are dimensionless and usually taken to be unity. This allows us to compute hardness ratios of fluxes and count rates as well as counts. We place no restriction on the units of the model intensities (other than that λ and ξ have the same units); they are entirely determined by the problem at hand. Thus, our analysis is valid for any instrumental configuration or combination, and the only requirement is that the observed data be described by a Poisson distribution.

Given the expected source counts intensities (λ_S and λ_H), it is legitimate for the hardness ratio to be rewritten as

    simple ratio,          R = λ_S / λ_H;
    color,                 C = log10( λ_S / λ_H );
    fractional difference, HR = (λ_H - λ_S) / (λ_H + λ_S);          (7)
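As a concrete illustration of the data-generating model, the following sketch (Python/NumPy; all variable names and the numerical values are our own placeholders) simulates counts from equations (5) and (6) and evaluates the intensity-based hardness ratios of equation (7).

```python
import numpy as np

rng = np.random.default_rng(1)

# Model intensities (here in counts per source area, so e_S = e_H = 1)
lam_S, lam_H = 3.0, 3.0      # source intensities, lambda_S and lambda_H
xi_S, xi_H = 0.1, 0.1        # background intensities, xi_S and xi_H
e_S = e_H = 1.0              # effective-area / exposure correction factors
r = 100.0                    # background-to-source area ratio

# Observed counts, drawn according to eqs. (5) and (6)
S = rng.poisson(e_S * (lam_S + xi_S))
H = rng.poisson(e_H * (lam_H + xi_H))
B_S = rng.poisson(r * e_S * xi_S)
B_H = rng.poisson(r * e_H * xi_H)

# Model-based hardness ratios of eq. (7), defined on the intensities
R = lam_S / lam_H
C = np.log10(lam_S / lam_H)
HR = (lam_H - lam_S) / (lam_H + lam_S)
```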



These quantities are characteristics of the spectrum in question, rather than of the observed data, like the quantities in equation (2). As such, the rewritten hardness ratio is of more direct scientific interest. Note also that while these quantities are defined entirely in terms of expected source intensities in the different passbands, the background is implicitly taken into account via direct modeling, as in equations (5) and (6).

2.2. Bayesian Approach

As a new method of evaluating hardness ratios, we adopt a Bayesian approach. Bayesian analysis is explained in detail in numerous articles in the astronomical literature (e.g., Gregory & Loredo 1992; Kashyap & Drake 1998; van Dyk et al. 2001; Kashyap et al. 2002; Protassov et al. 2002), and here we describe it only in rudimentary terms. In this approach, the prior knowledge for a parameter [the ``prior probability distribution,'' e.g., p(λ_S, ξ_S)] is combined with the information in the observed data [the ``likelihood,'' e.g., p(S, B_S | λ_S, ξ_S)] to derive the probability distribution of the model parameters [the ``posterior distribution,'' e.g., p(λ_S, ξ_S | S, B_S)] via Bayes's theorem,10 i.e.,

    p(λ_S, ξ_S | S, B_S) = p(λ_S, ξ_S) p(S, B_S | λ_S, ξ_S) / ∫∫ p(λ_S, ξ_S) p(S, B_S | λ_S, ξ_S) dξ_S dλ_S;          (8)

8 Note that throughout this paper, Greek letters indicate unobserved quantities and Roman letters denote observed data values (except for R, C, and HR, which are used to represent the hardness ratio model parameters in the Poisson case).
9 Note that in general the effective area changes within an adopted passband, so the chosen values of the correction factor are averages that are calculated after making some prior assumptions about the underlying source spectrum. Their values may change depending on the analysis results.

10 The notation p(A|B) is to be read as ``the probability that A is true given that B is true.'' Thus, except for prior distributions, all the probability distributions that we use are conditional distributions.



and similarly for p(λ_H, ξ_H | H, B_H). Because the posterior distribution in equation (8) needs only to be computed up to a normalizing constant, we often deal with an unnormalized posterior density, i.e., p(λ_S, ξ_S | S, B_S) ∝ p(λ_S, ξ_S) p(S, B_S | λ_S, ξ_S). Bayesian statistical inferences for a parameter are based on the posterior distribution, so we consider its various summary statistics; see § 2.2.1 for details.

Because the underlying likelihood functions are Poisson distributions, we assign so-called conjugate γ-prior distributions for both source and background intensities (van Dyk et al. 2001); a conjugate prior distribution ensures that a posterior distribution follows the same parametric form as the prior distribution, e.g., a γ-distribution is a conjugate prior for a Poisson likelihood.11 That is, we assign independent γ-prior distributions for the source and background counts λ and ξ,

    λ_S ~ γ(ψ_S1, ψ_S2);   ξ_S ~ γ(ψ_S3, ψ_S4);          (9)
    λ_H ~ γ(ψ_H1, ψ_H2);   ξ_H ~ γ(ψ_H3, ψ_H4);          (10)

where the values of ψ are calibrated according to our prior knowledge, or lack thereof, about the parameters; see § 5.1 for our discussion on choosing a prior distribution.

We describe the application of Bayes's theorem and the derivation of the posterior probability distribution functions for the interesting parameters in Appendix A. Here, we note that since λ_S and λ_H are independent of each other, their joint posterior probability distribution is the product of the individual marginal posterior distributions,12 written as (see eq. [A8])

    p(λ_S, λ_H | S, H, B_S, B_H) = p(λ_S | S, B_S) × p(λ_H | H, B_H).          (11)

Then, the posterior distribution of each type of hardness ratio can be computed by transforming λ_S and λ_H to the appropriate variables and marginalizing over the resulting nuisance variable, as follows:

    p(R | S, H, B_S, B_H) dR = dR ∫ dλ_H λ_H p(R λ_H, λ_H | S, H, B_S, B_H);          (12)

    p(C | S, H, B_S, B_H) dC = dC ∫ dλ_H λ_H 10^C ln(10) p(10^C λ_H, λ_H | S, H, B_S, B_H);          (13)

    p(HR | S, H, B_S, B_H) dHR = dHR ∫ dω (ω/2) p( (1 - HR) ω/2, (1 + HR) ω/2 | S, H, B_S, B_H );          (14)

with ω = λ_S + λ_H. The integrals can be computed using either Monte Carlo integration or Gaussian quadrature (throughout this paper, we use the term ``quadrature'' to refer exclusively to the latter). We use both methods of integration, because neither has the clear advantage over the other in our case (see § 5.4 and Appendix A).

11 Formally, these are semiconjugate prior distributions in that they are conjugate to a particular conditional distribution. For example, the γ-prior distribution on λ_S is conjugate to the Poisson distribution of the source counts among the observed soft counts (see § A1). The γ(α, β) distribution is a continuous distribution on the positive real line with probability density function p(X) = [β^α / Γ(α)] X^(α-1) e^(-βX) and has a mean of α/β and a variance of α/β² for α, β > 0.
12 The marginal posterior distribution of interest is computed by integrating the so-called nuisance parameters out of their joint posterior distribution. For example, p(λ_S | S, B_S) = ∫ p(λ_S, ξ_S | S, B_S) dξ_S.
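The posterior distribution in equation (8) can be explored with a Gibbs sampler of the kind derived in Appendix A. The sketch below is a minimal, illustrative single-band version in Python/NumPy (it is not the BEHR code itself), using the standard data-augmentation step of splitting the source-region counts into source and background contributions under the γ priors of equations (9) and (10); the function name, its arguments, and the example counts at the end are our own placeholders.

```python
import numpy as np

def gibbs_band(counts, bkg_counts, e, r, psi=(1.0, 0.0, 0.5, 0.0),
               n_draws=5000, n_burn=1000, seed=0):
    """Minimal Gibbs sampler for one band of the model in eqs. (5)-(6).

    counts, bkg_counts : observed counts in the source and background regions
    e, r               : correction factor e and background area ratio r
    psi                : (psi1, psi2) gamma prior for lambda and (psi3, psi4)
                         gamma prior for xi; defaults are the noninformative
                         choices used in Sec. 3
    Returns an array of posterior draws of the source intensity lambda.
    """
    rng = np.random.default_rng(seed)
    psi1, psi2, psi3, psi4 = psi
    lam, xi = 1.0, 1.0                      # arbitrary starting values
    draws = np.empty(n_draws)
    for i in range(n_burn + n_draws):
        # Data augmentation: how many source-region counts come from the source
        k_src = rng.binomial(counts, lam / (lam + xi))
        # Conjugate gamma updates (numpy parameterizes gamma by shape and scale)
        lam = rng.gamma(psi1 + k_src, 1.0 / (psi2 + e))
        xi = rng.gamma(psi3 + (counts - k_src) + bkg_counts,
                       1.0 / (psi4 + e + r * e))
        if i >= n_burn:
            draws[i - n_burn] = lam
    return draws

# Example with arbitrary counts: posterior draws of the color C of eq. (7)
draws_lam_S = gibbs_band(5, 12, e=1.0, r=100.0, seed=1)
draws_lam_H = gibbs_band(3, 10, e=1.0, r=100.0, seed=2)
draws_C = np.log10(draws_lam_S / draws_lam_H)
```

Retaining only the λ draws marginalizes over the nuisance background intensity ξ (see footnote 12), and transforming the independent λ_S and λ_H draws mirrors the marginalizations in equations (12)-(14).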

2.2.1. Finding Point Estimates and Error Bars

Bayesian statistical inferences are made in terms of probability statements, and thus we consider various summary statistics for a posterior distribution. Here we briefly describe the standard summaries of location and methods of computing error bars; a fuller description along with detailed examples can be found in Park et al. (2006).

Commonly used summaries of location are the mean, median, and mode(s) of the distribution: the posterior mean is the posterior expectation of the parameter, the posterior median is the point that divides the posterior probability evenly such that 50% of the probability resides below and above its value, and the posterior mode is the most likely value of the parameter given the data. When a posterior simulation of size N is obtained with a Monte Carlo sampler (see § A1), the posterior mean is the simple average of the sample draws, and the posterior median is the [N/2]th draw after sorting the draws in increasing order. To compute the posterior mode, we repeatedly bisect the Monte Carlo draws and choose the half with more draws until the range of the chosen half is sufficiently narrow. The midpoint of the resulting narrow bin approximates the posterior mode. With quadrature (see § A2), we can obtain equally spaced abscissas and the corresponding posterior probabilities. Thus, the posterior mean is computed as the sum of the product of the abscissas with their probabilities (i.e., the dot product of the vector of abscissas with the corresponding probability vector). The posterior median is computed by summing the probabilities corresponding to the ordered abscissas one by one until a cumulative probability of 50% is reached. The posterior mode is simply the point among the abscissas with the largest probability.

Unlike with the point estimates above, there is no unique or preferred way to summarize the variation of a parameter. Any interval that encompasses a suitable fraction of the area under the probability distribution qualifies as an estimate of the variation. Of these, two provide useful measures of the uncertainty. The equal-tail posterior interval, which is the central interval that corresponds to the range of values above and below which lies a fraction of exactly α/2 of the posterior probability, is a good measure of the width of the posterior distribution. The highest posterior density (HPD) interval, which is the range of values that contain a fraction 1 − α of the posterior probability and within which the probability density is never lower than that outside the interval, always contains the mode and thus serves as an error bar on it. For a symmetric, unimodal posterior distribution, these two posterior intervals are identical. The equal-tail interval is invariant to one-to-one transformations and is usually easier to compute. However, the HPD interval always guarantees the interval with the smallest length among the intervals with the same posterior probability. Neither of these intervals is necessarily symmetric around the point estimate; i.e., the upper and lower bounds may be of different lengths. For consistency, we refer to such intervals as ``posterior intervals''; others also refer to them as confidence intervals13 or credible intervals.
13 Technically, confidence intervals are defined in terms of how frequently they contain the ``true'' value of the estimand, rather than in terms of a posterior probability distribution.
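The location summaries just described are easy to compute from such a posterior simulation. The following sketch (Python/NumPy; our own helper, not part of BEHR) implements the mean, the median, and the bisection approximation to the mode described above.

```python
import numpy as np

def posterior_summaries(draws, tol=1e-3):
    """Posterior mean, median, and approximate mode from Monte Carlo draws.

    The mode is approximated by repeatedly bisecting the range of the draws
    and keeping the half that contains more draws (Sec. 2.2.1).
    """
    mean = np.mean(draws)
    median = np.median(draws)

    full_range = np.max(draws) - np.min(draws)
    lo, hi = np.min(draws), np.max(draws)
    while (hi - lo) > tol * full_range:
        mid = 0.5 * (lo + hi)
        n_left = np.count_nonzero((draws >= lo) & (draws < mid))
        n_right = np.count_nonzero((draws >= mid) & (draws <= hi))
        if n_left >= n_right:
            hi = mid
        else:
            lo = mid
    mode = 0.5 * (lo + hi)
    return mean, median, mode
```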



Fig. 1.--Comparison of the shape of the posterior distribution for hardness ratios R, C, and HR with the classical Gaussian approximation for high counts (case I; left column) and for low counts (case II; right column). The solid lines represent the posterior distributions of the hardness ratios, and the dashed lines, a Gaussian distribution with the mean and standard deviation equal to the classical estimates. A prior distribution that is flat on the real line (ψ = 1; see § 5.1) is adopted for the Bayesian calculations. Note that as the number of counts increases, the two distributions approach each other. At low counts the two distributions differ radically, with the classical distributions exhibiting generally undesirable properties (broad and extending into unphysical regimes).

For a Monte Carlo simulation of size N, we compute either the equal-tail interval or an interval that approximates the HPD interval. (Unless otherwise stated, here we always quote the equal-tail interval for the Monte Carlo method and the HPD interval for the quadrature.) The equal-tail posterior interval in this case is computed by choosing the [(α/2)N]th and the [(1 − α/2)N]th draws as the boundaries. An approximate HPD interval is derived by comparing all intervals that consist of the [X]th and the [X + (1 − α)N]th draws and choosing the X that gives the shortest length among them. When the posterior density is computed by the quadrature, we split the parameter space into a number of bins and evaluate the posterior probability at the midpoint of each bin. In this case, a 100(1 − α)% HPD interval can be computed by beginning with the bin with the largest posterior probability and adding additional bins down to a given value of the probability density until the resulting region contains at least a fraction 1 − α of the posterior probability.
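The two interval estimates can likewise be read off a set of sorted draws; a minimal sketch in Python/NumPy (our own helper functions) follows.

```python
import numpy as np

def equal_tail_interval(draws, alpha=0.05):
    """Central (equal-tail) posterior interval from Monte Carlo draws."""
    return np.quantile(draws, [alpha / 2.0, 1.0 - alpha / 2.0])

def approx_hpd_interval(draws, alpha=0.05):
    """Approximate HPD interval: the shortest interval containing a fraction
    1 - alpha of the sorted draws (Sec. 2.2.1)."""
    x = np.sort(draws)
    n_keep = int(np.floor((1.0 - alpha) * len(x)))
    widths = x[n_keep:] - x[:len(x) - n_keep]
    i = np.argmin(widths)
    return x[i], x[i + n_keep]
```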

3. VERIFICATION: SIMULATION STUDY

In order to compare the classical method with our Bayesian method, we carried out a simulation study to calculate coverage rates of the classical and posterior intervals. Given prespecified values of the parameters, source and background counts were generated and then used to construct 95% classical and posterior intervals of each hardness ratio using the methods discussed in this article. From the simulated data we calculated the proportion of the computed intervals that contain the true value of the corresponding hardness ratio. (This is the coverage rate of the classical and probability intervals.) In the ideal case, 95% of the simulated intervals would contain the true value of each hardness ratio. Besides the coverage rate, the average length of the intervals and the mean square error of the point estimates were also computed and compared. The mean square error of a point estimate θ̂ of θ is defined as the sum of the variance and squared bias of the estimator, i.e., MSE(θ̂) = E[(θ̂ − θ)²] = Var(θ̂) + [E(θ̂) − θ]². A method that constructs shorter intervals with the same coverage rate and produces a point estimate with a lower mean square error is generally preferred.


TABLE 1
Statistical Properties of Bayesian and Classical Methods

                                                                                Mean Square Error^a
Case                  Method              Hardness Ratio  Coverage Rate (%)  Mean Length  Of Mode   Of Mean
-----------------------------------------------------------------------------------------------------------
Case I (high counts)  Classical method    R               95.0               1.24         0.065     ...
                                          C               99.0               0.54         0.013     ...
                                          HR              98.5               0.61         0.016     ...
                      Monte Carlo method  R               96.0               1.07         0.056*    0.069
                                          C               96.0               0.44         0.012*    0.012
                                          HR              96.0               0.49         0.016     0.015*
                      Quadrature          R               94.5               1.03         0.057*    0.069
                                          C               96.0               0.43         0.012*    0.012
                                          HR              94.5               0.49         0.016     0.015*
Case II (low counts)  Classical method    R               97.5               192.44       93.27     ...
                                          C               100.0              6.02         0.27      ...
                                          HR              100.0              3.70         0.21      ...
                      Monte Carlo method  R               98.0               15.77        0.328*    85.482
                                          C               98.0               1.54         0.078*    0.113
                                          HR              98.0               1.26         0.181     0.083*
                      Quadrature          R               97.0               8.18         0.394*    20.338
                                          C               99.5               1.51         0.074*    0.112
                                          HR              95.0               1.23         0.187     0.083*

a The quantity with the lower mean square error is marked with an asterisk; this shows that the mode is the robust estimator for the color and the simple ratio, while the mean is preferred for the fractional difference. (The classical method yields a single point estimate, so only one mean square error is listed for it.)

The entire simulation was repeated with different magnitudes of the source intensities, λ_S and λ_H. Intrinsically, we are interested in the following two prototypical cases:

    case I, hardness ratios for high counts, i.e., λ_S = λ_H = 30; ξ_S = ξ_H = 0.1;          (15a)
    case II, hardness ratios for low counts, i.e., λ_S = λ_H = 3; ξ_S = ξ_H = 0.1.          (15b)

In both cases we adopt a background area to source area ratio of r = 100 (see eq. [6]); i.e., we take the observed counts in the background region to be 10. Note that these are written with reference to the counts observed in the ``source region,'' i.e., the units of λ_S, λ_H, ξ_S, and ξ_H are all counts (source area)⁻¹, and that we have set e_S = e_H = 1 here. The actual extent of the source area is irrelevant to this calculation.

This simulation study illustrates two typical cases, i.e., high-counts and low-counts sources: case I represents high-counts sources, for which Poisson assumptions tend to agree with Gaussian assumptions, and case II represents low-counts sources, where the Gaussian assumptions are inappropriate. In case I, the posterior distributions of the hardness ratios agree with the corresponding Gaussian approximation of the classical method, but in case II, the Gaussian assumptions made in the classical method fail. This is illustrated in Figure 1, where we compare the two methods for a specific and particular simulation: we assume S = H = 30 in case I and S = H = 3 in case II, compute the resulting posterior distributions of hardness ratios using the Bayesian method, and compare them to a Gaussian distribution with a mean and standard deviation equal to the classical estimates. The right panels in Figure 1 show that there is a clear discrepancy between the two methods.

In both cases we adopt a background area to source area ratio of r ? 100 (see eq. [6]); i.e., we take the observed counts in the background region to be 10. Note that these are written with reference to the counts observed in the ``source region,'' i.e., the units of kS , kH , S ,and H are all counts (source area)À1,and that we have set eS ? eH ? 1 here. The actual extent of the source area is irrelevant to this calculation. This simulation study illustrates two typical cases, i.e., highcounts and low-counts sources: case I represents high-counts sources for which Poisson assumptions tend to agree with Gaussian assumptions, and case II represents low-counts sources, where the Gaussian assumptions are inappropriate. In case I, the posterior distributions of the hardness ratios agree with the corresponding Gaussian approximation of the classical method, but in case II, the Gaussian assumptions made in the classical method fail. This is illustrated in Figure 1, where we compare the two methods for a specific and particular simulation: we assume S ? H ? 30 in case I and S ? H ? 3 in case II, compute the resulting posterior distributions of hardness ratios using the Bayesian method, and compare them to a Gaussian distribution with a mean and standard deviation equal to the classical estimates. The right panels in Figure 1 show that there is a clear discrepancy between the two methods. In order to compare the statistical properties of estimates based on the Bayesian and classical methods, we simulate test

In order to compare the statistical properties of estimates based on the Bayesian and classical methods, we simulate test data under both case I and case II. Table 1 presents the results of 200 simulated sources for each of the two cases. The data are used to compute point estimates and 95% error bars by using the classical method, the Monte Carlo sampler, and numerical integration by quadrature. The Bayesian methods use noninformative prior distributions on λ and ξ; in particular, λ ~ γ(1, 0) and ξ ~ γ(0.5, 0) (see § 5.1). The results of case I indicate that all three methods are comparable, although the quadrature yields slightly shorter intervals and point estimates with a smaller mean square error. Figure 2 illustrates that the resulting intervals from the methods are almost identical. On the other hand, the results of case II appear in Figure 3 and confirm that the Bayesian methods outperform the classical method, because the mean lengths of the classical intervals are wider than the posterior intervals, and the classical estimates exhibit a much larger mean square error. For example, the hardness ratio HR by definition is limited to the range [-1, +1], so that the maximum length of its intervals should be 2. However, the mean length of the simulated classical intervals is 3.7, which is clearly larger than needed.

Since quadrature tends to yield estimates with smaller mean square errors and shorter posterior intervals while maintaining high coverage rates, it is generally preferred to the Monte Carlo method. However, this comes at a price: because the summation inside the posterior density is over the detected source counts, quadrature becomes computationally more expensive as the source counts increase; on the other hand, the Monte Carlo method is much faster regardless of the number of source counts. Thus, we recommend using quadrature with relatively low counts data (e.g., S < 20 or H < 20) and the Monte Carlo method with higher counts.

Because the Bayesian inference for each hardness ratio is based on its posterior distribution, we consider two point estimates: the posterior mode and the posterior mean. Table 1 presents a comparison of the two proposed estimates in terms of their mean square errors. In the case of HR, the posterior mean seems to be more robust to the shape of a posterior distribution than the posterior mode, because both the left and right limits of HR are bounded. For R and C, on the other hand, the posterior mode appears to perform better. Thus, the posterior mode is preferable for both R and C, and the posterior mean for HR, as shown in Table 1.
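A compact version of this coverage experiment can be scripted directly. The sketch below (Python/NumPy) reuses the hypothetical gibbs_band and equal_tail_interval helpers sketched earlier and checks the coverage rate and mean length of the 95% posterior interval of the color C under case II; the loop sizes and chain lengths are illustrative, not those used for Table 1.

```python
import numpy as np

rng = np.random.default_rng(42)
lam_true = 3.0                      # case II of eq. (15b): lambda_S = lambda_H = 3
xi_true, r, n_sim, alpha = 0.1, 100.0, 200, 0.05
C_true = 0.0                        # log10(lambda_S / lambda_H) for equal intensities

covered, lengths = 0, []
for _ in range(n_sim):
    S = rng.poisson(lam_true + xi_true)
    H = rng.poisson(lam_true + xi_true)
    B_S = rng.poisson(r * xi_true)
    B_H = rng.poisson(r * xi_true)
    draws_lS = gibbs_band(S, B_S, e=1.0, r=r, n_draws=2000, n_burn=500,
                          seed=int(rng.integers(10**9)))
    draws_lH = gibbs_band(H, B_H, e=1.0, r=r, n_draws=2000, n_burn=500,
                          seed=int(rng.integers(10**9)))
    lo, hi = equal_tail_interval(np.log10(draws_lS / draws_lH), alpha)
    covered += (lo <= C_true <= hi)
    lengths.append(hi - lo)

print("coverage:", covered / n_sim, " mean length:", np.mean(lengths))
```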



Fig. 2.--Simulated range of hardness ratios calculated using three different methods in the high-counts case (case I; see eq. [15a]), the classical method (left), Monte Carlo method (middle), and quadrature (right), and for the three types of hardness ratios, R (top), C (middle), and HR (bottom). The horizontal lines are the 95% intervals computed for each simulated set of counts, and the vertical white line in each panel represents the true value of the hardness ratio. Note that all the different methods exhibit similar performances in this case.

4. APPLICATIONS

4.1. Quiescent Low-Mass X-Ray Binaries

Low-mass X-ray binaries (LMXBs) are binary systems composed of a neutron star or black hole and a low-mass (<1-2 M_⊙) donor. These binaries are generally transient sources and spend most of their time in quiescence (qLMXBs). However, even when not active they emit X-rays due to thermal emission from the surface of the neutron star, and sometimes due to an additional

hard component associated with residual low-level accretion (e.g., Campana et al. 1998; Brown et al. 1998). Identification and study of these quiescent systems is very important, since they provide a direct census of the overall LMXB population (and therefore the population of neutron stars) and also constrain their duty cycles. Although only the identification of their optical counterparts can unambiguously classify an X-ray source as a qLMXB, soft X-ray spectra or hardness ratios consistent with those expected for thermal emission by neutron stars are very useful for identifying candidate qLMXBs (Heinke et al. 2003). We apply the methods described in § 2 to a long (40 ks) Chandra ACIS-S (Advanced CCD Imaging Spectrometer) observation of the globular cluster Terzan 5 (ObsID 3798; PI: R. Wijnands).



Fig. 3.--Same as Fig. 2, but for a simulated range of hardness ratios in the low-counts case (case II; eq. [15b]). In this low-counts case, the Bayesian methods dramatically outperform the classical method.

A total of 49 sources were detected within the half-mass radius of the cluster (for details on the observation, reduction, and analysis, see Heinke et al. 2006). We performed the source photometry in the 0.5-2.0 keV (soft) and 2.0-6.0 keV (hard) bands. In Figure 4 we present a color-luminosity diagram of all the detected sources (note that the color here is scaled by a factor of 2.5 to be similar to the definition of an optical color such as B - V). The posterior modes of the color for all detected sources, computed here using the Jeffreys prior (ψ = 1/2), along with the 95% highest posterior density intervals, are shown. Those sources that are not amenable to analysis via the classical method (weak sources with background-subtracted source counts in at least one band being <5) are marked with open circles, while the remainder are marked with filled circles.

Thermally emitting neutron stars are expected to have X-ray colors in the range [0.5, 2.5] (indicated by dashed lines in the figure). The softer limit (dashed curve to the right; color ≈ 2) corresponds to a purely thermal emission spectrum, and the harder limit (dashed curve to the left; color ≈ 0.5) corresponds to a 20% contribution from a power law of index 1.5. From this figure it is clear that the Bayesian method, apart from providing more accurate confidence intervals for the source colors, allows us to estimate hardness ratios for the sources that were undetected in one of the two bands. We find 15 sources whose error bars overlap this region and are thus candidate qLMXBs. Note that eight of these candidates could not have been so classified using the classical method, because there are too few source counts. Conversely, many sources that lie at the edge of the region of interest would have been mistakenly classified as candidate qLMXBs with the classical method if the overly large error bars it produces had been used (see discussion of coverage rates in Table 1).



Fig. 4.--X-ray ``color-magnitude'' diagram for the globular cluster Terzan 5. For each of the detected sources, a form of the color, C = log10(λ_S/λ_H), scaled by a factor 2.5 for similarity with optical colors, is plotted along the abscissa, and the log10(luminosity [ergs s⁻¹]) in the 0.5-6 keV band, along the ordinate. The modes of the posterior probability functions (computed using quadrature) for all sources are shown, along with the 95% confidence intervals (the error bars on the luminosity are not plotted). The sources that would result in unreliable estimates if the color were computed with the classical method (background-subtracted source counts <5 in at least one band) are marked with open circles, and the remainder are marked with filled circles. The dashed lines represent the color-luminosity tracks expected for neutron stars with blackbody spectra of different temperatures [the open triangles correspond to temperatures of log10(T [K]) = 6.3, 6.2, 6.1, and 6.0 from top to bottom] and varying contributions from a power-law component (slope Γ = 1.5; pure blackbody for the rightmost curve, and 10% and 20% contributions from a power-law component in the 0.5-6 keV band luminosity for the middle and left curves, respectively). For more details on these spectral models, see Heinke et al. (2006, in preparation). [See the electronic edition of the Journal for a color version of this figure.]
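In code, the candidate selection described above amounts to checking whether a source's color interval overlaps the expected thermal band. A minimal sketch follows, assuming Python/NumPy and posterior intensity draws from a sampler like the earlier gibbs_band sketch; an equal-tail interval is used here for brevity (the figure quotes HPD intervals), and the [0.5, 2.5] acceptance range is the one quoted in the text.

```python
import numpy as np

def is_qlmxb_candidate(draws_lam_soft, draws_lam_hard, alpha=0.05,
                       color_range=(0.5, 2.5)):
    """Flag a source as a candidate qLMXB when the posterior interval of its
    scaled color, 2.5 * log10(lambda_S / lambda_H), overlaps color_range."""
    color = 2.5 * np.log10(draws_lam_soft / draws_lam_hard)
    lo, hi = np.quantile(color, [alpha / 2.0, 1.0 - alpha / 2.0])
    return (hi >= color_range[0]) and (lo <= color_range[1])
```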

4.2. Evolution of a Flare

In the previous section (§ 4.1) we demonstrated the usefulness of the full Bayesian machinery in classifying weak sources in comparison to the classical method. Here, we go further and show that new avenues of analysis and interpretation are opened up when we take advantage of the full probabilistic description available to us. Such analysis methods are difficult to implement correctly and generally cannot be justified under the classical method. As a specific illustrative example, we consider the time evolution of a large X-ray flare on a low-mass dwarf, Ross 154 (Wargelin et al. 2006).

Stellar coronae consist of magnetically confined hot plasma (T ~ 1-10 MK) in the outer atmospheres of late-type stars and emit optically thin thermal emission composed of bremsstrahlung continuum and atomic line emission components. Detailed analysis of these line-rich spectra yields valuable clues regarding the temperature and emission structure on the star, its composition, and the processes that drive coronal heating (see, e.g., Rosner et al. 1985). Of special importance are flares, which result from impulsive energy input into the corona by the reconnection of highly strained magnetic flux tubes and provide additional information on the dynamics of the corona. Analysis of flare

spectra during decay yields constraints on the physical site and size of the flare loops and on the heating process (see, e.g., van den Oord & Mewe 1989; Pallavicini et al. 1990; Serio et al. 1991; Schmitt & Favata 1999).

A useful technique developed explicitly to analyze hydrodynamically evolving loops is to track their evolution in density-temperature space (Reale et al. 1993, 1997, 2004). This makes possible the modeling of flares to derive physically meaningful parameters such as the length of the loop and the heating function. Unfortunately, a comprehensive analysis along these lines requires that complex model spectra be fit to the data at each stage of the flare evolution, and because fitting requires a large number of counts (generally >500) to produce reliable results, the resolution at which the flare evolution can be studied is strongly limited. This is especially so during the rising phase of flares, where the physically most interesting processes are of short duration and the observed counts are also small. However, hardness ratios can be constructed at relatively finer time binning resolution and can be used as a proxy for the spectral fit parameters. In the following, we apply the Bayesian method described above (see § 2) to Chandra data of a stellar X-ray flare and demonstrate its usefulness.

A large flare was observed on Ross 154, a nearby (2.93 pc) active M dwarf (M4.5 Ve), during a 57 ks observation with the Chandra ACIS-S detector (ObsID 2561; PI: B. Wargelin). During the flare, the counting rate increased by over a factor of 100. Because of strong pileup in the core of the PSF (point-spread



Fig. 5.--Top: Light curves of Ross 154. The observed count rate in the source is shown as a function of time since the beginning of the observation, for counts in the soft (dotted histogram), medium (dashed histogram), and hard (dash-dotted histogram) bands and the total (solid histogram). A bin size of 150 s was chosen for convenience, and the sequence of bins is marked along the light curve for future reference. Bottom: Hardness ratio evolution of Ross 154. The color, C = log10(λ_S/λ_H), is plotted for a bin size of Δt = 150 s (solid histogram) and Δt = 40 s (dotted histogram), along with the associated error bars. Larger values of C indicate a softer spectrum, and vice versa. [See the electronic edition of the Journal for a color version of this figure.]

function), only those data within an annular region surrounding the center can be used. (Details of the reduction and analysis are given in Wargelin et al. 2006.) The light curve of the source during the flare is shown in Figure 5, along with light curves for smaller passbands (soft, 0.25-1 keV; medium, 1-2 keV; and hard, 2-8 keV). Note that the hard-band light curve peaks at an earlier time than the softer bands, an impression that is borne out by the evolution of the color, C (Fig. 5, bottom). The same data can also be represented in a color-intensity diagram, as in Figure 6, which shows scatter plots of the color and intensity at each time bin along with their computed errors (connected by a thin line to indicate the time sequence). The advantage of doing so is that the color is inversely correlated with the plasma temperature, and the intensity approximately tracks the plasma density.14 A hysteresis-like evolution is discernible in the color-intensity track, even at a time resolution as small as 40 s. However, the error bars on the data points are large enough to obscure this structure at small time bin sizes, whereas at larger bin sizes, the number of data bins is smaller and it becomes difficult to discern the totality of the evolutionary track, even though the pattern is persistent as bin size and starting times are varied. Mainly, the choices of the bin size and the phase (i.e., the starting point) are completely arbitrary, and no single choice can be properly justified. However, Bayesian
14 X-ray luminosity L_X ∝ EM × P(T), where EM = n_e² V is the emission measure of coronal plasma with an electron density n_e occupying a volume V, and P(T) is the power emitted by the plasma at temperature T. The emissivity P(T) is a weak function of T over the temperature range 6-20 MK expected in this source, and assuming that the volume of emission remains constant, the observed intensity is proportional to the square of the electron density. For a more detailed modeling of the L_X-T conversion, see Wargelin et al. (2006).

analysis provides a simple method to work around this difficulty, by simply considering the bin size as a nuisance parameter of the problem and marginalizing over it. Thus, in order to ameliorate the effects of choosing a single time bin size and a starting time, we have carried out a series of calculations using the Monte Carlo method: we obtain different light curves by varying the phase of the binning (i.e., the starting time), and for each light curve thus obtained, we obtain 2000 samples of (λ_S, λ_H) for each data point. We hold the time bin size fixed during this process. After cycling through all the phases, the samples of (λ_S, λ_H) are all combined to form a single image (the shaded images in Fig. 6). This procedure, also called ``cycle spinning'' (Esch 2003), allows us to discern the pattern of the evolutionary track clearly. The longer the source spends at any part of the color-intensity diagram, the darker the shade in the images, and the statistical error in the calculation is reflected by the gradient of the shading. Furthermore, all the cycle-spun images thus constructed, for bin sizes ranging from 40 to 400 s, were then averaged to produce a single coherent image, thus effectively marginalizing over the bin sizes; such an image includes all the errors associated with the analysis (Fig. 7). Note that this type of analysis is not possible using the classical method. For instance, in the Bayesian case the samples are drawn from the posterior probability distributions of λ_S and λ_H. No such sampling technique is available in the classical case,15 and naively using the Gaussian approximations as in equation (4) will overestimate the errors and wash out the signal (see § 3).

15 New counts realizations can be obtained by bootstrapping from the observed counts, but such sampling will be quantized and will render the shaded images unusable at lower count intensities.



Fig. 6.--Color-intensity evolutionary tracks. Spectral hardness increases to the left, and source brightness increases toward the top. The paired color and intensity points (shown separately in Fig. 5) are plotted for bin sizes of 40 s (left) and 150 s (right). The error bars for each quantity are also plotted as horizontal and vertical bars. The points are connected by a thin solid line to aid in visualization of the temporal sequence (see also Fig. 7). Samples of (λ_S, λ_H) obtained from their posterior probability distributions for each bin and for various starting time phases are displayed as the shaded image underneath the evolutionary tracks. [See the electronic edition of the Journal for a color version of this figure.]

Furthermore, the concept of marginalizing over the nuisance parameter of time bin sizes has no analog in the classical system and cannot be justified, and cycle spinning leads to correlations between the estimates at different bins since the statistical independence of the data will be lost.
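A schematic version of the cycle-spinning procedure is sketched below in Python/NumPy, again reusing the hypothetical gibbs_band sampler; the event lists, band definitions, and grid choices are placeholders rather than the actual Ross 154 reduction.

```python
import numpy as np

def cycle_spun_image(t_soft, t_hard, t_bkg_soft, t_bkg_hard, r,
                     bin_size, t_range, n_phases=10, n_draws=500, bins=100):
    """Accumulate posterior draws of (color, intensity) over binning phases.

    t_soft, t_hard         : photon arrival times in the soft and hard bands
    t_bkg_soft, t_bkg_hard : arrival times in the background region, per band
    r                      : background-to-source area ratio
    """
    rng = np.random.default_rng(0)
    color, intensity = [], []
    for phase in np.linspace(0.0, bin_size, n_phases, endpoint=False):
        edges = np.arange(t_range[0] + phase, t_range[1], bin_size)
        for lo, hi in zip(edges[:-1], edges[1:]):
            S = np.count_nonzero((t_soft >= lo) & (t_soft < hi))
            H = np.count_nonzero((t_hard >= lo) & (t_hard < hi))
            B_S = np.count_nonzero((t_bkg_soft >= lo) & (t_bkg_soft < hi))
            B_H = np.count_nonzero((t_bkg_hard >= lo) & (t_bkg_hard < hi))
            lS = gibbs_band(S, B_S, e=1.0, r=r, n_draws=n_draws,
                            seed=int(rng.integers(10**9)))
            lH = gibbs_band(H, B_H, e=1.0, r=r, n_draws=n_draws,
                            seed=int(rng.integers(10**9)))
            color.append(np.log10(lS / lH))
            intensity.append(lS + lH)
    img, _, _ = np.histogram2d(np.concatenate(color),
                               np.concatenate(intensity), bins=bins)
    return img
```

Averaging the images returned for several bin sizes (e.g., 40-400 s) then marginalizes over the bin size, which is how the image in Figure 7 is constructed.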

A sharp initial hardening of the spectrum, coincident with the onset of a small optical flare (Wargelin et al. 2006), is visible at the beginning of the time sequence (points 1-3; see Figs. 5 and 6). A point worth noting is that the Bayesian analysis prevents overinterpretation of the results: at time point 3, the standard analysis indicates that the spectrum is at its hardest, suggesting either a nonthermal origin to the emission or significant transient absorption, possibly due to a coronal mass ejection, but the shaded image demonstrates that the statistical errors are sufficiently large that a more mundane low-density thermal expansion is quite plausible. This stage is followed by a rapid softening (points 3-8), interpreted as the thermalization of the nonthermal hard X-ray flare. Then the star, which lies in the lower central portion of the color-intensity diagram before the flare, moves to the upper left as the flare is set off (the spectrum hardens as intensity increases; points 8-11), turns right at flare peak (softens as the deposited energy cascades to lower temperatures; points 11-12), and then moves down the right flank back to its original state (decays in temperature and intensity; points 12-19). The physical consequences of this analysis are discussed in detail by Wargelin et al. (2006).

5. DISCUSSION

5.1. Informative and Noninformative Prior Distributions

A major component of Bayesian analysis is the specification of a prior distribution, the probability distribution assigned a priori to the parameters defining the model. This is often deemed controversial because of the apparent subjectivity in the assignment: if different people assign different priors and obtain different results, how is it possible to evaluate them? The answer

Fig. 7.--Ross 154 color-intensity track, same as Fig. 6, but for an image obtained by averaging cycle-spun images at various bin sizes ranging from 40 to 400 s. The persistence of the evolutionary track of the flare on Ross 154 is quite clear in this visualization. The points corresponding to each bin of the light curve obtained at a bin size of 150 s are superimposed on the image, with a white solid line connecting them in temporal sequence. A clear pattern of the flaring plasma evolution can be seen, first rising in temperature and intensity, softening at the peak, and then decaying in temperature and intensity. [See the electronic edition of the Journal for a color version of this figure.]


No. 1, 2006

HARDNESS RATIOS IN POISSON REGIME 5.2. R versus C versus HR

621

to this conundrum is to realize that all problems always include a bias brought to the analysis by the researcher, either in the choice of statistic, or even in the choice of analysis method, and the specification of a prior distribution codifies and makes explicit this bias. Furthermore, if indeed prior information about the value or the range of the parameter is available, the prior allows a simple and principled method by which this information can be included in the analysis. If no prior information is available, a so-called noninformative prior distribution must be chosen in order to not bias the result. In our case, if there is a strong belief as to the value or range of the hardness ratio, we can incorporate this via an informative prior distribution, encoded as a γ-distribution (see van Dyk et al. 2001). In addition, the analysis of previously acquired data will produce a posterior probability distribution that can be used as an informative prior on future observations of the same source.

When no prior information is available, we normally use a noninformative prior distribution. Since a Poisson intensity is necessarily positive, three types of noninformative prior distributions immediately present themselves when X | λ ~ Poisson(λ):

1. a noninformative prior distribution on the original scale,
    p(λ) ∝ 1;          (16a)

2. a Jeffreys noninformative prior distribution,
    p(λ) ∝ I(λ)^(1/2);          (16b)

3. a noninformative prior distribution under a log transformation,
    p(log λ) ∝ 1, or equivalently p(λ) ∝ 1/λ;          (16c)

where I(λ) = E[−∂² log p(X|λ)/∂λ² | λ] is the expected Fisher information (Casella & Berger 2002). The first noninformative prior distribution (eq. [16a]) is flat from 0 to ∞, the second (eq. [16b]) is proportional to the square root of the Fisher information, and the third (eq. [16c]) is flat on the whole real line under a log transformation. The functional forms of these prior distributions can be generalized into p(λ) ∝ λ^(ψ−1), i.e., λ ~ γ(ψ, 0), where ψ is an index that varies from 0 to 1: the first corresponds to ψ = 1, the second to ψ = 1/2, and the third to ψ = 0+ (where this notation indicates that ψ > 0, but arbitrarily close to zero, for reasons described below).

We note that these noninformative prior distributions are ``improper,'' i.e., are not integrable. If an improper prior distribution causes a posterior distribution to be improper as well, then no inferences can be made using such nonintegrable distributions. In our case, as long as ψ is strictly positive, the posterior distribution remains ``proper,'' i.e., integrable. Hence, in the third case, we adopt values of ψ that are strictly positive but close in value to 0, e.g., ψ = 0.01 or ψ = 0.1. Note that while these prior distributions are noninformative, in the sense that in most cases they do not affect the calculated values of the hardness ratios (see Appendix B), they do codify some specific assumptions regarding the range of values that is believed possible. For instance, if a large dynamic range is expected in the observations, a flat prior distribution in the log scale is more appropriate than a flat prior distribution in the original scale. The choice of the prior distribution is dictated by the analysis and must be reported along with the results.
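To see what these indices imply in practice, the short sketch below (Python/NumPy, our own illustration) draws from the posterior of a faint source under each choice of ψ. For simplicity it ignores background, in which case a γ(ψ, 0) prior combined with X ~ Poisson(λ) (and e = 1) gives a γ(X + ψ, 1) posterior for λ.

```python
import numpy as np

rng = np.random.default_rng(7)
S, H = 2, 1                      # observed counts of a very faint source, no background

for psi in (0.01, 0.5, 1.0):     # psi = 0+, the Jeffreys prior, and the flat prior
    # With a gamma(psi, 0) prior and X ~ Poisson(lambda), the posterior of
    # lambda is gamma(X + psi, 1); numpy's gamma takes (shape, scale).
    lam_S = rng.gamma(S + psi, 1.0, size=20000)
    lam_H = rng.gamma(H + psi, 1.0, size=20000)
    HR = (lam_H - lam_S) / (lam_H + lam_S)
    lo, hi = np.quantile(HR, [0.025, 0.975])
    print(f"psi = {psi:4.2f}:  HR mean = {HR.mean():+.2f}, "
          f"95% interval = ({lo:+.2f}, {hi:+.2f})")
```

At such low counts the HR interval visibly widens as ψ decreases, consistent with the behavior shown in Figure 8; at higher counts the three priors give nearly identical results.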

5.2. R versus C versus HR

At low counts, the posterior distribution of the counts ratio, R, tends to be skewed because of the Poissonian nature of the data; R only takes positive values. The color, C = log10 R, is a log transformation of R, which makes the skewed distribution more symmetric. The fractional difference hardness ratio, HR = (1 − R)/(1 + R), is a monotonically decreasing transformation of R, such that HR → +1 as R → 0 (i.e., a source gets harder) and HR → −1 as R → ∞ (i.e., a source gets softer). The monotonic transformation results in a bounded range of [−1, +1] for HR, thereby reducing the asymmetry of the skewed posterior distribution. The values of R and HR are bounded on one side or two sides, while C is unbounded due to the log transformation.

The posterior distribution of any hardness ratio becomes more symmetric as both soft and hard source intensities increase. Regardless of the size of the intensities, however, the color has the most symmetric posterior distribution among the popular definitions of a hardness ratio. Figure 8 illustrates the effect of the magnitude of the source intensities on the symmetry of the posterior distribution of each hardness ratio; the posterior distribution of C is confirmed to have the most symmetric posterior distribution. In the figure, we fix R = 2, and the soft and hard intensities are determined by beginning with λ_S = 2 and λ_H = 1 [in units of counts (source area)⁻¹] and increasing the intensities by a factor of 5 in each subsequent column. We assume no background contamination in each simulation.

Figure 8 also illustrates the effect of the use of different indices ψ for the noninformative prior distribution on the resulting posterior distribution of each hardness ratio. In the case of R, the posterior distribution becomes diffuse and more skewed toward 0 as the value of ψ decreases; as expected, this is also the case for C. In the case of HR, the posterior distribution has more mass near the boundaries (i.e., ±1) as the value of ψ decreases, thereby exhibiting a U-shaped density in the case of very faint sources. As the intensities increase, however, the choice of a prior distribution has less effect on the posterior distribution of any of the hardness ratios. Finally, as pointed out in § 3, for R and C the mode of the posterior probability distribution is a robust estimator, whereas for HR the mean of the distribution is a better choice. (Notice that the posterior mode of HR in the low-count scenario is −1 when ψ = 0+ in Fig. 8.)
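Because C and HR are deterministic, monotonic transformations of R, posterior draws of one quantity immediately yield draws of the others; a brief Python/NumPy illustration (the draws here are arbitrary placeholders):

```python
import numpy as np

# draws_R is a placeholder for posterior draws of R = lambda_S / lambda_H
draws_R = np.random.default_rng(3).gamma(2.5, 1.0, size=10000)
draws_C = np.log10(draws_R)                     # color
draws_HR = (1.0 - draws_R) / (1.0 + draws_R)    # fractional difference
```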
5.3. Advantages

A significant improvement that our method provides over the classical way of computing hardness ratios is that we use the correct Poisson distribution throughout and do not make any limiting (high counts) Gaussian assumptions. Thus, while the classical method works well only with high-counts data and fails to give reliable results with low counts, our method is valid in all regimes. Indeed, because the observed counts are nonnegative integers, it is not appropriate to model low counts with Gaussian distributions that are defined on all the real numbers. Because our methods are based on Poisson assumptions in a fully model-based statistical approach, we need not rely on plug-in estimates of the hardness ratio. Instead, we can evaluate the full posterior probability distribution of the hardness ratio, which provides reliable estimates and correct confidence limits even when either or both of the soft and hard counts are very low.

In particular, our method is not limited to ``detectable'' counts and can properly calculate upper and lower bounds in cases in which the source is detected in only one passband. For high counts, our method gives more precise error bars for the hardness ratios than the classical method (as defined by the coverage; see Table 1 and Appendix A), although both methods yield similar results. Further, a priori information can be embedded into our model and can be updated by data, thereby producing more accurate results as we collect more data.

Fig. 8.--Posterior distributions of hardness ratios calculated for R (top row), C (middle row), and HR (bottom row), with different source intensities (low at left and high at right) and prior distributions. The different curves in each panel represent the posterior probability distributions computed with different noninformative prior distribution indices, i.e., ψ = 0+ (solid curve), ψ = 1/2 (dashed curve), and ψ = 1 (dotted curve). At higher counts (large Poisson intensities; right column), the posterior distributions tend to be symmetric and the effect of the prior distributions is minimal; i.e., the posterior distributions with different indices are nearly the same. At small counts (small Poisson intensities; left column), the nonsymmetric shape of the posterior distribution for each hardness ratio becomes clear, as does the effect of the choice of a noninformative prior distribution. [See the electronic edition of the Journal for a color version of this figure.]

5.4. Limitations

Unlike the quantile-width method (Hong et al. 2004), the calculation of hardness ratios is limited by necessity to preselected nonoverlapping passbands. In general, we must use different band definitions in order to achieve the maximum discriminating power for soft and hard sources. A priori, with no knowledge of the character of the source populations, there is no way to ensure that the chosen passbands are the most efficient for splitting the spectrum and obtaining maximum leverage to distinguish spectral model parameters. However, this restriction allows for a more uniform analysis and comparison of results across different instruments and telescopes.

Because the Bayesian method does not allow for a simple analytic solution similar to standard error propagation in the Gaussian case, the computational methods used to determine the probability distributions become important. We have implemented a Monte Carlo integration method (the Gibbs sampler; see § A1) and a method based on numerical integration (quadrature; see § A2). The Gibbs sampler is based on Monte Carlo simulation and hence causes our estimates to have simulation errors in addition to their true variability; to reduce the levels of simulation errors, the Markov chain must be run for many iterations. On the other hand, the quadrature precisely computes the posterior distribution as long as the number of bins is large enough; however, its computation becomes expensive as the counts become large because of the binomial expansion in equation (A5). In general, the Gibbs sampler is much faster than the method based on quadrature, but care must be taken to ensure that the number of iterations is sufficient to determine the posterior mode and the required posterior interval with good precision.



We recommend using the Gibbs sampler for high counts and the quadrature for low counts.

6. SUMMARY

We have developed a method to compute hardness ratios of counts in nonoverlapping passbands that is fully Bayesian and properly takes into account

1. the Poissonian nature of the data,
2. background counts, allowing for differences in collecting areas, effective areas, and exposure times between the source and background regions, and
3. both informative and noninformative prior information.

The algorithm is implemented in two forms: a Monte Carlo based Gibbs sampler that samples from the posterior probability distributions of the counts intensities and another that numerically integrates the joint probability distribution of the counts intensities using quadrature. We carry out comparison tests between our method and the classical method and show that the Bayesian method works even in the low-counts regime, where the classical method fails, and matches the results of the classical method at high counts, where a Gaussian approximation of the errors is appropriate. We apply the method to real-world test cases of

1. determining candidate quiescent low-mass X-ray binaries in the Terzan 5 cluster, where we show that it can be applied even in cases where the classical method is entirely inapplicable, such as those where the source is detected in only one passband, and

2. tracking the time evolution of a flare on the dMe star Ross 154, where we demonstrate the flexibility of the method in being able to work around the conflicting limitations imposed by time bin sizes and time resolution.

Additional applications of these methods include spectral classification of the X-ray sources detected in deep surveys and other galaxies, and spectral variability of AGNs (active galactic nuclei) and X-ray binaries. The physical properties of the source influence the source intensities λ_S and λ_H; given a source spectral model, the two quantities are related. Thus, variations in the hardness ratios reflect changes in the source properties (e.g., sources with different temperatures have different colors). A relation between source properties, physical model parameters, and the hardness ratio is a natural extension of this work and will be presented in the future.

We thank Yue Wu for assistance with code development and Dong-Woo Kim and Peter Edmonds for useful discussions. The authors gratefully acknowledge funding for this project partially provided by NSF grants DMS-01-04129, DMS-04-38240, and DMS-04-06085 and by NASA Contracts NAS8-39073 and NAS8-03060 (CXC), and NASA grant NAG5-13056. This work is a product of joint work with the California-Harvard astrostatistics collaboration (CHASC) whose members are T. P., V. K., A. S., D. v. D., and A. Z. and J. Chiang, A. Connors, D. Esch, P. Freeman, H. Kang, X.-L. Meng, J. Scargle, E. Sourlas, and Y. Yu.

APPENDIX A

COMPUTING THE HARDNESS RATIOS

A1. MONTE CARLO INTEGRATION: GIBBS SAMPLER

Monte Carlo methods can be used in Bayesian statistical inference if representative values of the model parameters can be simulated from the posterior distribution. Given a sufficient simulation size, these simulated values can be used to summarize and describe the posterior distribution and hence the Bayesian statistical inference. For example, the posterior mean can be computed by averaging these simulated values, and posterior intervals are computed using quantiles of the simulated values. Markov chain Monte Carlo (MCMC) methods (Tanner & Wong 1987) accomplish Monte Carlo simulation by constructing a Markov chain with a stationary distribution equal to the target posterior distribution. The Markov chain is then run and upon convergence delivers the desired Monte Carlo simulation of the posterior distribution. The Gibbs sampler (Geman & Geman 1984) is a special case of MCMC that constructs the Markov chain through alternating conditional simulation of a subset of the model parameters conditioning on the others. A more detailed description of these methods can be found in van Dyk (2003) and the Appendix of van Dyk et al. (2001).

In order to construct a Gibbs sampler, we embed the statistical model described in § 2.1 into a larger model. Since the observed counts are the sum of the source counts (η) and the background counts (ζ), we write S = η_S + ζ_S and H = η_H + ζ_H. That is, the source counts are modeled as independent Poisson random variables,

\[
  \eta_S \sim \mathrm{Poisson}(e_S \lambda_S), \qquad \eta_H \sim \mathrm{Poisson}(e_H \lambda_H), \qquad (A1)
\]

and the background counts in the source exposure area, as independent Poisson random variables,

\[
  \zeta_S \sim \mathrm{Poisson}(e_S \xi_S), \qquad \zeta_H \sim \mathrm{Poisson}(e_H \xi_H), \qquad (A2)
\]

where λ and ξ denote the expected source and background counts in the source region. This implies equation (5), where the detected photons in the source region are a convolution of the source and background counts, i.e., S = η_S + ζ_S and H = η_H + ζ_H. Using the method of data augmentation, we treat the background counts in the source exposure area (ζ_S and ζ_H) as missing data; the source counts (η_S and η_H) are fully determined when ζ_S and ζ_H are known. Intuitively, it is straightforward to estimate the Poisson intensities if the detected photons are split into the source and background counts. Based on this setting, the Gibbs sampler produces Monte Carlo draws of λ_S and λ_H, as well as stochastically imputing the missing data (ζ_S and ζ_H). The conditional independence of λ_S and λ_H implies that a Monte Carlo simulation of each hardness ratio can be obtained by simply transforming that of λ_S and λ_H.

Due to the conditional independence, both soft and hard bands also have exactly the same sampling steps, and thus we illustrate the Gibbs sampler only for the soft band. First, the joint posterior distribution of λ_S, ξ_S, and ζ_S is given by

\[
  p(\lambda_S, \xi_S, \zeta_S \mid S, B_S)
  \propto p(S \mid \lambda_S, \zeta_S)\, p(B_S \mid \xi_S)\, p(\zeta_S \mid \xi_S)\, p(\lambda_S)\, p(\xi_S)
  \propto \frac{1}{(S-\zeta_S)!\,\zeta_S!}\;
          \lambda_S^{\,S-\zeta_S+\psi_{S1}-1}\;
          \xi_S^{\,B_S+\zeta_S+\psi_{S3}-1}\;
          e^{-(e_S+\psi_{S2})\lambda_S-(e_S+r e_S+\psi_{S4})\xi_S}. \qquad (A3)
\]

That is, conditional on the total soft counts (S), the unobserved background counts in the source exposure area (ζ_S) follow a binomial distribution: given the current iterates of the parameters, λ_S^(t) and ξ_S^(t), step 1 is given by

1. draw ζ_S^(t+1) from ζ_S | (λ_S^(t), ξ_S^(t), S, B_S) ~ Binomial(S, ξ_S^(t) / (λ_S^(t) + ξ_S^(t)))

for t = 1, ..., T. Next, steps 2 and 3 are to draw the source and background intensities from their γ-distributions. In particular, steps 2 and 3 find the next iterates of the intensities using

2. draw λ_S^(t+1) from λ_S | (ξ_S^(t), ζ_S^(t+1), S, B_S) ~ γ(S − ζ_S^(t+1) + ψ_S1, e_S + ψ_S2), and
3. draw ξ_S^(t+1) from ξ_S | (λ_S^(t+1), ζ_S^(t+1), S, B_S) ~ γ(B_S + ζ_S^(t+1) + ψ_S3, e_S + r e_S + ψ_S4)

for t = 1, ..., T. To accomplish one Gibbs iteration, these steps are implemented for the soft and hard bands. After iterating the Gibbs sampler T times, we collect a posterior sample {λ_S^(t), λ_H^(t); t = t_0 + 1, ..., T} for a sufficiently long burn-in period t_0 (the term ``burn-in'' refers to the number of iterations necessary for the Markov chain to converge and begin to sample from the posterior probability distribution). The analytical calculation to determine burn-in is far from computationally feasible in most situations. However, visual inspection of plots of the Monte Carlo output is commonly used for determining burn-in. More formal tools for determining t_0, called convergence diagnostics, have been proposed; for a recent review, see Cowles & Carlin (1996). Under a monotone transformation (see eqs. [12], [13], and [14]) of the posterior samples, T − t_0 Monte Carlo draws for each hardness ratio are obtained, which enables us to find its point estimates and the corresponding error bar (see § 2.2.1 for details).
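For concreteness, the per-band sampler above can be written in a few lines (a sketch that follows the steps as reconstructed here, not any released implementation; the function name, default hyperparameters, and starting values are ours):

    import numpy as np

    def gibbs_band(counts, bkg_counts, e, r, psi=(0.5, 0.0, 0.5, 0.0),
                   T=20000, t0=5000, seed=0):
        """Three-step Gibbs sampler for one band: impute the background counts in
        the source region (zeta), then draw the source (lambda) and background (xi)
        intensities from their conjugate gamma distributions.  r is the
        background-to-source scaling that appears in eq. (A3)."""
        rng = np.random.default_rng(seed)
        p1, p2, p3, p4 = psi                          # prior hyperparameters (assumed defaults)
        lam = max(counts, 1) / e                      # crude starting values
        xi = max(bkg_counts, 1) / (r * e)
        draws = np.empty(T)
        for t in range(T):
            zeta = rng.binomial(counts, xi / (lam + xi))                    # step 1
            lam = rng.gamma(counts - zeta + p1, 1.0 / (e + p2))             # step 2
            xi = rng.gamma(bkg_counts + zeta + p3, 1.0 / (e + r * e + p4))  # step 3
            draws[t] = lam
        return draws[t0:]                             # discard the burn-in draws

    # lam_S = gibbs_band(S, B_S, e_S, r); lam_H = gibbs_band(H, B_H, e_H, r)
    # then R = lam_S / lam_H, C = log10(R), HR = (lam_H - lam_S) / (lam_H + lam_S)

Pairing the post-burn-in draws from the two bands and transforming them yields the Monte Carlo sample of each hardness ratio, as described above.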

A2. NUMERICAL INTEGRATION: GAUSSIAN QUADRATURE

Instead of introducing the missing data, Bayes's theorem can be used to compute the joint posterior distribution of λ_S and λ_H analytically, based on which we obtain the posterior distribution of each hardness ratio through quadrature. Because the models for the hard and soft bands are independent and symmetric, we illustrate the computation only for the soft intensity λ_S. First, the joint posterior distribution of λ_S and ξ_S in equation (8) is written out as

\[
  p(\lambda_S, \xi_S \mid S, B_S)
  = \frac{p(\lambda_S)\, p(\xi_S)\, p(S \mid \lambda_S, \xi_S)\, p(B_S \mid \xi_S)}
         {\int_0^\infty \!\!\int_0^\infty p(\lambda_S)\, p(\xi_S)\, p(S \mid \lambda_S, \xi_S)\, p(B_S \mid \xi_S)\, d\xi_S\, d\lambda_S}
  = \frac{(\lambda_S+\xi_S)^S\, \lambda_S^{\,\psi_{S1}-1}\, \xi_S^{\,B_S+\psi_{S3}-1}\,
          e^{-(e_S+\psi_{S2})\lambda_S-(e_S+r e_S+\psi_{S4})\xi_S}}
         {\int_0^\infty \!\!\int_0^\infty (\lambda_S+\xi_S)^S\, \lambda_S^{\,\psi_{S1}-1}\, \xi_S^{\,B_S+\psi_{S3}-1}\,
          e^{-(e_S+\psi_{S2})\lambda_S-(e_S+r e_S+\psi_{S4})\xi_S}\, d\xi_S\, d\lambda_S}. \qquad (A4)
\]

Then, applying the binomial expansion to (λ_S + ξ_S)^S, i.e.,

\[
  (\lambda_S+\xi_S)^S
  = \sum_{j=0}^{S} \frac{\Gamma(S+1)}{\Gamma(j+1)\,\Gamma(S-j+1)}\, \lambda_S^{\,j}\, \xi_S^{\,S-j}, \qquad (A5)
\]

enables us to analytically integrate ξ_S out of the joint distribution in equation (A4) and obtain an analytical solution to the marginal posterior distribution of λ_S. Therefore, using the binomial expansion, equation (A4) becomes

\[
  p(\lambda_S, \xi_S \mid S, B_S)
  = \left[\, \sum_{j=0}^{S} \frac{\Gamma(S+1)}{\Gamma(j+1)\,\Gamma(S-j+1)}\,
       \frac{\Gamma(S-j+B_S+\psi_{S3})}{(e_S+r e_S+\psi_{S4})^{S-j+B_S+\psi_{S3}}}\,
       \frac{\Gamma(j+\psi_{S1})}{(e_S+\psi_{S2})^{j+\psi_{S1}}} \right]^{-1}
    \sum_{j=0}^{S} \frac{\Gamma(S+1)}{\Gamma(j+1)\,\Gamma(S-j+1)}\,
       \lambda_S^{\,j+\psi_{S1}-1}\, \xi_S^{\,S-j+B_S+\psi_{S3}-1}\,
       e^{-(e_S+\psi_{S2})\lambda_S-(e_S+r e_S+\psi_{S4})\xi_S}, \qquad (A6)
\]

and then ξ_S is integrated out of equation (A6), i.e.,

\[
  p(\lambda_S \mid S, B_S)
  = \left[\, \sum_{j=0}^{S} \frac{1}{\Gamma(j+1)\,\Gamma(S-j+1)}\,
       \frac{\Gamma(S-j+B_S+\psi_{S3})}{(e_S+r e_S+\psi_{S4})^{S-j+B_S+\psi_{S3}}}\,
       \frac{\Gamma(j+\psi_{S1})}{(e_S+\psi_{S2})^{j+\psi_{S1}}} \right]^{-1}
    \sum_{j=0}^{S} \frac{1}{\Gamma(j+1)\,\Gamma(S-j+1)}\,
       \frac{\Gamma(S-j+B_S+\psi_{S3})}{(e_S+r e_S+\psi_{S4})^{S-j+B_S+\psi_{S3}}}\,
       \lambda_S^{\,j+\psi_{S1}-1}\, e^{-(e_S+\psi_{S2})\lambda_S}. \qquad (A7)
\]
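Equation (A7) expresses the marginal posterior as a finite mixture of gamma densities in λ_S, which makes the gridded evaluation used by the quadrature direct. The sketch below follows the notation reconstructed above; the function name and the example inputs are ours, and log-gamma arithmetic is used only to avoid overflow at large S.

    import numpy as np
    from scipy.special import gammaln
    from scipy.stats import gamma

    def marginal_posterior_mixture(S, B_S, e_S, r, psi=(0.5, 0.0, 0.5, 0.0)):
        """Mixture weights and gamma components of p(lambda_S | S, B_S), cf. eq. (A7)."""
        p1, p2, p3, p4 = psi
        j = np.arange(S + 1)
        logw = (gammaln(j + p1) - (j + p1) * np.log(e_S + p2)
                + gammaln(S - j + B_S + p3)
                - (S - j + B_S + p3) * np.log(e_S + r * e_S + p4)
                - gammaln(j + 1) - gammaln(S - j + 1))
        w = np.exp(logw - logw.max())
        return w / w.sum(), j + p1, e_S + p2      # weights, gamma shapes, common rate

    # Evaluate the posterior density on equally spaced abscissae, as in the quadrature:
    w, shapes, rate = marginal_posterior_mixture(S=5, B_S=3, e_S=1.0, r=10.0)
    lam = np.linspace(1e-3, 20.0, 2000)
    pdf = sum(wj * gamma.pdf(lam, a, scale=1.0 / rate) for wj, a in zip(w, shapes))

The hard-band density p(λ_H | H, B_H) has the same form, and the hardness-ratio densities then follow from the change-of-variables relations given below.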



TABLE 2
Coverage of the X-Ray Color (C) Using the Bayesian Method with Different Noninformative Prior Distribution Indices

[The table is laid out as a grid of soft-band source intensity λ_S (rows, 0.5 to 64.0) versus hard-band source intensity λ_H (columns, 0.5 to 64.0), both in counts (source area)^−1. For each (λ_S, λ_H) pair it lists the coverage rate (CR, in percent)^a and the average length (AL)^b of the 95% posterior intervals for C, computed with prior indices ψ = 0+, 1/2, and 1; see Table 3 for the legend and Figure 9 for a graphical representation of the same data.]

a Coverage rate (CR) of the corresponding interval.
b Average length (AL) of the corresponding interval.

Here the prior independence of λ_S and λ_H implies that the joint posterior distribution of these two intensities is decomposed into a product of their marginal posterior distributions, i.e.,

\[
  p(\lambda_S, \lambda_H \mid S, H, B_S, B_H) = p(\lambda_S \mid S, B_S)\, p(\lambda_H \mid H, B_H). \qquad (A8)
\]

This assumption of independence between λ_S and λ_H can be relaxed, and hierarchical modeling for sources to determine clustering properties and spectral parameters can be devised (in preparation). Using the joint posterior distribution in equation (A8), we compute the analytical solution to the posterior distribution of each hardness ratio as follows:

1. the posterior distribution of R is obtained after integrating λ_H out of

\[
  p(R, \lambda_H \mid S, H, B_S, B_H)\, dR\, d\lambda_H
  = p(\lambda_S, \lambda_H \mid S, H, B_S, B_H)
    \left|\frac{\partial(\lambda_S, \lambda_H)}{\partial(R, \lambda_H)}\right| dR\, d\lambda_H
  = p(R\,\lambda_H, \lambda_H \mid S, H, B_S, B_H)\, \lambda_H\, dR\, d\lambda_H;
\]

2. the posterior distribution of C is obtained after integrating λ_H out of

\[
  p(C, \lambda_H \mid S, H, B_S, B_H)\, dC\, d\lambda_H
  = p(\lambda_S, \lambda_H \mid S, H, B_S, B_H)
    \left|\frac{\partial(\lambda_S, \lambda_H)}{\partial(C, \lambda_H)}\right| dC\, d\lambda_H
  = p(10^{C}\lambda_H, \lambda_H \mid S, H, B_S, B_H)\, 10^{C}\ln(10)\, \lambda_H\, dC\, d\lambda_H;
\]

3. the posterior distribution of HR is obtained after integrating ω = λ_S + λ_H out of

\[
  p(\mathrm{HR}, \omega \mid S, H, B_S, B_H)\, d\mathrm{HR}\, d\omega
  = p(\lambda_S, \lambda_H \mid S, H, B_S, B_H)
    \left|\frac{\partial(\lambda_S, \lambda_H)}{\partial(\mathrm{HR}, \omega)}\right| d\mathrm{HR}\, d\omega
  = p\!\left(\frac{(1-\mathrm{HR})\,\omega}{2}, \frac{(1+\mathrm{HR})\,\omega}{2} \,\middle|\, S, H, B_S, B_H\right) \frac{\omega}{2}\, d\mathrm{HR}\, d\omega.
\]

To numerically integrate out a parameter in these analytical solutions, we employ quadrature, which precisely evaluates the integral of a real-valued function (and has nothing to do with Gaussian assumptions on the counts statistics); refer to Wichura (1989) for details of the algorithm. Due to the complex expression, the probability density for each hardness ratio is approximated at equally spaced abscissae over a finite range of the hardness ratio. Then we summarize the approximate posterior density in order to calculate the inferences, as detailed in § 2.2.1.

APPENDIX B

EFFECT OF NONINFORMATIVE PRIORS

In order to compare the effect of these noninformative prior distributions, we have carried out simulations for low-counts and high-counts cases to test the computed coverage against the simulation: Table 2 presents the coverage rate and mean length of the 95% intervals for the color, C. In order to simplify the comparison, we assume that there is no background contamination and simulate 1000 realizations of source counts for each case. The tables are laid out as a grid of soft and hard source counts per source area (i.e., λ_S vs. λ_H; see eq. [A1]). For each (λ_S, λ_H) pair, we report the percentage of the simulations that result in 95% posterior intervals that actually contain the true color, C(λ_S, λ_H), along with the mean length of the 95% posterior intervals from the simulations. This calculation is carried out for three choices of the prior index, ψ = 0+, 1/2, and 1, corresponding to the top, middle, and bottom elements in each section of the table. In these simulations, we theoretically expect that the computed 95% posterior intervals contain the actual value with a probability of 95%. Due to the Monte Carlo simulation errors, we expect most coverage rates to be between 0.93 and 0.97, which are three standard deviations away from 0.95; the standard deviation of the coverage probability for 95% posterior intervals is given by [(0.95)(0.05)/1000]^{1/2} ≈ 0.0069 under a binomial model for the Monte Carlo simulation.
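The coverage experiment can be sketched in a few lines of simulation (a simplified, background-free version with e_S = e_H = 1; equal-tail 95% intervals are used here rather than the intervals of § 2.2.1, and the function name is ours):

    import numpy as np

    def coverage_of_color(lam_S, lam_H, psi=0.5, nsim=1000, ndraw=20000, seed=0):
        """Simulate counts, form 95% posterior intervals for C = log10(lam_S/lam_H),
        and return the coverage rate and average interval length.  Simulations with
        S = 0 or H = 0 are excluded, as in Fig. 10."""
        rng = np.random.default_rng(seed)
        true_C = np.log10(lam_S / lam_H)
        hits, lengths, n = 0, 0.0, 0
        for _ in range(nsim):
            S, H = rng.poisson(lam_S), rng.poisson(lam_H)
            if S == 0 or H == 0:
                continue
            C = np.log10(rng.gamma(S + psi, 1.0, ndraw) / rng.gamma(H + psi, 1.0, ndraw))
            lo, hi = np.percentile(C, [2.5, 97.5])
            hits += (lo <= true_C <= hi)
            lengths += hi - lo
            n += 1
        return hits / n, lengths / n

    # e.g., coverage_of_color(2.0, 1.0, psi=0.5) returns a (CR, AL) pair comparable
    # in spirit to the entries of Table 2.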



Table 2 presents the coverage rate and average length of the posterior intervals for small and large magnitudes of the source intensities. The key is given in Table 3. For each (λ_S, λ_H) pair, the posterior intervals are simulated using different prior distribution indices (ψ = 0+, 1/2, and 1), and the summary statistics--the coverage rate and the mean lengths of the intervals--are displayed from top (ψ = 0+) to bottom (ψ = 1) within each section. The same information is shown in graphical form in Figure 9. The use of ψ = 0+ tends to yield very wide posterior intervals and to undercover the true X-ray color when the source intensities are low. On the other hand, the other two noninformative prior distribution indices produce much shorter posterior intervals and maintain high coverage rates. With high counts, however, the choice of a noninformative prior distribution does not have a noticeable effect on the statistical properties of the posterior intervals.

The same results are summarized in graphical form in Figure 10, where the 95% coverage rates are shown for various cases. The empirical distributions of the coverage rate computed under either low-count (λ_S = 0.5, 1, 2, or 4 or λ_H = 0.5, 1, 2, or 4) or high-count (all of the remaining) scenarios are shown, along with a comparison with the results from the classical method (last column). The distinction between the Bayesian and classical methods is very clear in this figure. Note that the posterior intervals overcover relative to our theoretical expectation, but to a lesser degree than the classical intervals. Overcoverage is conservative and preferred to undercoverage, but should be minimized when possible. Considering the low-count and high-count scenarios, Figure 10 suggests using the Jeffreys noninformative prior distribution (i.e., ψ = 1/2) in general. At low counts, the shape (and consequently the coverage) of the posterior distribution is dominated by our prior assumptions, as encoded in the parameter ψ. The coverage rate will also be improved when informative priors are used.

TABLE 3
Legend Key for Table 2

Hard-band source intensity, λ_H [counts (source area)^−1]
Soft-band source intensity, λ_S [counts (source area)^−1]
Coverage rate of intervals with ψ = 0+
Coverage rate of intervals with ψ = 1/2
Coverage rate of intervals with ψ = 1
Average length of intervals with ψ = 0+
Average length of intervals with ψ = 1/2
Average length of intervals with ψ = 1

Fig. 9.--Graphical representation of the data in Table 2. The symbols are located on a grid corresponding to the appropriate λ_S (abscissa) and λ_H (ordinate), for ψ = 0+ (top), ψ = 1/2 (middle), and ψ = 1 (bottom). The shading of each symbol represents the coverage, with 100% being lightest and the shading progressively getting darker as the coverage percentage decreases. The sizes of the symbols are scaled as (length of interval)^{1/2}. Note that for small values of ψ, which correspond to a prior expectation of a large dynamic range in the source colors, the intervals are large when the counts are small (i.e., the choice of prior distribution has a large effect) and decrease to values similar to those at larger ψ when there are more counts. [See the electronic edition of the Journal for a color version of this figure.]

Fig. 10.--Empirical distributions of coverage rates with different prior distribution indices. The first three columns of panels are constructed under different types of noninformative prior distributions, ψ = 0+ (first column), ψ = 1/2 (second column), and ψ = 1 (third column). The fourth column depicts results from the classical method and is presented for reference. (In calculating the coverage and the length of the intervals, we exclude simulations in which S = 0 or H = 0, since C is undefined in such cases.) The top panels correspond to the low-count scenarios λ_S = (0.5, 1, 2, 4) or λ_H = (0.5, 1, 2, 4), and the bottom panels correspond to the remaining cases. The coverage rate is the fraction of simulated intervals that contain the parameter under which the data were simulated. In this case, we compute the coverage rate based on the 95% posterior intervals for C with different combinations of values for λ_S and λ_H. The histograms represent the empirical distribution of the coverage rate. The dotted vertical lines represent the expected coverage rate of 95%.
REFERENCES

Alexander, D. M., Brandt, W. N., Hornschemeier, A. E., Garmire, G. P., Schneider, D. P., Bauer, F. E., & Griffiths, R. E. 2001, AJ, 122, 2156
Babu, G. J., & Feigelson, E. D. 1997, Statistical Challenges in Modern Astronomy II (Berlin: Springer)
Bradt, H., et al. 1976, ApJ, 204, L67
Brandt, W. N., & Hasinger, G. 2005, ARA&A, 43, 827
Brandt, W. N., et al. 2001a, AJ, 122, 1
------. 2001b, AJ, 122, 2810
Brown, E. F., Bildsten, L., & Rutledge, R. E. 1998, ApJ, 504, L95
Campana, S., Colpi, M., Mereghetti, S., Stella, L., & Tavani, M. 1998, A&A Rev., 8, 279
Casella, G., & Berger, R. L. 2002, Statistical Inference (2nd ed.; Pacific Grove: Duxbury)
Cowles, M. K., & Carlin, B. P. 1996, J. Am. Stat. Assoc., 91, 883
Esch, D. N. 2003, Ph.D. thesis, Harvard Univ.
Feigelson, E. D., & Babu, G. J. 1992, Statistical Challenges in Modern Astronomy (Berlin: Springer)
------. 2003, Statistical Challenges in Modern Astronomy III (Berlin: Springer)
Gehrels, N. 1986, ApJ, 303, 336
Geman, S., & Geman, D. 1984, IEEE Trans. Pattern Anal. Machine Intelligence, 6, 721
Giacconi, R., et al. 2001, ApJ, 551, 624
Gillessen, S., & Harney, H. L. 2005, A&A, 430, 355
Gregory, P. C., & Loredo, T. J. 1992, ApJ, 398, 146
Hasinger, G., & van der Klis, M. 1989, A&A, 225, 79
Heinke, C. O., Grindlay, J. E., Edmonds, P. D., Lloyd, D. A., Murray, S. S., Cohn, H. N., & Lugger, P. M. 2003, ApJ, 598, 501
Heinke, C. O., Wijnands, R., Cohn, H. N., Lugger, P. M., Grindlay, J. E., Pooley, D., & Lewin, W. H. G. 2006, ApJ, submitted (astro-ph/0606253)
Hong, J. S., Schlegel, E. M., & Grindlay, J. E. 2004, ApJ, 614, 508
Jansen, F., et al. 2001, A&A, 365, L1
Kashyap, V. L., & Drake, J. J. 1998, ApJ, 503, 450
Kashyap, V. L., Drake, J. J., Güdel, M., & Audard, M. 2002, ApJ, 580, 1118
Kim, D.-W., Fabbiano, G., & Trinchieri, G. 1992, ApJ, 393, 134
Kim, D.-W., et al. 2006, ApJ, 644, 829
Pallavicini, R., Tagliaferri, G., & Stella, L. 1990, A&A, 228, 403
Park, T., et al. 2006, ApJ, submitted
Protassov, R., van Dyk, D. A., Connors, A., Kashyap, V. L., & Siemiginowska, A. 2002, ApJ, 571, 545
Reale, F., Betta, R., Peres, G., Serio, S., & McTiernan, J. 1997, A&A, 325, 782
Reale, F., Güdel, M., Peres, G., & Audard, M. 2004, A&A, 416, 733
Reale, F., Serio, S., & Peres, G. 1993, A&A, 272, 486
Rosner, R., Golub, L., & Vaiana, G. S. 1985, ARA&A, 23, 413
Schmitt, J. H. M. M., & Favata, F. 1999, Nature, 401, 44
Serio, S., Reale, F., Jakimiec, J., Sylwester, B., & Sylwester, J. 1991, A&A, 241, 197
Silverman, J. D., et al. 2005, ApJ, 618, 123
Tanner, M. A., & Wong, W. H. 1987, J. Am. Stat. Assoc., 82, 528
Tuohy, I. R., Garmire, G. P., Lamb, F. K., & Mason, K. O. 1978, ApJ, 226, L17
Vaiana, G. S., et al. 1981, ApJ, 245, 163
van den Oord, G. H. J., & Mewe, R. 1989, A&A, 213, 245
van Dyk, D. A. 2003, in Statistical Challenges in Modern Astronomy III, ed. E. D. Feigelson & G. J. Babu (Berlin: Springer), 41
van Dyk, D. A., Connors, A., Kashyap, V. L., & Siemiginowska, A. 2001, ApJ, 548, 224
van Dyk, D. A., & Hans, C. M. 2002, in Spatial Cluster Modelling, ed. D. Denison & A. Lawson (London: CRC), 175
van Dyk, D. A., & Kang, H. 2004, Stat. Sci., 19, 275
Wargelin, B. J., Kashyap, V. L., Drake, J. J., García-Alvarez, D., & Ratzlaff, P. W. 2006, ApJ, submitted
Weisskopf, M. C., Tananbaum, H. D., Van Speybroeck, L. P., & O'Dell, S. L. 2000, Proc. SPIE, 4012, 2
Wichura, M. J. 1989, Dept. Stat. Tech. Rep. 257 (Chicago: Univ. Chicago)
Zamorani, G., et al. 1981, ApJ, 245, 357