Source: http://hea-www.harvard.edu/AstroStat/Demo/pyBLoCXS/IACHEC2014/
This is a walkthrough of a demo on pyBLoCXS and calibration uncertainties conducted at IACHEC 2014 on May 12 by Vinay Kashyap (CXC/CHASC/CfA). The demo complements the talk on strategies to deal with systematic calibration errors from earlier in the day.
This document is split into multiple parts: the first gives details on the software and data that were used in the demo, and describes ways to get and install the necessary components; the second shows how to run the MCMC-based Bayesian spectral fitting algorithm (known as pyBLoCXS; van Dyk et al. 2001) in Sherpa; the third describes how to generate random sample curves of effective areas representative of known systematic errors using arfmunge; the fourth shows how to compress the resulting sample into a format that is flexible and portable, and which is currently implemented in Sherpa-pyBLoCXS; and the fifth shows how to carry out spectral analysis that includes calibration uncertainties.
To follow through all the steps in this demo, the following are needed:
We will first demonstrate standard analysis. We will run a Sherpa fit, and then a Bayesian MCMC fit, and compare the results.
Unzip and untar the data:
covariance 1-sigma (68.2689%) bounds:
   Param         Best-Fit     Lower Bound   Upper Bound
   -----         --------     -----------   -----------
   abs1.nH       0.103681     -0.00569035   0.00569035
   p1.PhoIndex   1.24369      -0.0217466    0.0217466
   p1.norm       0.000581215  -1.26871e-05  1.26871e-05

confidence 1-sigma (68.2689%) bounds:
   Param         Best-Fit     Lower Bound   Upper Bound
   -----         --------     -----------   -----------
   abs1.nH       0.103681     -0.00569035   0.00569035
   p1.PhoIndex   1.24369      -0.0217466    0.0217466
   p1.norm       0.000581215  -1.2588e-05   1.28854e-05
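The difference between the two kinds of bounds can be sketched with a toy one-parameter fit: a covariance-style bound comes from the local curvature of the chi-square surface at the minimum, while a confidence-style bound steps the parameter until the chi-square rises by 1. Everything below (the template, errors, and data) is a hypothetical stand-in, not the spectrum fit above:

```python
import math, random

# Hypothetical one-parameter linear model d_i = A * s_i with known errors.
random.seed(42)
true_A = 2.0
s = [1.0, 0.8, 0.6, 0.5, 0.4]                    # fixed model template
sig = [0.1] * 5                                   # known Gaussian errors
d = [true_A * si + random.gauss(0, sg) for si, sg in zip(s, sig)]

def chi2(A):
    return sum((di - A * si) ** 2 / sg ** 2 for di, si, sg in zip(d, s, sig))

# Best-fit normalization (analytic for this linear model)
A_hat = (sum(di * si / sg ** 2 for di, si, sg in zip(d, s, sig))
         / sum(si * si / sg ** 2 for si, sg in zip(s, sig)))

# covariance-style bound from the curvature d2(chi2)/dA2 at the minimum
curv = sum(2 * si * si / sg ** 2 for si, sg in zip(s, sig))
sigma_covar = math.sqrt(2.0 / curv)

# confidence-style bound: walk A until chi2 rises by 1 above the minimum
def delta_chi2_bound(direction, step=1e-4):
    A, c0 = A_hat, chi2(A_hat)
    while chi2(A) - c0 < 1.0:
        A += direction * step
    return abs(A - A_hat)

lo, hi = delta_chi2_bound(-1), delta_chi2_bound(+1)
print(A_hat, sigma_covar, lo, hi)
```

For this exactly quadratic surface the two methods agree and the bounds are symmetric; the slight asymmetry in the confidence bounds on p1.norm above is what appears when the real surface is not quite parabolic.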
Now, do a pyBLoCXS run. pyBLoCXS requires that a standard fit() and a covar() be run beforehand, so we can continue from where we stopped above. To run pyBLoCXS, we must first choose a sampler. Four samplers are currently recognized in Sherpa, of which three are implemented.
['metropolismh', 'fullbayes', 'mh', 'pragbayes']
array([ 1.03680945e-01, 1.24369149e+00, 5.81215353e-04])
[0.0054630064588717586, 0.021625075347852889, 1.2220452692220281e-05]
pyBLoCXS takes the current values of its starting point, so different chains with different starting points can be run. Try setting, e.g.,
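The effect of different starting points can be illustrated with a toy Metropolis-Hastings chain. The 1-D Gaussian "posterior" below is an invented stand-in loosely mimicking the PhoIndex results, not the real likelihood, and `mh_chain` is a hypothetical helper, not a Sherpa function:

```python
import math, random

def loglike(theta):
    # hypothetical 1-D posterior: Gaussian centered at 1.24 with sigma 0.02
    return -0.5 * ((theta - 1.24) / 0.02) ** 2

def mh_chain(start, niter=5000, scale=0.02, seed=0):
    rng = random.Random(seed)
    theta, ll = start, loglike(start)
    draws = []
    for _ in range(niter):
        prop = theta + rng.gauss(0, scale)                 # symmetric proposal
        llp = loglike(prop)
        if rng.random() < math.exp(min(0.0, llp - ll)):    # Metropolis accept
            theta, ll = prop, llp
        draws.append(theta)
    return draws

# Two chains started well away from the best fit, on either side of it
c1 = mh_chain(1.0, seed=1)
c2 = mh_chain(1.5, seed=2)

# After discarding burn-in, both chains should agree on the posterior mean
m1 = sum(c1[1000:]) / len(c1[1000:])
m2 = sum(c2[1000:]) / len(c2[1000:])
print(m1, m2)
```

Agreement between chains started at different points is a basic convergence check on the sampler.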
The calibration library is a set of calibration products generated as random samples based on an uncertainty model of the instrument. Jeremy Drake and Pete Ratzlaff have devised a scheme where various subsystem uncertainties are put together to generate a representative sample of possible calibration products (see Drake et al. 2006). This scheme is implemented in a package called MCCal, which generates ARFs and RMFs and fits a specified spectral model to each realization. Please contact them if you wish to try it out. Here, we will consider only Chandra/ACIS-S effective areas, and use the program arfgen (part of MCCal) to generate 500 realizations of the ARF for the Q1127-145 dataset considered above.
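The idea of the sampling scheme can be sketched in miniature: perturb a default effective-area curve with a few simple "subsystem" uncertainty modes, here an overall gray shift plus an energy-dependent tilt. The grid, curve, modes, and amplitudes below are all invented for illustration; the real arfgen uncertainty model is far richer:

```python
import math, random

# Hypothetical default ARF on a fine energy grid (not the real ACIS-S curve)
energies = [0.3 + 0.01 * i for i in range(1000)]                  # keV grid
default_arf = [600.0 * math.exp(-0.5 * ((e - 1.5) / 1.0) ** 2)    # cm^2
               for e in energies]

def draw_arf(rng):
    norm = rng.gauss(1.0, 0.03)      # ~3% overall normalization uncertainty
    tilt = rng.gauss(0.0, 0.02)      # small energy-dependent slope
    return [a * norm * (1.0 + tilt * (e - 1.5))
            for a, e in zip(default_arf, energies)]

rng = random.Random(866)
library = [draw_arf(rng) for _ in range(500)]     # 500 realizations, as above
print(len(library), len(library[0]))
```

Each element of `library` is one plausible effective-area curve, and the scatter among them encodes the systematic uncertainty.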
arfgen may be run in general as follows, assuming that (a) the Perl modules PDL and FITSIO are installed in /home/username/perl5/lib/perl5/, (b) the FITSIO module
These ARFs can be read into IDL
The ARF calibration library computed above needs to be ingested into pyBLoCXS in a flexible manner. To do this, we have devised a file format called AREF (Ancillary Response Error Format; Kashyap et al. 2008) that is a superset of the OGIP ARF format, and is designed to allow a large variety of uncertainties to be encoded.
These ARFs can also be consolidated and stored in a single file and used within pyBLoCXS. One of the AREF formats is a method called SIM1DADD. To construct it, use the IDL script samp2fits.pro, which produces the FITS file 866_samp_aref.fits.
It is true that disk space is cheap, so there is the option of carrying around full sample libraries for each dataset.
However, such a strategy can quickly become unwieldy, and Principal Components compression provides other advantages (see Xu et al. 2014).
So here we will show how to make an AREF file that utilizes the PCA1DADD method.
We will first use an R script, run_pca_on_arflib.r, a wrapper to R's PCA routine, which produces the following output files:
pcomps_866.txt
sqlamb_866.txt
sqlambf='sqlamb_866.txt'
pcompf='pcomps_866.txt'
avgarff='avgarf_866.txt'
defarff='obs866/acis.arf'
areffil='866_pca_aref.fits'
vthresh=0.99
verbose=1
pca2aref, sqlambf, pcompf, avgarff, defarff, areffil, vthresh=vthresh, ncomp=ncomp, verbose=verbose
The SAMP AREF file has these extensions --
Block    Name        Type    Dimensions
--------------------------------------------------
Block 1: PRIMARY     Null
Block 2: SPECRESP    Table   6 cols x 1078 rows
Block 3: SIMCOMP     Table   2 cols x 500 rows
The PCA AREF file has these --

Block    Name        Type    Dimensions
--------------------------------------------------
Block 1: PRIMARY     Null
Block 2: SPECRESP    Table   6 cols x 1078 rows
Block 3: PCACOMP     Table   4 cols x 13 rows
Here, we will demonstrate how to use the AREF files that contain calibration uncertainty information with pyBLoCXS. First, we have to set the sampler,
{'defaultprior': True, 'inv': False, 'log': False, 'nsubiters': 10, 'originalscale': True, 'p_M': 0.5, 'priors': (), 'priorshape': False, 'scale': 1, 'sigma_m': False, 'simarf': None}
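The role of the nsubiters and simarf options can be sketched schematically: every nsubiters parameter updates, a fresh effective-area realization is drawn from the calibration sample and the likelihood is recomputed with it. The scalar "ARFs", toy likelihood, and tunings below are invented stand-ins for the real Sherpa machinery:

```python
import math, random

rng = random.Random(0)
# stand-in calibration sample: scalar "ARFs" scattered about 1 at the ~5% level
simarf = [rng.gauss(1.0, 0.05) for _ in range(500)]

def loglike(theta, arf):
    # toy likelihood: the data prefer theta * arf ~= 1.24
    return -0.5 * ((theta * arf - 1.24) / 0.02) ** 2

nsubiters, niter = 10, 20000
theta, arf = 1.24, 1.0
ll = loglike(theta, arf)
draws = []
for i in range(niter):
    if i % nsubiters == 0:               # refresh the calibration realization
        arf = simarf[rng.randrange(len(simarf))]
        ll = loglike(theta, arf)
    prop = theta + rng.gauss(0, 0.03)
    llp = loglike(prop, arf)
    if rng.random() < math.exp(min(0.0, llp - ll)):
        theta, ll = prop, llp
    draws.append(theta)

post = draws[2000:]
mean = sum(post) / len(post)
std = math.sqrt(sum((t - mean) ** 2 for t in post) / len(post))
print(round(mean, 3), round(std, 3))
```

The marginal spread of the parameter comes out noticeably wider than the purely statistical 0.02, which is the point of the exercise: the calibration uncertainty is propagated into the error bars.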