In several cases, analyzing simulated data is a powerful tool to demonstrate feasibility; a common example is showing, in support of an observing proposal, what constraints a planned observation would yield. We will illustrate that case here. The first step is to define a model on which to base the simulation. XSPEC creates simulated data by taking the current model, convolving it with the current response matrix, and adding noise appropriate to the specified integration time. Once created, the simulated data can be analyzed in the same way as real data to derive confidence limits.
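Conceptually, the fold-and-add-noise step is simple. The following is a minimal sketch of the idea, not XSPEC's actual implementation: the response matrix, effective-area array, and model rates below are invented toy inputs, and the Poisson sampler is a basic Knuth-style draw.

```python
import math
import random

def draw_poisson(mu, rng):
    # Knuth's algorithm; adequate for the modest per-channel rates here.
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def fake_spectrum(model_rate, rmf, arf, exposure, seed=42):
    """Sketch of a fakeit-style simulation (toy inputs, not XSPEC code).

    model_rate -- photons/cm^2/s in each model energy bin
    rmf        -- rmf[c][e]: probability that a photon in energy bin e
                  is detected in channel c
    arf        -- effective area (cm^2) per energy bin
    exposure   -- integration time in seconds
    """
    rng = random.Random(seed)
    counts = []
    for row in rmf:
        # Expected counts in this channel: model folded through the
        # response and effective area, scaled by the exposure time.
        mu = exposure * sum(r * a * f for r, a, f in zip(row, arf, model_rate))
        counts.append(draw_poisson(mu, rng))
    return counts
```

With a toy diagonal response, a rate of 1e-3 photons/cm^2/s per bin, 100 cm^2 effective area, and a 1000 s exposure, each channel averages about 100 counts; the scatter between channels is the Poisson noise that makes the fake data realistic.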
Let us suppose that we want to measure the intrinsic absorption of a faint high-redshift source using Chandra. Our model is thus a power-law absorbed both by the local Galactic column and an intrinsic column. First, we set up the model. From the literature we have the Galactic absorption column and redshift so:
XSPEC12>mo pha*zpha(zpo)

Input parameter value, delta, min, bot, top, and max values for ...
              1      0.001(     0.01)          0          0     100000      1e+06
1:phabs:nH>0.08
              1      0.001(     0.01)          0          0     100000      1e+06
2:zphabs:nH>1.0
              0      -0.01(     0.01)     -0.999     -0.999         10         10
3:zphabs:Redshift>5.1
              1       0.01(     0.01)         -3         -2          9         10
4:zpowerlw:PhoIndex>1.7
              0      -0.01(     0.01)     -0.999     -0.999         10         10
5:zpowerlw:Redshift>5.1
              1       0.01(     0.01)          0          0      1e+24      1e+24
6:zpowerlw:norm>

========================================================================
Model phabs<1>*zphabs<2>*zpowerlw<3> Source No.: 1   Active/Off
Model Model Component  Parameter  Unit     Value
 par  comp
   1    1   phabs      nH         10^22    8.00000E-02  +/-  0.0
   2    2   zphabs     nH         10^22    1.00000      +/-  0.0
   3    2   zphabs     Redshift            5.10000      frozen
   4    3   zpowerlw   PhoIndex            1.70000      +/-  0.0
   5    3   zpowerlw   Redshift            5.10000      frozen
   6    3   zpowerlw   norm                1.00000      +/-  0.0
Now suppose that we know that the observed 0.5-2.5 keV flux is 5.0e-14 ergs/cm^2/s. We can now derive the correct normalization using the commands energies, flux, and newpar. That is, we determine the flux of the model with a normalization of unity, then work out the new normalization and reset it:
XSPEC12>energies 0.5 2.5 1000
XSPEC12>flux 0.5 2.5
 Model Flux 0.052736 photons (1.0017e-10 ergs/cm^2/s) range (0.50000 - 2.5000 keV)
XSPEC12>newpar 6 5.0e-4
   3 variable fit parameters
XSPEC12>flux
 Model Flux 2.6368e-05 photons (5.0086e-14 ergs/cm^2/s) range (0.50000 - 2.5000 keV)
Here, we have changed the value of the normalization (the sixth parameter) from 1.0 to 5.0e-14 / 1.0017e-10 = 5.0e-4 to give the required flux.
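Because the model flux scales linearly with the normalization, the rescaling is one division. A quick check using the numbers from the transcript above:

```python
# Flux values taken from the walkthrough above.
flux_at_unit_norm = 1.0017e-10  # ergs/cm^2/s with norm = 1.0
target_flux = 5.0e-14           # ergs/cm^2/s, the known observed flux

# The flux is proportional to the norm, so rescale linearly.
new_norm = target_flux / flux_at_unit_norm  # ~5.0e-4, the value given to newpar 6
```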
The simulation is initiated with the command fakeit. If the argument none is given, we will be prompted for the name of the response matrix; if no argument is given, the current response will be used. We also need to reset the energies settings before running fakeit to ensure that the model is calculated over the entire energy range of the response:
XSPEC12>energies reset
XSPEC12>fakeit none
For fake data, file #1 needs response file: aciss_aimpt_cy15.rmf
... and ancillary response file: aciss_aimpt_cy15.arf
There then follows a series of prompts asking whether we want counting statistics (yes!), the name of the fake data file (test.fak in our example), and the integration time (40,000 seconds; the correction norm and background exposure time can be left at their default values).
Use counting statistics in creating fake data? (y):
Input optional fake file prefix:
Fake data file name (aciss_aimpt_cy15.fak): test.fak
Exposure time, correction norm, bkg exposure time (1.00000, 1.00000, 1.00000): 40000.0

No background will be applied to fake spectrum #1

1 spectrum in use

Fit statistic  : Chi-Squared                  350.95     using 1024 bins.
***Warning: Chi-square may not be valid due to bins with zero variance
            in spectrum number(s): 1

Test statistic : Chi-Squared                  350.95     using 1024 bins.
 Null hypothesis probability of 1.00e+00 with 1020 degrees of freedom
***Warning: Chi-square may not be valid due to bins with zero variance
            in spectrum number(s): 1
 Current data and model not fit yet.
The first thing to note is that the default statistics are not correct for these data. For Poisson data with no background we should use cstat as the fit statistic and pchi (Pearson chi-squared) as the test statistic:
XSPEC12>statistic cstat
Default fit statistic is set to: C-Statistic
This will apply to all current and newly loaded spectra.

Fit statistic  : C-Statistic                  513.63     using 1024 bins.

Test statistic : Chi-Squared                  350.95     using 1024 bins.
 Null hypothesis probability of 1.00e+00 with 1020 degrees of freedom
***Warning: Chi-square may not be valid due to bins with zero variance
            in spectrum number(s): 1
 Current data and model not fit yet.
XSPEC12>statistic test pchi
Default test statistic is set to: Pearson Chi-Squared
This will apply to all current and newly loaded spectra.

Fit statistic  : C-Statistic                  513.63     using 1024 bins.

Test statistic : Pearson Chi-Squared          639.35     using 1024 bins.
 Null hypothesis probability of 1.00e+00 with 1020 degrees of freedom
***Warning: Pearson Chi-square may not be valid due to bins with zero model value
            in spectrum number(s): 1
 Current data and model not fit yet.
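For reference, both statistics are simple sums over channels. A pure-Python sketch of the formulas (the zero-count handling below follows the usual Cash-statistic convention; consult the XSPEC statistics appendix for the exact definitions XSPEC uses):

```python
import math

def cstat(data, model):
    """C-statistic for Poisson data: 2 * sum(m - d + d*ln(d/m)),
    with zero-count channels contributing 2*m."""
    total = 0.0
    for d, m in zip(data, model):
        total += m - d + (d * math.log(d / m) if d > 0 else 0.0)
    return 2.0 * total

def pearson_chi2(data, model):
    """Pearson chi-squared: sum((d - m)^2 / m); requires m > 0,
    hence the warning above about bins with zero model value."""
    return sum((d - m) ** 2 / m for d, m in zip(data, model))
```

Both statistics are zero when the model predicts every channel exactly, and both grow as the model and data diverge; the difference is that the C-statistic is built from the Poisson likelihood, so it remains valid at low counts where chi-squared does not.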
As the warning messages indicate, we need to ignore some channels where there is no effective response. A plot of the data and model shows that we should ignore energies below 0.15 keV and above 10 keV:
XSPEC12>ignore **-0.15 10.0-**
    11 channels (1-11) ignored in spectrum #     1
   340 channels (685-1024) ignored in spectrum #     1

Fit statistic  : C-Statistic                  510.55     using 673 bins.

Test statistic : Pearson Chi-Squared          635.19     using 673 bins.
 Null hypothesis probability of 8.22e-01 with 1020 degrees of freedom
 Current data and model not fit yet.
We assume that the Galactic column is known, so we freeze the first parameter. We then perform a fit, followed by the error command:
XSPEC12>freeze 1
...
XSPEC12>fit
...
XSPEC12>parallel error 3
XSPEC12>err 2 4 6
 Parameter   Confidence Range ( 2.706)
     2      1.16302      5.64805    (-2.00255,2.48247)
     4      1.73345      1.95111    (-0.106137,0.111521)
     6   0.00126229   0.00221906   (-0.000397759,0.000559019)
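The 2.706 printed by the error command is the change in fit statistic corresponding to 90% confidence for a single parameter. It comes from the chi-squared distribution with one degree of freedom, whose CDF has the closed form erf(sqrt(x/2)), so the value can be recovered by a simple numerical inversion:

```python
import math

def chi2_1dof_cdf(x):
    # Chi-squared CDF for one degree of freedom: erf(sqrt(x/2)).
    return math.erf(math.sqrt(x / 2.0))

def delta_stat_for_confidence(conf=0.90):
    """Solve chi2_1dof_cdf(x) = conf by bisection."""
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if chi2_1dof_cdf(mid) < conf:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Evaluating delta_stat_for_confidence(0.90) gives approximately 2.706, the threshold the error command climbs to on either side of the best fit.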
Note that our input parameters do not lie within the 90% confidence errors; however, since this will happen one time in ten (by definition), it should not worry us unduly. For a real observing proposal we would likely repeat this experiment with different input values of the intrinsic absorption to learn how well we could constrain it given a range of possible actual values.
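The "one time in ten" point can be checked with a standalone Monte Carlo. Nothing here is XSPEC-specific: we use a hypothetical measurement of a single Poisson rate with a Gaussian-approximated 90% interval, standing in for the full fit-and-error procedure.

```python
import math
import random

def coverage_of_90pc_interval(mu_true=100.0, n_sims=2000, seed=1):
    """Simulate many fake 'observations' of a Poisson rate and count how
    often the true value lies inside an approximate 90% interval
    (k +/- 1.645*sqrt(k), valid for large counts)."""
    rng = random.Random(seed)
    limit = math.exp(-mu_true)
    hits = 0
    for _ in range(n_sims):
        # Draw one Poisson count (Knuth's algorithm).
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        half_width = 1.645 * math.sqrt(k)
        if k - half_width <= mu_true <= k + half_width:
            hits += 1
    return hits / n_sims
```

Run with the defaults, the coverage comes out near 0.90: the true value escapes the interval roughly one time in ten, which is exactly how the error command's 90% intervals should behave over repeated fake datasets.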