XSPEC has the very useful facility of allowing models to be fitted simultaneously to more than one data file. It is even possible to group files together and to fit different models simultaneously. Reasons for fitting in this manner include:

- *The same target is observed at several epochs but, although the source stays constant, the response matrix has changed.* When this happens, the data files cannot be added together; they have to be fitted separately. Fitting the data files simultaneously yields tighter constraints.
- *The same target is observed with different instruments.* The GIS and SIS on *ASCA*, for example, observe in the same direction simultaneously. As far as XSPEC is concerned, this is just like the previous case: two data files with two responses fitted simultaneously with the same model.
- *Different targets are observed, but the user wants to fit the same model to each data file, with some parameters shared and some allowed to vary separately.* For example, if you have a series of spectra from a variable AGN, you might want to fit them simultaneously with a model that has the same, common photon index but allows the normalization and absorption to vary separately.

Other scenarios are possible; the important thing is to recognize the flexibility of XSPEC in this regard.
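The core idea, that datasets which cannot be co-added can still share model parameters in a joint fit, can be sketched outside XSPEC. The following is a minimal, hypothetical illustration in plain Python (not XSPEC code): two fake datasets share one parameter (a slope) while each keeps its own normalization, and the joint chi-squared is minimized over the shared parameter.

```python
# Sketch: joint fit of two datasets sharing one parameter (the slope),
# each with its own offset. For a linear model the best offset at a
# given slope has a closed form, so we only search over the slope.

def joint_chi2(slope, datasets):
    """Total chi-squared over all datasets, profiling out each offset."""
    total = 0.0
    for xs, ys, sigma in datasets:
        # best-fit offset for this dataset at the given shared slope
        offset = sum(y - slope * x for x, y in zip(xs, ys)) / len(xs)
        total += sum(((y - (slope * x + offset)) / sigma) ** 2
                     for x, y in zip(xs, ys))
    return total

# Two fake "observations" of the same source with different offsets
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
d1 = (xs, [2.1, 4.0, 5.9, 8.1, 10.0], 0.1)   # slope ~2, offset ~0
d2 = (xs, [3.0, 5.1, 6.9, 9.0, 11.1], 0.1)   # slope ~2, offset ~1

# Coarse grid search over the shared slope
best = min((joint_chi2(s / 100.0, [d1, d2]), s / 100.0)
           for s in range(150, 251))
print("best shared slope =", best[1])   # -> 2.0
```

Because both datasets constrain the same slope, the joint minimum is tighter than either dataset alone would give; this is exactly the benefit of simultaneous fitting described above.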

As an example of the first case, we'll fit two spectra derived from two
separate *Einstein* Solid State Spectrometer (SSS) observations of
the cooling-flow cluster Abell 496. Although the two observations were
carried out on consecutive days (in August 1979), the response is
different, due to the variable build-up of ice on the detector. This
problem bedeviled analysis during the mission; however, it has now been
calibrated successfully and is incorporated into the response matrices
associated with the spectral files in the HEASARC archive. The SSS also
provides an example of how background correction files are used in
XSPEC.

To fit the same model with the same set of parameters to more than one
data file, simply enter the names of the data files after the `data`
command:

```
XSPEC> data sa496b.pha sa496c.pha
 Net count rate (cts/s) for file 1  0.7806  +/- 9.3808E+05 ( 86.9% total)
   using response (RMF) file...  sa496b.rsp
   using background file...      sa496b.bck
   using correction file...      sa496b.cor
 Net count rate (cts/s) for file 2  0.8002  +/- 9.3808E+05 ( 86.7% total)
   using response (RMF) file...  sa496c.rsp
   using background file...      sa496c.bck
   using correction file...      sa496c.cor
 Net correction flux for file 1 = 8.4469E-04
 Net correction flux for file 2 = 8.7577E-04
  2 data sets are in use
```

As the messages indicate, XSPEC has also read in the associated:

- response files (`sa496b.rsp` & `sa496c.rsp`),
- background files (`sa496b.bck` & `sa496c.bck`), and
- correction files (`sa496b.cor` & `sa496c.cor`).

These files are all listed in the headers of the data files
(`sa496b.pha` & `sa496c.pha`).

To ignore channels, the file number (1 & 2 in this example) precedes the range of channels to be ignored. Here, we wish to ignore, for both files, channels 1-15 and channels 100-128. This can be done by specifying the files one after the other with the range of ignored channels:

```
XSPEC> ignore 1:1-15 1:100-128 2:1-15 2:100-128
 Chi-Squared =       1933.559     using   168 PHA bins.
 Reduced chi-squared =       11.79000     for    164 degrees of freedom
 Null hypothesis probability = 0.
```

or by specifying the range of file numbers together with the channel ranges:

```
XSPEC> ignore 1-2:1-15 100-128
```

In this example, we'll fit a cooling-flow model under the reasonable assumption that the small SSS field of view sees mostly just the cool gas in the middle of the cluster. We'll freeze the values of the maximum temperature (the temperature from which the gas cools) and of the abundance at values found by instruments with wider fields of view:

```
XSPEC> mo pha(cflow)
  Model:  phabs[1]( cflow[2] )
 Input parameter value, delta, min, bot, top, and max values for ...
 Current: 1      0.001   0       0       1E+05   1E+06   phabs:nH>0.045
 Current: 0      0.01   -5      -5       5       5       cflow:slope>0,0
 Current: 0.1    0.001   0.0808  0.0808  79.9    79.9    cflow:lowT>0.1,0
 Current: 4      0.001   0.0808  0.0808  79.9    79.9    cflow:highT>4,0
 Current: 1      0.01    0       0       5       5       cflow:Abundanc>0.5,0
 Current: 0     -0.1     0       0       100     100     cflow:Redshift>0.032
 Current: 1      0.01    0       0       1E+24   1E+24   cflow:norm>100
 ---------------------------------------------------------------------------
  Model:  phabs[1]( cflow[2] )
  Model  Fit  Model  Component  Parameter  Unit    Value
  par    par  comp
    1     1     1    phabs      nH        10^22   4.5000E-02  +/-  0.
    2     2     2    cflow      slope              0.          frozen
    3     3     2    cflow      lowT      keV     0.1000       frozen
    4     4     2    cflow      highT     keV      4.000       frozen
    5     5     2    cflow      Abundanc          0.5000       frozen
    6     6     2    cflow      Redshift        3.2000E-02     frozen
    7     7     2    cflow      norm               100.0     +/-  0.
 ---------------------------------------------------------------------------
 Chi-Squared =       2740.606     using   168 PHA bins.
 Reduced chi-squared =       16.50968     for    166 degrees of freedom
 Null hypothesis probability = 0.
XSPEC> fit
 Chi-Squared  Lvl  Fit param # 1      2       3       4       5       6           7
    414.248   -3    0.2050    0.     0.1000  4.000   0.5000  3.2000E-02  288.5
    373.205   -4    0.2508    0.     0.1000  4.000   0.5000  3.2000E-02  321.9
    372.649   -5    0.2566    0.     0.1000  4.000   0.5000  3.2000E-02  325.9
    372.640   -6    0.2574    0.     0.1000  4.000   0.5000  3.2000E-02  326.3
 ---------------------------------------------------------------------------
 Variances and Principal axes :
        1        7
  3.55E-05 | -1.00  0.00
  3.52E+01 |  0.00 -1.00
 ---------------------------------------------------------------------------
  Model:  phabs[1]( cflow[2] )
  Model  Fit  Model  Component  Parameter  Unit    Value
  par    par  comp
    1     1     1    phabs      nH        10^22   0.2574     +/-  0.9219E-02
    2     2     2    cflow      slope              0.          frozen
    3     3     2    cflow      lowT      keV     0.1000       frozen
    4     4     2    cflow      highT     keV      4.000       frozen
    5     5     2    cflow      Abundanc          0.5000       frozen
    6     6     2    cflow      Redshift        3.2000E-02     frozen
    7     7     2    cflow      norm               326.3     +/-  5.929
 ---------------------------------------------------------------------------
 Chi-Squared =       372.6400     using   168 PHA bins.
 Reduced chi-squared =       2.244819     for    166 degrees of freedom
 Null hypothesis probability = 6.535E-18
```
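The statistics printed after each fit are related in a simple way; as a quick check, with the numbers from the fit above (plain Python, not XSPEC):

```python
# Degrees of freedom and reduced chi-squared as XSPEC reports them:
# dof = (number of noticed PHA bins) - (number of free parameters).
bins = 168          # noticed channels across both files
free_params = 2     # nH and the cflow norm; the other parameters are frozen
dof = bins - free_params
chi2 = 372.6400
reduced = chi2 / dof
print(dof, round(reduced, 6))   # -> 166 2.244819
```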

As we can see, the fit is not good, but the high value of chi-squared could
be because we have
yet to adjust the correction file. Correction files in XSPEC take
into account detector features that cannot be completely prescribed *ab initio* and which must be fitted at the same time as the model. *Einstein* SSS spectra, for example, have a background feature the
level of which varies unpredictably. Its spectral form is contained in
the correction file, but its normalization is determined by fitting.
This fitting is set in motion using the command `recornrm` (*reset the correction-file normalization*):

```
XSPEC> reco 1
  File #    Correction
     1        0.4118  +/-  0.0673
 After correction norm adjustment  0.412 +/- 0.067
 Chi-Squared =       335.1577     using   168 PHA bins.
 Reduced chi-squared =       2.019022     for    166 degrees of freedom
 Null hypothesis probability = 1.650E-13
XSPEC> reco 2
  File #    Correction
     2        0.4864  +/-  0.0597
 After correction norm adjustment  0.486 +/- 0.060
 Chi-Squared =       268.8205     using   168 PHA bins.
 Reduced chi-squared =       1.619400     for    166 degrees of freedom
 Null hypothesis probability = 7.552E-07
```
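What `recornrm` does can be viewed as a one-parameter linear fit: the correction spectrum has a fixed shape, and only its overall normalization is adjusted against the data minus the model. A minimal illustration with made-up numbers (plain Python, not XSPEC internals):

```python
# Sketch of a correction-norm fit: solve data = model + alpha * corr
# for the single linear parameter alpha by weighted least squares.

def recorrect(data, model, corr, sigma):
    """Best-fit normalization of a fixed correction shape."""
    num = sum(c * (d - m) / s**2
              for d, m, c, s in zip(data, model, corr, sigma))
    den = sum((c / s)**2 for c, s in zip(corr, sigma))
    return num / den

# Made-up spectra in four channels
data  = [10.0, 12.0, 9.0, 11.0]
model = [ 9.0, 10.5, 8.0, 10.0]
corr  = [ 2.0,  3.0, 2.0,  2.0]   # fixed spectral shape of the feature
sigma = [ 1.0,  1.0, 1.0,  1.0]
alpha = recorrect(data, model, corr, sigma)
print(alpha)   # -> 0.5 for these numbers
```

Because alpha is linear, this step is exact given the current model; it is the interplay with the nonlinear model parameters that forces the iteration described next.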

This process is iterative and, in order to work, must be used in tandem with fitting the model. Successive fits and recorrections are applied until the fit is stable, i.e., until no further improvement in chi-squared results. Of course, this procedure is only worthwhile when the model gives a reasonably good account of the data. Eventually, we end up at:

```
XSPEC> fit
 Chi-Squared  Lvl  Fit param # 1      2       3       4       5       6           7
    224.887   -3    0.2804    0.     0.1000  4.000   0.5000  3.2000E-02  313.0
    224.792   -4    0.2835    0.     0.1000  4.000   0.5000  3.2000E-02  314.5
    224.791   -5    0.2837    0.     0.1000  4.000   0.5000  3.2000E-02  314.6
 ---------------------------------------------------------------------------
 Variances and Principal axes :
        1        7
  4.64E-05 | -1.00  0.00
  3.78E+01 |  0.00 -1.00
 ---------------------------------------------------------------------------
  Model:  phabs[1]( cflow[2] )
  Model  Fit  Model  Component  Parameter  Unit    Value
  par    par  comp
    1     1     1    phabs      nH        10^22   0.2837     +/-  0.1051E-01
    2     2     2    cflow      slope              0.          frozen
    3     3     2    cflow      lowT      keV     0.1000       frozen
    4     4     2    cflow      highT     keV      4.000       frozen
    5     5     2    cflow      Abundanc          0.5000       frozen
    6     6     2    cflow      Redshift        3.2000E-02     frozen
    7     7     2    cflow      norm               314.6     +/-  6.147
 ---------------------------------------------------------------------------
 Chi-Squared =       224.7912     using   168 PHA bins.
 Reduced chi-squared =       1.354164     for    166 degrees of freedom
 Null hypothesis probability = 1.616E-03
```

The final value of chi-squared is much better than the original, but is still not quite acceptable. However, the current model has only two free parameters; further exploration of parameter space would undoubtedly improve the fit.

We'll leave this example and move on to look at another kind of
simultaneous fitting: one where the same model is fitted to two
different data files. This time, not all the parameters will be
identical. The massive X-ray binary Centaurus X-3 was observed with the
LAC on *Ginga* in 1989. Its flux level before eclipse was much lower
than the level after eclipse. Here, we'll use XSPEC to see whether
spectra from these two phases can be fitted with the same model, which
differs only in the amount of absorption. This kind of fitting relies on
introducing an extra dimension, the *group*, to the indexing of the
data files. The files in each group share the same model but not
necessarily the same parameter values, which may be shared as common to
all the groups or varied separately from group to group. Although each
group may contain more than one file, there is only one
file in each of the two groups in this example. Groups are specified
with the `data` command, with the group number preceding the file
number, like this:

```
XSPEC> da 1:1 losum 2:2 hisum
 Net count rate (cts/s) for file 1  140.1  +/- 0.3549
   using response (RMF) file...  ginga_lac.rsp
 Net count rate (cts/s) for file 2  1371.  +/- 3.123
   using response (RMF) file...  ginga_lac.rsp
  2 data sets are in use
```

Here, the first group comprises the single file `losum.pha`, which contains
the spectrum of all the low, pre-eclipse emission. The second group
comprises the second file, `hisum.pha`, which contains all the high,
post-eclipse emission. Note that the file number is "absolute" in the sense
that it is independent of group number. Thus, if there were three files
in each of the two groups (`lo1.pha`, `lo2.pha`, `lo3.pha`,
`hi1.pha`, `hi2.pha` & `hi3.pha`, say), rather than one,
the six files would be specified as

`da 1:1 lo1 1:2 lo2 1:3 lo3 2:4 hi1 2:5 hi2 2:6 hi3`
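The bookkeeping implied here, absolute file numbers plus a separate group index, can be sketched with made-up file names (plain Python, not XSPEC):

```python
# Sketch: each spec is (group, absolute file number, file name).
# File numbers run 1..6 regardless of group; the group is an extra index.
specs = [(1, 1, "lo1"), (1, 2, "lo2"), (1, 3, "lo3"),
         (2, 4, "hi1"), (2, 5, "hi2"), (2, 6, "hi3")]

files = {filenum: name for _, filenum, name in specs}   # absolute lookup
groups = {}
for group, filenum, _ in specs:
    groups.setdefault(group, []).append(filenum)

print(files[4], groups[2])   # -> hi1 [4, 5, 6]
```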

The `ignore` command works, as usual, on file number, and does not
take group number into account. So, to ignore channels 1-3 and
37-48 of both files:

```
XSPEC> ignore 1-2:1-3 37-48
```
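As an aside, the `files:channels` syntax can be mimicked by a small parser; the following is a sketch with hypothetical helper names (plain Python, not XSPEC's actual parser):

```python
# Sketch: expand ignore specs such as "1-2:1-3" into explicit
# (file, channel) pairs.

def parse_range(text):
    """'1-3' -> range(1, 4); a single number is a one-element range."""
    lo, _, hi = text.partition("-")
    return range(int(lo), int(hi or lo) + 1)

def expand(spec):
    """'files:channels' -> set of (file, channel) pairs."""
    files, _, chans = spec.partition(":")
    return {(f, c) for f in parse_range(files) for c in parse_range(chans)}

ignored = expand("1-2:1-3") | expand("1-2:37-48")
print(len(ignored))   # -> 30: (3 + 12) channels in each of 2 files
```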

The model we'll use at first to fit the two files is an absorbed power law with a high-energy cut-off:

```
XSPEC> mo phabs * highecut (po)
```

After defining the model, the user is prompted for two sets of parameter
values, one for the first group of data files (`losum.pha`), the
other for the second group (`hisum.pha`). Here, we'll enter the
absorption column of the first group as 10^24 cm^-2 (i.e., 100 in the
units of 10^22 cm^-2 that XSPEC uses) and enter
the default values for all the other parameters in the first group. Now,
when it comes to the second group of parameters, we enter a column of
10^22 cm^-2 and then enter defaults for the other
parameters. The rule being applied here is as follows: to tie parameters in the
second group to their equivalents in the first group, take the default
when entering the second-group parameters; to allow parameters in the second
group to vary independently of their equivalents in the first group,
enter different values explicitly:

```
XSPEC> mo phabs*highecut(po)
  Model:  phabs[1]*highecut[2]( powerlaw[3] )
 Input parameter value, delta, min, bot, top, and max values for ...
 Current: 1      0.001   0       0       1E+05   1E+06   DataGroup 1:phabs:nH>100
 Current: 10     0.01    0.0001  0.01    1E+06   1E+06   DataGroup 1:highecut:cutoffE>/
 Current: 15     0.01    0.0001  0.01    1E+06   1E+06   DataGroup 1:highecut:foldE>/
 Current: 1      0.01   -3      -2       9       10      DataGroup 1:powerlaw:PhoIndex>/
 Current: 1      0.01    0       0       1E+24   1E+24   DataGroup 1:powerlaw:norm>/
 Current: 100    0.001   0       0       1E+05   1E+06   DataGroup 2:phabs:nH>1
 Current: 10     0.01    0.0001  0.01    1E+06   1E+06   DataGroup 2:highecut:cutoffE>/*
 ---------------------------------------------------------------------------
  Model:  phabs[1]*highecut[2]( powerlaw[3] )
  Model  Fit  Model  Component  Parameter  Unit    Value             Data
  par    par  comp                                                   group
    1     1     1    phabs      nH        10^22   100.0   +/-  0.      1
    2     2     2    highecut   cutoffE   keV     10.00   +/-  0.      1
    3     3     2    highecut   foldE     keV     15.00   +/-  0.      1
    4     4     3    powerlaw   PhoIndex          1.000   +/-  0.      1
    5     5     3    powerlaw   norm              1.000   +/-  0.      1
    6     6     4    phabs      nH        10^22   1.000   +/-  0.      2
    7     2     5    highecut   cutoffE   keV     10.00   = par 2      2
    8     3     5    highecut   foldE     keV     15.00   = par 3      2
    9     4     6    powerlaw   PhoIndex          1.000   = par 4      2
   10     5     6    powerlaw   norm              1.000   = par 5      2
 ---------------------------------------------------------------------------
 Chi-Squared =      2.0263934E+07  using    66 PHA bins.
 Reduced chi-squared =       337732.2     for     60 degrees of freedom
 Null hypothesis probability = 0.
```

Notice how the summary of the model, displayed immediately above, is
different now that we have two groups, as opposed to one (as in all the
previous examples). We can see that of the 10 model parameters, 6 are
free (i.e., 4 of the second group parameters are tied to their
equivalents in the first group). Fitting this model results in a huge
chi-squared (not shown here), because our assumption that only a change in
absorption can account for the spectral variation before and after
eclipse is clearly wrong. Perhaps scattering also plays a role in
reducing the flux before eclipse. This could be modeled (simply at
first) by allowing the normalization of the power law to be smaller
before eclipse than after eclipse. To decouple tied parameters, we
change the parameter value in the second group to a value (any value)
different from that in the first group; changing the value in the first
group has the effect of changing both without decoupling them. As
usual, the `newpar` command is used:

```
XSPEC> newpar 10 1
   7 variable fit parameters
 Chi-Squared =      2.0263934E+07  using    66 PHA bins.
 Reduced chi-squared =       343456.5     for     59 degrees of freedom
 Null hypothesis probability = 0.
XSPEC> fit
 ...
 ---------------------------------------------------------------------------
  Model:  phabs[1]*highecut[2]( powerlaw[3] )
  Model  Fit  Model  Component  Parameter  Unit    Value                  Data
  par    par  comp                                                        group
    1     1     1    phabs      nH        10^22   20.23   +/- 0.1823       1
    2     2     2    highecut   cutoffE   keV     14.68   +/- 0.5552E-01   1
    3     3     2    highecut   foldE     keV     7.430   +/- 0.8945E-01   1
    4     4     3    powerlaw   PhoIndex          1.187   +/- 0.6505E-02   1
    5     5     3    powerlaw   norm          5.8958E-02  +/- 0.9334E-03   1
    6     6     4    phabs      nH        10^22   1.270   +/- 0.3762E-01   2
    7     2     5    highecut   cutoffE   keV     14.68   = par 2          2
    8     3     5    highecut   foldE     keV     7.430   = par 3          2
    9     4     6    powerlaw   PhoIndex          1.187   = par 4          2
   10     7     6    powerlaw   norm             0.3123   +/- 0.4513E-02   2
 ---------------------------------------------------------------------------
 Chi-Squared =       15424.73     using    66 PHA bins.
 Reduced chi-squared =       261.4362     for     59 degrees of freedom
 Null hypothesis probability = 0.
```

After fitting, this decoupling reduces chi-squared by a factor of six to 15,425, but this is still far too high. Indeed, this simple attempt to account for the spectral variability in terms of "blanket" cold absorption and scattering does not work. More sophisticated models, involving additional components and partial absorption, should be investigated.
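The tie-and-decouple bookkeeping used throughout this example can be sketched as a table of parameters in which a tied entry stores a link to another parameter, and freeing it replaces the link with a value. A minimal illustration (plain Python, not XSPEC internals), using the parameter numbering from the Cen X-3 model above:

```python
# Sketch: group-2 parameters 7-10 start tied ("=", target) to their
# group-1 equivalents 2-5; parameters 1-6 hold their own values.
params = {1: 100.0, 2: 10.0, 3: 15.0, 4: 1.0, 5: 1.0,
          6: 1.0, 7: ("=", 2), 8: ("=", 3), 9: ("=", 4), 10: ("=", 5)}

def value(params, i):
    """Follow tie links until a real value is reached."""
    v = params[i]
    return value(params, v[1]) if isinstance(v, tuple) else v

free = [i for i, v in params.items() if not isinstance(v, tuple)]
print(len(free))    # -> 6 free parameters while the norms are tied

params[10] = 1.0    # like "newpar 10 1": decouple the group-2 norm
free = [i for i, v in params.items() if not isinstance(v, tuple)]
print(len(free))    # -> 7 free parameters after decoupling
```

Note that assigning any explicit value to parameter 10 breaks the link, whereas changing parameter 5 would have moved both norms together, mirroring the behavior described above.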