10. An RGS Data Processing and Analysis Primer (Command Line)

Many files are associated with an RGS dataset, and it is easy to be overwhelmed. The INDEX.HTM file, and links therein, are viewable with a web browser and will help you navigate the dataset. The different types of files are discussed in more detail in Chapter 1.

As ever, it is strongly recommended that you keep all reprocessed data in its own directory! SAS places output files in the directory from which a task is called. Throughout this primer, it is assumed that the Pipeline Processed data are in the PPS directory, the ODF data (with upper-case file names, and uncompressed) are in the ODF directory, the reprocessing and analysis are taking place in the PROC directory, and the CCF data are in the CCF directory.

If you have just received your data from the SOC, it has been processed with the most recent version of SAS, and you should not need to repipeline it (though no harm is done if you do); you need only to gunzip the files and prepare the data for processing (see §5). However, it is very likely that you will want to filter your data; in this case, you will need to reprocess it in order to determine the appropriate filters. Therefore, we recommend that you rerun the pipeline regardless of the age of your dataset.

But if you decide that reprocessing is unnecessary, you need only gunzip the files and rename the event files for easier handling. For example, for the RGS1 event list,

cp PPS/PiiiiiijjkkR1lEVENLInmmm.FTZ PROC/r1_evt.fits

where

iiiiiijjkk - observation number
l - scheduled (S) or unscheduled (U) observation
n - spectral order number
mmm - source number
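
For example, for the Mkn 421 observation used later in this chapter (ObsID 0153950701), and assuming there is only a single RGS1 event list in the PPS directory (so that the wildcard matches one file), the copy might look like this sketch:

# the exact PPS file name will differ from dataset to dataset; adjust the pattern as needed
cp PPS/P0153950701R1*EVENLI*.FTZ PROC/r1_evt.fits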

As noted in Tables 3.2 and 3.3, you can view images of your data. While the zipped FITS files may need to be unzipped before display in ds9 (depending on the version of ds9), they can be displayed while still zipped using fv. As usual, there are some HTML products to help you inspect the data. These have file names of the form:

PPiiiiiijjkkAAAAAA000_0.HTM

where

iiiiiijjkk - observation number
jj - target number in the proposal
kk - observation number for that target
AAAAAA - group ID (see Table 3.2)

You will find a variety of RGS-specific files in XMM-Newton data sets. Generally there are two of each because there are two RGS instruments. Table 3.3 lists typical file names, their purpose, the file format, and a list of tools that will enable the user to inspect their data. Remember that the INDEX.HTM file will help you navigate.

Various analysis procedures are demonstrated using the Mkn 421 dataset, ObsID 0153950701. The following procedures are applicable to all XMM-Newton datasets, so it is not required that you use this particular dataset; any observation should be sufficient.


10.1 Rerun the Pipeline on the Command Line

We assume that the data have been prepared and the environment variables set according to §5. In the window where SAS was initialized, and from your processing directory PROC, run the task rgsproc:

rgsproc orders='1 2' bkgcorrect=no withmlambdacolumn=yes spectrumbinning=lambda

where

orders - dispersion orders to extract
bkgcorrect - subtract background from source spectra?
withmlambdacolumn - include a wavelength column in the event file product
spectrumbinning - accumulate the spectrum either in wavelength or beta space

Note the last keyword, spectrumbinning. If you want to merge data from the same orders in RGS1 and RGS2, keep it at the default value lambda. If you want to merge data from the same instrument, with different orders, set it to beta. Merging spectra is discussed in §10.6.

This takes several minutes, and outputs 12 files per RGS, plus 3 general use FITS files. At this point, renaming files to something easy to type is a good idea.

ln -s *R1*EVENLI*FIT r1_evt1.fits
ln -s *R2*EVENLI*FIT r2_evt1.fits


10.1.1 Potentially useful tips for using the pipeline

The pipeline task, rgsproc, is very flexible and can address potential pitfalls for RGS users. In §10.1, we used a simple set of parameters with the task; if this is sufficient for your data (and it should be for most), feel free to skip to later sections, where data filters are discussed. In the following subsections, we will look at the cases of a nearby bright optical source, a nearby bright X-ray source, and a user-defined source.


10.1.2 A Nearby Bright Optical Source

With certain pointing angles, zeroth-order optical light may be reflected off the telescope optics and cast onto the RGS CCD detectors. If this falls on an extraction region, the current energy calibration will require a wavelength-dependent zero-offset. Stray light can be detected on RGS DIAGNOSTIC images taken before, during and after the observation. This test, and the offset correction, are not performed on the data before delivery. Please note that this will not work in every case. If a source is very bright, the diagnostic data that this relies on may not have been downloaded from the telescope in order to save bandwidth. Also, the RGS target itself cannot be the source of optical photons, as the spectrum's zero-order falls far from the RGS chip array. To check for stray light and apply the appropriate offsets, type

rgsproc orders='1 2' bkgcorrect=no calcoffsets=yes withoffsethistogram=no

where the parameters are as described in §10.1 and

calcoffsets - calculate PHA offsets from diagnostic images
withoffsethistogram - produce a histogram of uncalibrated excess for the user


10.1.3 A Nearby Bright X-ray Source

In the example above, it is assumed that the field around the source contains sky only. Provided a bright background source is well-separated from the target in the cross-dispersion direction, a mask can be created that excludes it from the background region. Here the source has been identified in the EPIC images and its coordinates have been taken from the EPIC source list which is included among the pipeline products. The bright neighboring object is found to be the third source listed in the sources file. The first source is the target:

rgsproc orders='1 2' bkgcorrect=no withepicset=yes \
   epicset=PiiiiiijjkkaablllEMSRLInmmm.FTZ exclsrcsexpr='INDEX==1||INDEX==3'

where the parameters are as described in §10.1 and

withepicset - calculate extraction regions for the sources contained in an EPIC source list
epicset - name of the EPIC source list, such as one generated by the emldetect or eboxdetect tasks
exclsrcsexpr - expression to identify which source(s) should be excluded from the background extraction region


10.1.4 User-defined Source Coordinates

If the true coordinates of an object are not included in the EPIC source list or the science proposal, the user can define the coordinates of a new source by typing:

rgsproc orders='1 2' bkgcorrect=no withsrc=yes srclabel=Mkn421 srcstyle=radec \
   srcra=166.113808 srcdec=+38.208833

where the parameters are as described in §10.1 and

withsrc - use a user-defined source position
srclabel - source name
srcstyle - coordinate system in which the source position is defined
srcra - the source's right ascension in decimal degrees
srcdec - the source's declination in decimal degrees

Since the event files are now up to date, we can proceed with some simple analysis demonstrations, which will allow us to generate filters. Remember that all tasks should be called from the window where SAS was initiated, and that tasks place their output files in whatever directory you are in when they are called.


10.2 Create and Display an Image

Two commonly made plots are those showing PI vs. BETA_CORR (also known as "banana plots") and XDSP_CORR vs. BETA_CORR.

To create images on the command line, type

evselect table=r1_evt1.fits:EVENTS withimageset=yes \
   imageset=pi_bc.fits xcolumn=BETA_CORR ycolumn=PI \
   imagebinning=imageSize ximagesize=600 yimagesize=600

where

table - input event table
withimageset - make an image
imageset - name of output image
xcolumn - event column for X axis
ycolumn - event column for Y axis
imagebinning - form of binning, force entire image into a given size or bin by a specified number of pixels
ximagesize - output image pixels in X
yimagesize - output image pixels in Y

Plots comparing BETA_CORR to XDSP_CORR may be made in a similar way. The output files can be viewed by using a standard FITS display, such as ds9 (see Figure 10.1).
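
For instance, the right-hand panel of Figure 10.1 can be reproduced with a call like the sketch below; the output name xc_bc.fits is only a suggestion:

# same parameters as above, with the Y column swapped; output name is arbitrary
evselect table=r1_evt1.fits:EVENTS withimageset=yes \
   imageset=xc_bc.fits xcolumn=BETA_CORR ycolumn=XDSP_CORR \
   imagebinning=imageSize ximagesize=600 yimagesize=600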

Figure 10.1: Plots of PI vs. BETA_CORR (left) and XDSP vs. BETA_CORR (right).



10.3 Create and Display a Light Curve

The background is assessed through examination of the light curve. We will extract the light curve from CCD9, the chip most susceptible to proton events; it also generally records the fewest source events because of its location closest to the optical axis. Also, to avoid mistaking solar flares for source variability, a region filter that removes the source from the final event list should be used. The region filters are kept in the source list product *SRCLI_*.FIT.

Please be aware that with SAS 13, the *SRCLI_*.FIT file's column information changed. rgsproc now outputs an M_LAMBDA column instead of BETA_CORR, and M_LAMBDA should be used to generate the light curve. (The *SRCLI_*.FIT file that came with the PPS products still contains a BETA_CORR column if you prefer to use that instead.)

To create a light curve, type

evselect table=r1_evt1.fits withrateset=yes rateset=r1_ltcrv.fits \
   maketimecolumn=yes timebinsize=100 makeratecolumn=yes \
   expression='(CCDNR==9)&&(REGION(P0153950701R1S001SRCLI_0000.FIT:RGS1_BACKGROUND,M_LAMBDA,XDSP_CORR))'

where

table - input event table
withrateset - make a light curve
rateset - name of output light curve file
maketimecolumn - control to create a time column
timebinsize - time binning (seconds)
makeratecolumn - control to create a count rate column, otherwise a count column will be created
expression - filtering criteria
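
The RGS2 light curve can be made in exactly the same way. A minimal sketch, assuming the RGS2 source list is named P0153950701R2S002SRCLI_0000.FIT and contains a region called RGS2_BACKGROUND (check the actual file and extension names in your own dataset):

# source-list name and RGS2_BACKGROUND region are assumptions; verify against your *SRCLI_*.FIT
evselect table=r2_evt1.fits withrateset=yes rateset=r2_ltcrv.fits \
   maketimecolumn=yes timebinsize=100 makeratecolumn=yes \
   expression='(CCDNR==9)&&(REGION(P0153950701R2S002SRCLI_0000.FIT:RGS2_BACKGROUND,M_LAMBDA,XDSP_CORR))'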

The output file r1_ltcrv.fits can be viewed using dsplot:

dsplot table=r1_ltcrv.fits x=TIME y=RATE &

where

table - input event table
x - column for plotting on the X axis
y - column for plotting on the Y axis
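
If you prefer, the same table can also be opened and plotted interactively with fv (mentioned earlier in this chapter), for example:

fv r1_ltcrv.fits &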

The light curve is shown in Figure 10.2.

Figure 10.2: The event rate from the RGS1 CCD9 chip. The time units are elapsed mission time.



10.4 Generating the Good Time Interval (GTI) File

Examination of the light curve shows that there is a noisy period at the end of the observation, after 1.36975e8 seconds, where the count rate rises well above the quiescent rate of ~0.05-0.2 count/second. To remove it, we need to make an additional Good Time Interval (GTI) file and apply it by rerunning rgsproc.

There are two tasks that make a GTI file: gtibuild and tabgtigen. Either will produce the needed file, so which one to use is a matter of the user's preference. Both are demonstrated below.


10.4.1 Filter on Time with gtibuild

The first method, using gtibuild, requires a text file as input. The first two columns of this file give the start and end times (in seconds) of an interval, and the third column indicates with a + or - sign whether that interval should be kept or removed. Each good (or bad) time interval gets its own line. Comments may also be entered, provided they are preceded by a "#". In the example case, we would write in our ASCII file (named gti.txt):

1.36958e8 1.36975e8 + # The quiet part only, please.

and proceed to the SAS task gtibuild:

gtibuild file=gti.txt table=gti.fits

where

file - input text file
table - output gti table
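
If more than one interval needs to be specified, each goes on its own line. A hypothetical gti.txt mixing kept and removed intervals (the times here are invented purely for illustration) might look like:

# hypothetical example: keep two quiet stretches, remove a short flare between them
1.36958e8 1.36965e8 + # quiet
1.36965e8 1.36966e8 - # flare, remove
1.36966e8 1.36975e8 + # quiet again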


10.4.2 Filter on Time with tabgtigen

Alternatively, we could filter on time with tabgtigen, using a filtering expression built from the times noted above:

tabgtigen table=r1_ltcrv.fits gtiset=gti.fits timecolumn=TIME \
   expression='(TIME in [1.36958e8:1.36975e8])'

where

table - input data set (the light curve)
gtiset - output GTI file name
timecolumn - name of column that contains the time stamps
expression - filtering expression


10.4.3 Filter on Rate with tabgtigen

Finally, we could filter on rate using tabgtigen. The typical quiet count rate for this observation is about 0.05 ct/s, so we would have:

tabgtigen table=r1_ltcrv.fits gtiset=gti.fits expression='(RATE<=0.2)'

where

table - the lightcurve file
gtiset - output gti table
expression - the filtering criteria. Since the nominal count rate is about 0.05 count/sec, we have set the upper limit to 0.2 count/sec.


10.4.4 Apply the new GTI

Now that we have a GTI file, we can apply it to the event file by running rgsproc again. rgsproc is a complex task, running several steps, with five different entry and exit points. It is not necessary to rerun all the steps in the procedure, only the ones involving filtering.

To apply the GTI to the event file, type

rgsproc orders='1 2' auxgtitables=gti.fits bkgcorrect=no \
   withmlambdacolumn=yes entrystage=3:filter finalstage=5:fluxing

where

orders - spectral orders to be processed
auxgtitables - gti file in FITS format
bkgcorrect - subtract background from source spectra?
withmlambdacolumn - include a wavelength column in the event file product
entrystage - stage at which to begin processing
finalstage - stage at which to end processing

Since we made a soft link to the event file in §10.1, there is no need to rename it.


10.5 Creating the Response Matrices (RMFs)

Response matrices (RMFs) are now provided as part of the pipeline product package, but you might want to create your own. The task rgsproc generates a response matrix automatically, but, as noted in §10.1.4, the source coordinates are under the observer's control. The source coordinates have a profound influence on the accuracy of the wavelength scale recorded in the RMF, and each RGS instrument and each order has its own RMF.

Making the RMF is easily done with the task rgsrmfgen. Please note that, unlike with EPIC data, it is not necessary to make ancillary response files (ARFs).

To make the RMFs, type

rgsrmfgen spectrumset=P0153950701R1S001SRSPEC1001.FIT rmfset=r1_o1_rmf.fits \
   evlist=r1_evt1.fits emin=0.4 emax=2.5 rows=4000

where

spectrumset - spectrum file
evlist - event file
emin - lower energy limit of the response file
emax - upper energy limit of the response file
rows - number of energy bins; this should be greater than 3000
rmfset - output FITS file

RMFs for the RGS1 2nd order, and for the RGS2 1st and 2nd orders, are made in a similar way.
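
For example, a sketch for the RGS1 second-order RMF, assuming the pipeline named that spectrum P0153950701R1S001SRSPEC2001.FIT (check your own PPS/PROC directory for the exact name):

# spectrum file name is an assumption based on the PPS naming pattern; verify before running
rgsrmfgen spectrumset=P0153950701R1S001SRSPEC2001.FIT rmfset=r1_o2_rmf.fits \
   evlist=r1_evt1.fits emin=0.4 emax=2.5 rows=4000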

At this point, the spectra can be analyzed or combined with other spectra.


10.6 Combining Spectra

Spectra from the same order in RGS1 and RGS2 can be safely combined to create a spectrum with higher signal-to-noise if they were reprocessed using rgsproc with spectrumbinning=lambda, as we did in §10.1 (this also happens to be the default). (Spectra of different orders from one particular instrument can also be merged if they were reprocessed using rgsproc with spectrumbinning=beta.) The task rgscombine also merges response files and background spectra. When merging response files, be sure that they have the same number of bins. For this example, we assume that RMFs were made for order 1 in both RGS1 and RGS2 with rgsproc.

To merge the first order RGS1 and RGS2 spectra, type

rgscombine pha='P0153950701R1S001SRSPEC1001.FIT P0153950701R2S002SRSPEC1001.FIT' \
   rmf='P0153950701R1S001RSPMAT1001.FIT P0153950701R2S002RSPMAT1001.FIT' \
   bkg='P0153950701R1S001BGSPEC1001.FIT P0153950701R2S002BGSPEC1001.FIT' \
   filepha='r12_o1_srspec.fits' filermf='r12_o1_rmf.fits' \
   filebkg='r12_o1_bgspec.fits' rmfgrid=5000

where

pha - list of spectrum files
rmf - list of response matrices
bkg - list of background spectrum files
filepha - output merged spectrum
filermf - output merged response matrix
filebkg - output merged background spectrum
rmfgrid - number of energy bins; should be the same as the input RMFs

The spectra are ready for analysis, so skip ahead to prepare the spectrum for fitting (§14).

