Standard Pipeline Processing of an Observation with "nicerl2"

Overview

The NICER team provides a standard way to process a NICER observation. This thread describes how to perform standard processing with the task "nicerl2."

Read this thread if you want to: Perform standard analysis of a NICER observation.

Last update: 2024-02-14

Introduction

The NICER team provides a way to process a NICER observation. We call that a "pipeline" processing script because it is meant to run in a standard, consistent way for every observation, with little manual intervention required.

The task nicerl2 performs the NICER team-recommended calibration and screening steps for NICER data.

The standard pipeline processing tool for NICER data is called "nicerl2". It is included in the HEASoft software package, along with all standard NICER analysis tools.

The task nicerl2 is designed to perform "Level 2" analysis, which includes standard calibration, screening and filtering of data. The end result is an updated filter file (niNNNNNNNNNN.mkf) as well as a cleaned event list (niNNNNNNNNNN_0mpu7_cl.evt). Here, NNNNNNNNNN is the 10-digit observation ID of the dataset (we use NNNNNNNNNN = 1234567890 below as an example).

The outputs of nicerl2 are used for subsequent analysis, such as light curve and spectral extraction. Please note that these later standard product steps are considered "Level 3" and not part of nicerl2's task list. This thread is about Level 2 only.

For more information about NICER standard processing levels, please see the NICER Processing Levels thread.

Reasons to Run nicerl2

The primary reason to run nicerl2 is to take advantage of newer NICER calibration files, or improved screening and filtering algorithms.

It is worth applying nicerl2 to any observation when you know that newer calibration has become available since the observation was last processed.

When starting a project, always process NICER data using up-to-date software and calibration, even for data downloaded from the NICER archive.

Data from the NICER archive have been processed by a standard pipeline processing algorithm very similar to nicerl2. Over time, calibration and software improve, but these improvements are not regularly applied to old data in the archive. Therefore, it is always recommended to run nicerl2 on data freshly obtained from the NICER archive.

The same reasoning applies to standard screening criteria.

Finally, if you want to add columns to your filter file (niNNNNNNNNNN.mkf), for example to use a specific background model, you may need to re-run nicerl2.

Inputs

The input to "nicerl2" is a single observation dataset. As downloaded from the NICER archive, each NICER dataset is called an "observation segment" and covers no more than one calendar day.

If your scientific observation spans multiple observation segments, then you will have to run nicerl2 for each of these.

Prerequisites

Here is what is needed:

  • A single NICER observational dataset. It will have a ten-digit directory name. For this example, we will use the observation directory name 1234567890.
  • Up-to-date NICER analysis environment, including software and calibration. See the NICER Setup thread for instructions; a quick environment check is sketched below.
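
As a quick check (a minimal sketch; the NICER Setup thread has the complete setup instructions), you can confirm that the HEASoft environment and calibration database are initialized before running nicerl2:

  # Should point to your HEASoft installation (set when you initialize HEASoft)
  echo $HEADAS
  # Should point to your calibration database (CALDB) location
  echo $CALDB

If either variable is empty, revisit the NICER Setup thread before proceeding.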

Initial Steps

Change to the directory that contains your observation directory:

  cd /path/to/your/data

where /path/to/your/data is the directory containing your observation directory (1234567890 in this example).

Run nicerl2

Run nicerl2 on your observational dataset. The simplest way to run it is:

  nicerl2 indir=1234567890 clobber=YES

where

  • indir=1234567890 is the name of your observation directory
  • clobber=YES means nicerl2 will overwrite existing filter files and higher level event files

When nicerl2 has completed without errors, you should find a revised filter file as well as new "ufa" (calibrated but unscreened) and "cl" (cleaned) event files:

  • NNNNNNNNNN/auxil/niNNNNNNNNNN.mkf - filter file
  • NNNNNNNNNN/xti/event_cl/niNNNNNNNNNN_0mpuN_ufa.evt - calibrated but unscreened event files per-MPU
  • NNNNNNNNNN/xti/event_cl/niNNNNNNNNNN_0mpu7_cl.evt - cleaned and screened full array event file
These files can be used in subsequent analysis.
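
For example, for the 1234567890 observation used in this thread, a quick listing (a simple check, not a required step) confirms the outputs are in place:

  ls -l 1234567890/auxil/ni1234567890.mkf*
  ls -l 1234567890/xti/event_cl/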

Simple Scripts to Reprocess Many Observations

In many cases, users will want to reprocess many datasets at a time, rather than typing each command manually at a terminal window. We can use a simple shell (bash) script to achieve this. In this example, we will make a script that processes every observation in the current directory.

Prerequisite: all observation datasets should be in the same directory. For example:

  cd /path/to/my/data
  ls
  1234567890 1234567891 ... 1234567899

All of the observation datasets are distinguishable by a 10-digit directory name.

We create a script of the following form called nicer2-all.sh:

  #!/bin/sh
  for obsid in [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]; do
      nicerl2 indir=$obsid clobber=YES
  done

This file contains a bash "for" loop that loops through every obsid that corresponds to a 10-digit number. Note that you can add additional options to the nicerl2 command line, such as incremental=YES, cor_range="1.5-*", chatter=3, and so on.

In the shell, we need to make this script executable with the Unix "chmod" command:

  chmod ugo+x ./nicer2-all.sh

The "ugo+x" makes nicer2-all.sh executable by anyone.

Now we simply run this script:

  ./nicer2-all.sh |& tee nicer2-all.log

The notation at the end of the command saves all logging information to nicer2-all.log, as well as printing it to the screen.

As a variation, you may want to send the nicerl2 outputs to a different directory than the observation data. This script is slightly more complicated because we have to make an output directory where the results will be placed, and copy the filter file there. Here is an example script:

  #!/bin/sh
  output=output                 # destination directory for the processed outputs
  mkdir -p $output
  for obsid in [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]; do
      mkdir -p $output/$obsid/
      if [ ! -f $output/$obsid/ni${obsid}.mkf ]; then
          rm -f $output/$obsid/ni${obsid}.mkf*
          cp -p $obsid/auxil/ni${obsid}.mkf* $output/$obsid/
          gunzip -f -d $output/$obsid/ni${obsid}.mkf.gz
      fi
      nicerl2 indir=$obsid clobber=YES \
          cldir=$output/$obsid \
          mkfile='$CLDIR/ni$OBSID.mkf'
  done

The extra code at the top creates a directory where the output will reside, in this case a subdirectory named "output", which will have subdirectories "output/1234567890", "output/1234567891", and so on.

Dealing With Data Affected by the Optical Light Leak

In May 2023, NICER experienced damage to its thermal/optical blocking films. The impact of this damage is to dramatically increase the chance of optical light entering the NICER detector volume, especially during orbit day. Optical light tends to cause increased noise and gain shifts, and these shifts are strong enough that NICER was forced to adjust its low-energy threshold for observations taken during orbit day.

For scientists, this change in threshold means that NICER's low-energy limit is raised from about 0.25 keV to about 0.38 keV.

Scientists whose targets have emission in the 0.25-0.45 keV range will be impacted by the threshold changes during orbit day, for data taken after the optical light leak of May 2023. They can use the strategy below to recover science from both orbit night and orbit day portions of their data.

As of HEASoft 6.33, some NICER tools are able to accommodate threshold changes. However, this capability is somewhat limited. The response generator, nicerrmf, is able to generate a response for any combination of orbit day and night. However, the SCORPEON background model is only able to generate background estimates for a single threshold setting, either orbit day or orbit night.

The 3C50 and Space Weather background models have no accommodation for threshold changes. For data taken after the optical light leak, these models are only applicable to data taken during orbit night, or to cases where you only care about background subtraction for energies above ~0.5 keV, where the threshold setting is unimportant.

Based on these considerations, if you care about the energy range 0.25-0.5 keV for data taken during orbit day after the optical light leak, it is best to prepare your data for use with the SCORPEON model, and to separate the data into orbit night and orbit day portions.

Starting with HEASoft 6.33 / NICERDAS 12 some shortcuts are provided to help you prepare data in this way.

If you are processing data taken before the optical light leak of May 2023, you do not need to take these extra actions.

We recommend that you run nicerl2 once, with all the desired settings:

  nicerl2 NNNNNNNNNN ... clobber=YES

where "..." are whichever extra parameters you wish to set. By default, nicerl2 uses the "night" threshold setting.

Then, after this is complete, you can re-run processing and select day-only data:

  nicerl2 NNNNNNNNNN ... tasks=SCREEN threshfilter=DAY \
      clfile='$CLDIR/ni$OBSID_0mpu7_cl_day.evt' clobber=YES

The tasks=SCREEN parameter tells nicerl2 to only perform the screening step, and none of the other steps. This means that nicerl2 will not have to re-perform many of the time consuming steps and the results will appear much more quickly than after the first run. See the next section about other ways you can make nicerl2 run more quickly.

The threshfilter=DAY parameter instructs nicerl2 (and nimaketime) to retrieve only data taken during the standard orbit-day time ranges. The results are stored in a separate event file in your cleaned event directory, called niNNNNNNNNNN_0mpu7_cl_day.evt (for the day-only data), where NNNNNNNNNN is the observation ID. Now you have separated the two portions of data and can process them separately with downstream software.
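
Putting the two runs together, a minimal sketch of this two-pass approach for several observations might look like the following (the observation IDs are placeholders; add whichever extra nicerl2 parameters you normally use to both runs):

  #!/bin/sh
  # Two-pass processing for data taken after the May 2023 optical light leak
  for obsid in 1234567890 1234567891; do
      # Pass 1: full processing with the default "night" threshold setting
      nicerl2 indir=$obsid clobber=YES
      # Pass 2: re-screen only, writing orbit-day data to a separate cleaned event file
      nicerl2 indir=$obsid tasks=SCREEN threshfilter=DAY \
          clfile='$CLDIR/ni$OBSID_0mpu7_cl_day.evt' clobber=YES
  done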

When processing with downstream software such as nicerl3-spect or nicerl3-lc, you can specify explicitly the name of your cleaned event file.

The first run looks like the normal invocation:

  nicerl3-spect NNNNNNNNNN ... suffix=_night

and the second run picks up the day-only data:

  nicerl3-spect NNNNNNNNNN ... clfile='$CLDIR/ni$OBSID_0mpu7_cl_day.evt' suffix=_day

Here we have run nicerl3-spect and used the clfile parameter to specify the cleaned event file explicitly, picking up the day-only cleaned event file we made above. We have also used the suffix parameter so that all product filenames have the "_night" or "_day" suffix attached to them. Note that a similar process can be used for nicerl3-lc.

Making nicerl2 Run Faster

By default, nicerl2 runs all calibration and screening steps each time it processes data, in order to apply the most up-to-date and recommended processing strategies.

When starting a project, it is recommended to fully run nicerl2 to completion. While this may take a significant amount of processing time to complete, it does mean that the user can be confident that all products are up to date before proceeding with science analysis.

However, there are some times when the user may want to "reprocess" their data. Usually they want to make small variations on data selections, and don't need to re-run the entire pipeline from scratch. In that case, re-running "nicerl2" may be overkill and incur significant (and unneeded) processing overhead. The NICER software has several ways to speed up execution in this case.

The 'incremental=YES' parameter. As of HEASoft 6.31, nicerl2 has a parameter 'incremental' which causes nicerl2 to perform some processing steps only if necessary.

Specifically, before calibrating the data, nicerl2 will check any existing output files and if the calibration metadata indicates that the files have current calibration, they will be used as-is without reprocessing.

Similarly, the existing filter file is checked before processing. If it has been processed with an up-to-date version of nicerl2 (niprefilter2) with the same requested columns, regeneration of the filter file is not necessary.

Some steps, such as extracting a merged "ufa" file and screening of data, are always performed regardless of the incremental=YES setting.

If you want to force full processing, set incremental=NO, which is the default. The NICER team recommends that you do this when upgrading software or calibration, or when starting a new project.
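
For example, a minimal illustration for the 1234567890 observation used earlier:

  # Reprocess, reusing existing calibrated outputs and filter file when they are current
  nicerl2 indir=1234567890 incremental=YES clobber=YES

  # Force full reprocessing (the default), e.g. after a software or calibration upgrade
  nicerl2 indir=1234567890 incremental=NO clobber=YES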

The 'tasks' parameter. As of HEASoft 6.29, nicerl2 has a parameter called "tasks." The tasks parameter allows the user to specify which tasks to do explicitly. Essentially, while incremental=YES is automatic, the tasks parameter is a more manual way to specify what nicerl2 does. The user will need to know what steps they want to perform rather than letting nicerl2 decide.

The tasks parameter is a comma-separated list of processing items to complete. Please see the nicerl2 help file for more information but here are two common usage cases:

  • tasks=ALL This setting is the default, and will perform all processing steps.
  • tasks=SCREEN This setting will only perform the data screening (nimaketime and nicermergeclean) steps. You would do this if you know that you have up-to-date calibration, merged event files, and filter file, and simply want to generate a cleaned event file based on new screening criteria.

You can use both the 'incremental' and the 'tasks' parameters simultaneously, although it is not recommended. When you do this, nicerl2 will perform the more restrictive set of operations. For example, if you set "tasks=MKF incremental=YES", nicerl2 will skip the filter file (MKF file) generation step if the MKF file is already up to date.
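
A sketch of that combination for the 1234567890 observation:

  # Regenerate the filter file only, and only if it is out of date
  nicerl2 indir=1234567890 tasks=MKF incremental=YES clobber=YES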

The tasks parameter can be used to test different screening strategies. In a typical workflow, you would run nicerl2 once with tasks=ALL, and then run nicerl2 as many times as necessary afterward with tasks=SCREEN while experimenting with different screening techniques. Every time you run with tasks=SCREEN, nicerl2 uses the pre-existing calibrated event files and MKF files, so the screening step runs very quickly.

For example, you could first run nicerl2 once as desired:

  nicerl2 NNNNNNNNNN ... clobber=YES

and this would do everything, including calibration, MKF generation and screening. Now you can begin experimenting:

  nicerl2 NNNNNNNNNN tasks=SCREEN cor_range=1.5-20 ... clobber=YES
  nicerl2 NNNNNNNNNN tasks=SCREEN cor_range=2.0-20 ... clobber=YES
  nicerl2 NNNNNNNNNN tasks=SCREEN cor_range=2.5-20 ... clobber=YES

and so on, experimenting with whatever screening criteria you want. With tasks=SCREEN you only incur the processing time of screening, not the previous steps. If you want to preserve the original "cl" file, your subsequent runs can set clfile='$CLDIR/test.evt' to direct output to a scratch event file instead of the "official" name.
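
Wrapped in a loop, a sketch of such a screening experiment could look like this (the cutoff-rigidity ranges and scratch output names are purely illustrative):

  #!/bin/sh
  # One-time full processing (calibration, filter file, screening)
  nicerl2 indir=1234567890 clobber=YES
  # Try several cutoff rigidity (COR) screening ranges; only the screening step re-runs each time
  for cor in 1.5-20 2.0-20 2.5-20; do
      nicerl2 indir=1234567890 tasks=SCREEN cor_range=$cor \
          clfile='$CLDIR/test_cor'$cor'.evt' clobber=YES
  done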

How to Prevent Writing to Observation Directory / Dealing With Read-Only Observation Data

Note that this task in its default operation will attempt to write to the observational directory. Some "old" files may be altered or removed.

Any pre-existing files in these areas will not be destructively overwritten unless clobber=YES (the default is clobber=NO). This is a reminder to the user that destructive removal of data is a possibility and that the user must explicitly allow it by setting clobber=YES.

For most typical applications, where the user is working in an existing NICER observation directory with write access, clobber=YES should be used on the command line to allow nicerl2 to overwrite existing data, with the understanding that pre-existing Level 2 derived files are likely to be destroyed.

In some cases users may be working with a read-only archive copy of NICER observation data, with no possibility of modifying the data in place. In that case, even if clobber=YES is used, nicerl2 will be unable to run using the defaults.

A similar use case is when the user is trying different (or new) calibrations and wishes to preserve the pre-existing calibrated outputs.

To accommodate these cases, the user must still have a writable output directory, which can be kept separate from the input directory. Let's say that indir=/archive/1234567890 and the output is placed in /writable/1234567890-output.

The steps to achieve this are:

  • Make writable output directory /writable/1234567890-output
  • Make working copy of .mkf file in output directory
  • Run nicerl2 with mkfile and cldir options

Here are the commands to be used in this case:

  mkdir /writable/1234567890-output
  cp -p /archive/1234567890/auxil/ni*.mkf* /writable/1234567890-output/
  nicerl2 indir=/archive/1234567890 \
      cldir=/writable/1234567890-output \
      mkfile='$CLDIR/ni$OBSID.mkf'

When complete, the new output files will be placed in the /writable area, and the /archive area will be left untouched.

Variations: Selecting Columns for Background Models

As of NICERDAS 10, there are three available NICER background models to choose from. These are the SCORPEON, 3C50 and Space Weather models. More information on these tools can be found on the NICER Background Estimator Tools page.

In previous versions of NICERDAS, users were required to manually add columns to the filter file in order to make background modeling work properly. For example, the Space Weather model requires the "KP" column, whereas the 3C50 model requires other specific housekeeping count rate columns.

Starting with NICERDAS 10, all of the needed columns are added by default, as long as the default filtcolumns=NICERV4 (or higher version) is used. For that reason, no special command line options are required any more to support background modeling. This is done automatically.

Since some of these columns deal with geomagnetic quantities, you will need to download and update geomagnetic data when processing NICER data. More information is in the Geomagnetic Thread. As long as you set your GEOMAG_PATH environment variable to point to the directory where geomagnetic data is stored, nicerl2 will automatically use it.
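
For example (a sketch; the path shown is a placeholder for wherever you keep your geomagnetic data):

  # Point the NICER tools at your local geomagnetic data directory
  export GEOMAG_PATH=/path/to/geomag_data
  nicerl2 indir=1234567890 clobber=YES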

Variations: Dealing with Data Processed with Older Versions of nicerl2

Some NICER analysts have built up a large database of NICER observational datasets. They may wish to merge these datasets together to gain the power of large exposures. Over time, additional columns have been added to the filter files by the standard processing. Users who are merging may not want this, because "ftmerge" cannot merge filter files with different sets of columns.

One solution to this is to simply reprocess all datasets with modern calibration and modern version of nicerl2. Please see above for this method.

Another solution is to process the "new" data with "old" columns. As of HEASoft 6.29, there is a way to do this. That version of the NICER tools added new columns to the standard filter file. To select which set of columns is written, you can use the filtcolumns parameter of nicerl2.

  • HEASoft 6.26 and earlier - filtcolumns=NICERV1
  • HEASoft 6.27 and later - filtcolumns=NICERV2
  • HEASoft 6.29 and later - filtcolumns=NICERV3
  • HEASoft 6.30 and later - filtcolumns=NICERV4
  • HEASoft 6.32 and later - filtcolumns=NICERV5
Please see the Filter Files Thread and the help file of niprefilter for more description of these fields.

Thus, a user who has processed most of their data with HEASoft 6.26 or earlier can run nicerl2 as follows:

  nicerl2 indir=1234567890 clobber=YES filtcolumns=NICERV1

in order to obtain the old filter file columns. However, when starting a project, it is recommended to reprocess all data with the newest software and calibration versions, because this allows you to access new features such as response and background generators.

Next Steps

The next steps will be to use the products of nicerl2 for light curve and spectral extraction. Please see the spectral and light curve product threads for more information.

Modifications

  • 2020-04-24 - initial draft
  • 2020-05-27 - minor updates
  • 2020-08-05 - describe more about filter file column changes
  • 2021-04-16 - add navigation bar
  • 2022-10-20 - update for HEASoft 6.31
  • 2023-08-01 - additional discussion and examples for tasks=SCREEN
  • 2024-02-13 - section on how to deal with threshold changes after optical light leak