NuSTAR Analysis

HEASoft, the FTOOLS and XANADU software package maintained at the HEASARC, is recommended for the analysis of NuSTAR data. The current version of HEASoft contains the NuSTAR subpackage of tasks, NuSTARDAS, which, together with the existing FTOOLS and XANADU tasks, enables users to perform a complete analysis of NuSTAR datasets, with perhaps a few exceptions. Please refer to the HEASoft release to ensure that you are running the latest version of HEASoft, with all feature updates and bug fixes to NuSTARDAS. The release history of NuSTARDAS and the associated CALDB is listed here.

For more detailed information on NuSTARDAS, see the NuSTAR data analysis software users guide, v1.9.7 (1.2 MB PDF file) and the NuSTAR data analysis quickstart guide (0.8 MB PDF file).

For information on additional, user-contributed NuSTAR software for performing timing analysis, analysis of stray light sources, and background modeling, see here.

For information about selecting background filtering modes, see the NuSTAR FAQ page and this Caltech NuSTAR page.

Notes for Specific NuSTAR Datasets

As of the 3rd NuSTAR data release on February 5th, 2014, the notes for specific NuSTAR ObsIDs that used to be found on this page are captured in several parameters for those ObsIDs in the NUMASTER table. Potentially problematic issues are indicated by the issue_flag parameter in the NUMASTER table being set to 1. The specific nature of the issue affecting the observation is indicated by one or more non-blank or non-zero values of the following NUMASTER parameters: data_gap, nupsdout, or solar_activity. In addition, the coordinated parameter in NUMASTER lists observatories with which coordinated observations were made for the specified NuSTAR observation. The comments parameter in NUMASTER contains a brief text synopsis of the major known issues and unusual features.
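As a rough illustration of how the NUMASTER parameters described above fit together, the sketch below filters a few hypothetical observation records by issue_flag and reports which companion parameters explain each flag. The ObsIDs and values are invented for illustration; they are not real NUMASTER rows.

```python
# Illustrative sketch only: hypothetical records mimicking a few NUMASTER
# columns. Real NUMASTER queries are done through the HEASARC interfaces.
rows = [
    {"obsid": "00000000001", "issue_flag": 0, "data_gap": 0, "solar_activity": "", "comments": ""},
    {"obsid": "00000000002", "issue_flag": 1, "data_gap": 1, "solar_activity": "", "comments": "gap in data"},
    {"obsid": "00000000003", "issue_flag": 1, "data_gap": 0, "solar_activity": "high", "comments": "solar flaring"},
]

# Potentially problematic observations have issue_flag == 1; the nature of
# the issue is then indicated by non-zero/non-blank companion parameters.
flagged = [r for r in rows if r["issue_flag"] == 1]
for r in flagged:
    reasons = [k for k in ("data_gap", "nupsdout", "solar_activity") if r.get(k)]
    print(r["obsid"], reasons, r["comments"])
```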

NuSTAR Calibration, Background-subtraction, and Clock Timing Information

Documentation with information about the calibration of the NuSTAR observatory is available at Madsen et al. (2015, ApJS, 220, 8). A detailed examination of the components of the NuSTAR background can be found in the appendix of Wik et al. (2014, ApJ, 792, 48), and the software used in the latter reference is available as user-contributed software at the NuSTAR GitHub site.

The event timing accuracy of the NuSTAR mission has been improved from +/- 2 ms to better than 100 μs. This was achieved by modeling the drifts in the clock timing caused by temperature variations onboard the NuSTAR spacecraft, and the correction is available for the entire mission. An updated clock correction file is made available through bi-weekly CALDB updates. Note that NuSTARDAS v1.9.2 (or later) is required to use the more accurate clock correction files (from CALDB v20200429 onwards). Information on the timing improvement, as well as how to use the clock correction file, can be found on the NuSTAR SOC website. A high-frequency pulsation search in NuSTAR event data can often find a strong oscillation close to 1/1.123 ms ≈ 890.5 Hz and/or possible harmonics and aliases of this frequency. These are an instrumental signature of the "CPMODE reset" in the NuSTAR detectors. For more details see the NuSTAR Analysis Caveats below.

Known NuSTAR Analysis Issues

These will be addressed in a future release of NuSTARDAS and the NuSTAR CALDB. The NuSTARDAS module to be updated is indicated in parentheses:

  • (nucalcsaa) There is a known issue when processing observations using flags that affect the performance of the nucalcsaa FTOOL. When using saacalc=2 there was occasionally a memory issue on some machines (mainly Linux machines). This problem has been addressed in a patch to NuSTARDAS that will be delivered with the next HEASoft release. A patch addressing this issue can be found at

    However, during our investigation we also found that in rare cases some of the instrument housekeeping files were not being properly populated. This resulted in a crash of nucalcsaa with an error of the form:

    "nucalcsaa_0.1.4: Error: Something is wrong with the times as I cannot find housekeeping data..."

    and a crash during the nuscreen FTOOL call. In this case, Level 2 cleaned event files are not produced by the pipeline and a crash is reported in the summary at the end of the nupipeline run. The Science Operations Center and the Mission Operations Center have worked to identify these rare cases and have regenerated the housekeeping files.

    If you encounter this problem, please check whether updated data files exist in the HEASARC archive and/or contact the NuSTAR SOC at

NuSTAR Analysis Caveats

  • We do not recommend using XSELECT to manually generate high-level products (images/spectra/lightcurves) from the event files, as there are NuSTAR-specific FTOOLS that are required to accurately reproduce the livetime of the instrument. If XSELECT is used to manually filter event files, the 'nulivetime' FTOOL *must* be run in order to correct the LIVETIME keyword (XSELECT overwrites this keyword during the filtering step). However, we reiterate that we do not recommend manually filtering data and producing high-level products in this manner, as there is a significant risk of introducing errors. Instead, if GTI filtering is required, we recommend that users use XSELECT to generate the GTIs and then run 'nupipeline' and 'nuproducts' with the 'usrgti' keyword to ensure that the high-level products are correct.
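To make the GTI step concrete, here is a minimal conceptual sketch of what Good Time Interval filtering does: it keeps only the events whose times fall inside a GTI. The event times and GTIs below are hypothetical; this is an illustration of the concept only, not a substitute for the NuSTAR-specific FTOOLS, which also handle the livetime bookkeeping.

```python
# Conceptual sketch of GTI filtering (hypothetical numbers, seconds).
def filter_by_gti(event_times, gtis):
    """Return event times falling within any (start, stop) GTI."""
    return [t for t in event_times
            if any(start <= t < stop for start, stop in gtis)]

events = [10.0, 55.0, 120.0, 310.0, 450.0]   # hypothetical event times
gtis = [(0.0, 100.0), (300.0, 400.0)]        # hypothetical GTIs
print(filter_by_gti(events, gtis))           # -> [10.0, 55.0, 310.0]
```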

  • We do not, in general, recommend co-adding data from the two NuSTAR telescopes for spectroscopic analyses. Simultaneously fitting the data from the two telescopes with a floating cross-normalization parameter (e.g. by including a leading "const" model term in XSPEC) will provide more accurate results. If the "const" for FPMA is frozen at unity, then the "const" value for FPMB is typically between 0.95 and 1.05, depending on how far off-axis the source is in each telescope.

    Co-adding the effective areas of the two NuSTAR instruments without first accounting for the cross-normalization may introduce systematic errors in the model parameters. However, this should only be significant for analysis of high signal-to-noise observations when the systematic errors will be large compared to the statistical errors.

    For analysis of low signal-to-noise observations (e.g. to test for detectability), when the statistical errors will be large compared to systematic errors, or when the user wishes to combine two observations from different epochs for the same telescope, we recommend the following two methods:

    (1) For HEASoft 6.16 and earlier versions without the patch to cmprmf released on 2015/01/23, we recommend using addascaspec to combine the PHA files, background PHA files, and ARFs, and addrmf to combine the RMFs.

    (2) For HEASoft 6.16 with the patch to cmprmf applied and for HEASoft 6.17, we recommend using addspec. This will combine the source and background PHA files as well as the RMFs and ARFs.
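The scale of the bias from ignoring the cross-normalization can be estimated with simple arithmetic. The sketch below assumes a hypothetical FPMB constant of 0.96 (within the typical 0.95-1.05 range quoted above) and equal exposures in both modules, and compares the resulting ~2% systematic against the statistical error for two hypothetical count levels.

```python
import math

# Back-of-the-envelope sketch of the co-adding bias (hypothetical numbers).
const_fpma = 1.0         # FPMA cross-normalization frozen at unity
const_fpmb = 0.96        # hypothetical FPMB cross-normalization

# Assuming equal exposures, a single normalization fit to the co-added data
# is pulled toward the mean of the two constants:
coadded_bias = (const_fpma + const_fpmb) / 2.0
sys_err = 1.0 - coadded_bias          # ~2% systematic on the flux

# Whether this matters depends on the statistics: the fractional statistical
# error on the flux scales roughly as 1/sqrt(N) for N counts.
for n_counts in (1_000, 100_000):
    stat_err = 1.0 / math.sqrt(n_counts)
    regime = "systematic dominates" if sys_err > stat_err else "statistics dominate"
    print(n_counts, f"stat ~ {stat_err:.1%}", regime)
```

This matches the guidance above: for low signal-to-noise data the statistical errors swamp the ~2% systematic, while for high signal-to-noise data the cross-normalization must be accounted for.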

  • NuSTARDAS v1.6.0 users should either make sure they have version 20160502 (or later) of the NuSTAR CALDB installed locally or use remote access to the latest NuSTAR CALDB from the HEASARC.

  • Analysis for bright sources: There are some post-processing filters that remove spurious triggers (a.k.a. "noise events") from the NuSTAR cleaned event files. The rate of these noise events is ~2e-3 counts per second in a typical source extraction region with a diameter of 1 arcminute, though this rate varies with the focal plane module used and with the location on each focal plane. For sources that produce moderate rates (when the incident rate on the focal plane exceeds 100 counts per second), "good" source counts can be vetoed by the noise filter. This can result in a mismatch between the fluxes measured by FPMA and FPMB (since they have subtly different filters) and an underestimate of the source flux.
    We recommend that for bright sources users produce a lightcurve with 1-second bins using nuproducts with the default energy range (3-79 keV). The resulting lightcurve should be a good estimate of the incident source rate. If any of the 1-second bins exceed 100 counts per second, we recommend re-processing the data using a modified value for the "statusexpr" keyword in nupipeline:
    in addition to any other keywords. This status expression changes the behavior of the "nufilter" FTOOL that produces the cleaned output event files (e.g. files of the form "nu10311002008A01_cl.evt"). nupipeline must be re-run from scratch for the cleaned event files to be updated. To repeat: using this modified statusexpr keyword, or any variations of other status-related keywords, will not affect lightcurves produced by nuproducts if the cleaned event files have not been regenerated using nupipeline.
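The rate check recommended above can be sketched in a few lines: bin event times into 1-second bins and flag any bin above 100 counts/s. The event list here is hypothetical; with real data the 1-s lightcurve would come from nuproducts over the default 3-79 keV band.

```python
from collections import Counter

# Sketch of the bright-source check with a hypothetical event list.
def bins_over_threshold(event_times, threshold=100):
    """Count events per 1-s bin; return bins whose count exceeds threshold."""
    counts = Counter(int(t) for t in event_times)
    return {sec: n for sec, n in counts.items() if n > threshold}

# Hypothetical events: 150 events during second 5, a few elsewhere.
events = [5 + i / 150.0 for i in range(150)] + [0.1, 1.2, 2.3]
hot = bins_over_threshold(events)
print(hot)  # -> {5: 150}: would warrant re-running nupipeline with a modified statusexpr
```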

  • Bright sources and pile-up: The NuSTAR pile-up window is so short that evidence for pile-up has only been found in sources that produce incident count rates in excess of 10,000 counts per second (and then only at the few percent level). If you are working on such a bright source (e.g. the Sun or the flaring state of Sco X-1), please contact the NuSTAR SOC for details.

  • Analysis of observations taken after March 16th, 2020: An adjustment of the onboard laser metrology system was performed on March 17th, 2020 and analysis of observations taken on or after that date will require NuSTARDAS version 1.9.2 or later (released in HEASoft v6.27.1).

  • Use of MLI correction mtable: Calibration corrections have been released to account for a difference between the FPMA and FPMB effective areas seen in spectra below 8 keV in observations performed after 2015. Details about the cause of the differences between FPMA and FPMB and about the MLI corrections are available in Madsen et al. (2020), arXiv:2005.00569. NuSTAR CALDB v20200429 (and later) and NuSTARDAS v1.9.2 (and later) are required to include these corrections. In addition, an FTOOLS mtable is available for pathological cases in which there is a substantial residual difference between FPMA and FPMB seen in spectra below 5 keV. Note that in some instances the MLI correction may be too aggressive; in these cases the old FPMA ARF should be used. See the MLI correction page on the SOC website for more details. A list of observations for which use of the mtable is recommended is available on the NuSTAR SOC website.

  • Clock correction file update - October 2020: Gaps identified in the spacecraft engineering housekeeping used in the generation of clock correction files have been recovered, and observations affected by these gaps have been reprocessed and redelivered to the NuSTAR archive. An improved clock correction file, included with the NuSTAR CALDB 20201101 and later, is expected to provide marginally better performance than previous versions of the clock correction files. More information and a list of reprocessed observations can be found on the NuSTAR SOC website.

  • Instrumental timing signatures: All NuSTAR observations are executed in charge pump mode (CPMODE; Miyasaka et al. 2009 SPIE vol 7435). NuSTAR's detectors accumulate charges continuously due to internal (leakage) currents and other sources of electronic noise as well as the charge deposited by incident X-rays. To prevent the saturation of the readout electronics, a clock-synchronized feedback circuit removes this additional charge every 1.123 ms (spacecraft time) in a "CPMODE reset". The instrument experiences 30-40 microsec of deadtime every time this happens.

    A high-frequency pulsation search can often "detect" these resets as a strong oscillation close to 1/1.123 ms ≈ 890.5 Hz and/or possible harmonics and aliases of this frequency. The actual frequency can change slightly with temperature; also, being in spacecraft time, it sometimes goes undetected after barycentering. It is easy to single out the harmonics, as they are exact multiples of the fundamental. The aliases are slightly trickier, as they can be non-obvious reflections of any harmonic about the Nyquist frequency. They can be ruled out by changing the Nyquist frequency itself, using a light curve sampled at a different rate: if the feature does not change frequency, then one can safely rule out that it is an alias of the 890.5 Hz frequency.

    In very early observations (prior to August 2012), there was an additional source of periodic dead time from housekeeping operations being run every 1, 4, and 8 seconds. A PDS of these early observations will typically show strong features at 0.125, 0.25, and all integer frequencies up to 30 Hz.

    Detailed information about NuSTAR observatory timing calibration can be found in Bachetti et al. (2020, ApJ, in press; arXiv:2009.10347).
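The alias test described above can be sketched numerically: fold the reset frequency through a given sampling rate to get the apparent frequency, and check whether it moves when the sampling rate (and hence the Nyquist frequency) changes. The sampling rates below are hypothetical.

```python
# Sketch of the alias check for the ~890.5 Hz CPMODE reset signature.
f_reset = 1.0 / 1.123e-3   # CPMODE reset frequency, ~890.5 Hz

def apparent_frequency(f_true, f_sample):
    """Frequency at which f_true appears after aliasing at sampling rate f_sample."""
    f = f_true % f_sample
    return min(f, f_sample - f)

for f_sample in (500.0, 800.0):   # two hypothetical light-curve sampling rates
    print(f_sample, round(apparent_frequency(f_reset, f_sample), 1))
# The aliased feature moves when the sampling rate changes; a real
# astrophysical signal below the Nyquist frequency would stay put.

# Each reset also costs 30-40 microseconds of dead time, i.e. roughly a
# few percent of the exposure (using 35 us as a mid-range value):
print(round(35e-6 * f_reset * 100, 1), "% reset dead-time fraction")
```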

Previous NuSTAR Analysis Issues Which Have Been Fixed

[1] means the issue was fixed in NuSTARDAS v1.3.0 (released with HEASOFT 6.15).

[2] means that the issue was fixed in CALDB release 20131007.

[3] means that the issue was fixed in CALDB release 20131223.

[4] means the issue was fixed in NuSTARDAS v1.3.1 (released with HEASOFT 6.15.1).

[5] means the issue was fixed in NuSTARDAS v1.5.1 (released with HEASOFT 6.17.0).

[6] means the issue was fixed in NuSTARDAS v1.7.1 (released with HEASOFT 6.21.0).

  • An adjustment of the NuSTAR FPM effective areas above 50 keV was needed to correct for the residuals of the W K-edge (69.5 keV) and the Pt K-edge (78.4 keV) in CALDB version 20131007 (and earlier). Analysis of very bright objects may show smooth residual features of order 5 - 15% at energies above 50 keV. This adjustment was included in the NuSTAR CALDB patch 20131223, which was released on January 17, 2014. [3]

  • As of CALDB version 20131007, the NuSTAR clock correction file for use with the FTOOL barycorr is available within the standard HEASARC CALDB releases/patches. It is also available from the NuSTAR SOC website. [2]

  • Bad/hot pixels were only approximately accounted for in the exposure maps. There is now an improved algorithm for inclusion of bad/hot pixels in exposure maps which requires new CALDB files of instrument cumulative probability maps. (nuexpomap) [1]

  • The energy dependence of the vignetting correction to the PSF was not taken into account. The corrections for the dependence of the PSF on energy are at most 5%. This correction, performed in 6 energy bands (3-4.5, 4.5-6, 6-8, 8-12, 12-20, 20-79 keV), required new CALDB PSF files. (numkarf) [1]

  • There was a processor memory issue handling temporary FITS files in the extended-source case when using a small value of the input parameter 'boxsize'. This was fixed. (numkarf) [1]

  • A correction for energy-dependent PSF losses in lightcurve file generation required using new CALDB PSF files. (nulccorr) [1]

  • Adjustment of the NuSTAR FPM effective areas by +15% placed NuSTAR measured fluxes between Swift and XMM measurements based on simultaneous observations of calibration targets in 2012. [2]

  • The value of the DEADC FITS keyword affected how lightcurves are displayed. The "DEADC" FITS header keyword in the NuSTAR lightcurve files (produced by nuproducts/nupipeline) contains the average livetime fraction over the course of the observation (i.e. the livetime integrated over GTIs divided by the "real" seconds integrated over the GTIs). Some standard tools for plotting lightcurves (e.g. the "lcurve" FTOOL) read the DEADC FITS header keyword value and apply it to the displayed lightcurve. However, the nulccorr FTOOL in NuSTARDAS v1.3.0 (and previous versions) applied a deadtime correction to each individual bin in the lightcurve. Thus using lcurve to display a NuSTAR lightcurve which has been corrected by nulccorr (as is the default for nupipeline/nuproducts) resulted in a "double counted" deadtime correction in the displayed lightcurve and, as a result, an over-estimate of the source flux in each bin. In the meantime users can work around this by either a) setting the DEADC keyword value to 1.0 in the NuSTAR lightcurve file before using lcurve, or b) using the DEADC value to rescale the lightcurve displayed in lcurve. For example:
    % lcurve 1 <lightcurve file> rescale=0.92
    where <lightcurve file> is a lightcurve file generated by nuproducts (with the parameter correctlc="yes") and 0.92 is the value of the DEADC keyword in that file. [4]
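The arithmetic of the work-around can be sketched directly: because lcurve applies the DEADC correction a second time, multiplying the displayed rates back by DEADC undoes the double counting. The rates below are hypothetical.

```python
# Sketch of the DEADC rescaling work-around (hypothetical numbers).
deadc = 0.92                          # average livetime fraction from the header
displayed_rates = [10.0, 12.5, 9.0]   # hypothetical rates as shown by lcurve

# lcurve divides the already-corrected rates by DEADC again, over-estimating
# the flux; multiplying by DEADC recovers the nulccorr-corrected rates.
corrected = [r * deadc for r in displayed_rates]
print(corrected)
```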
  • The pilow/pihigh input parameters in nupipeline/nuproducts define the energy band over which the lightcurve is produced, but as of NuSTARDAS v1.3.0 they were not applied to images or spectra. The plan is to update nupipeline and nuproducts to use the pilow/pihigh keywords for image generation as well, so that nuproducts/nupipeline can be used to produce images as well as lightcurves in the specified energy band. For users with NuSTARDAS v1.3.0 installed (distributed as part of the heasoft-6.15 release), we recommend using the "extractor" FTOOL or XSELECT to generate images over a given energy band. [4]

  • The PIXBIN and PERC values were missing from the exposure map FITS header. The PIXBIN and PERC parameters drive the performance of nuexpomap and should have been included in the FITS header for self-documentation purposes. [4]

  • NuSTARDAS v1.4.1 (released as part of HEASOFT 6.16) is not compatible with NuSTAR CALDB version 20131223. In particular, nupipeline will fail when trying to process data with NuSTARDAS v1.4.1 and version 20131223 of the NuSTAR CALDB. NuSTARDAS v1.4.1 users should either make sure they have version 20140414 of the NuSTAR CALDB installed locally or use remote access to the latest NuSTAR CALDB from the HEASARC. [5]

  • There is a known issue when processing observations of the Sun where the SAA filtering FTOOL (nucalcsaa) causes a crash when using the default parameter values for nupipeline.

    This is caused by a redundant fselect call in nucalcsaa that is run even in the case where "saamode=NONE" and "tentacle=no" (i.e., the default nupipeline parameters). For the solar observations this fselect run can cause a crash for data from FPMB because it applies a strict event screening that artificially vetoes all of the counts. This crash can be avoided by using the following option when calling nupipeline:


    to select all events with GRADE <= 26 (i.e. "science" grade events) or, more simply, with:


    to run the pipeline without any filtering on the GRADE of the event. [6]
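The two screening choices described above amount to either keeping only "science" grade events (GRADE <= 26) or skipping grade filtering entirely. A minimal sketch of that selection, on a hypothetical event list (the real screening is of course done by nupipeline):

```python
# Illustration of the GRADE screening choices (hypothetical events).
events = [
    {"time": 1.0, "grade": 0},
    {"time": 2.0, "grade": 13},
    {"time": 3.0, "grade": 27},   # above the science-grade cut
    {"time": 4.0, "grade": 31},
]

science = [e for e in events if e["grade"] <= 26]   # GRADE <= 26 screening
unfiltered = events                                  # no GRADE screening
print(len(science), len(unfiltered))  # -> 2 4
```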