Far be it from us to define users' directory structures for them, but over the years we have found a specific setup to be convenient. First, create a main directory named with the ObsID of the observation to be processed, e.g., /path/0123456789 (where /path is wherever you want to keep the data on your computer). Under the main directory, create two subdirectories: /path/0123456789/ODF for the ODF data files and /path/0123456789/analysis for all of the processing output. Copy the ODF data for the observation into /path/0123456789/ODF and uncompress it, removing the *.SAS file if one exists. SAS should then be set up to run in the /path/0123456789/analysis directory, making sure that the paths are correctly set, e.g.:

setenv SAS_CCF /path/0123456789/analysis/ccf.cif
setenv SAS_ODF /path/0123456789/ODF
setenv SAS_CCFPATH /ccfpath/CCF
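
The setup steps above can be sketched as a short script. The ObsID 0123456789 is the example used in the text; the XMM_BASE variable is a hypothetical stand-in for wherever you keep your data (it defaults to the current directory here so the sketch runs anywhere), and the commented copy commands use a placeholder download path that you would replace with your own.

```shell
#!/bin/sh
# Sketch of the recommended directory setup; OBSID matches the example
# in the text, and XMM_BASE is a hypothetical stand-in for /path.
OBSID=0123456789
BASE="${XMM_BASE:-$PWD}"

# Create the main directory and its two subdirectories.
mkdir -p "$BASE/$OBSID/ODF" "$BASE/$OBSID/analysis"

# Copy the ODF files into place and uncompress them; the source path
# below is a placeholder for wherever you downloaded the observation.
# cp /your/download/area/* "$BASE/$OBSID/ODF/"
# gunzip "$BASE/$OBSID/ODF"/*.gz

# Remove any ODF summary file shipped with the data; odfingest will
# generate a correct replacement later.
rm -f "$BASE/$OBSID/ODF"/*.SAS
```

With the directories in place, the SAS environment variables can be pointed at them as shown above.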

It is necessary to point to an existing CIF file or to explicitly run cifbuild to create one. An ODF summary file (*.SAS file) is also needed; the version that comes with the ODF will not point to the correct files and must be replaced. This file is created by the task odfingest. Thus, after removing the *.SAS file from the ODF directory, use the following two commands:

cifbuild withccfpath=no analysisdate=now \
    category=XMMCCF calindexset=$SAS_CCF fullpath=yes
odfingest odfdir=$SAS_ODF outdir=$SAS_ODF

These commands will produce the necessary ccf.cif file in the analysis directory and the *SUM.SAS file in the ODF directory. Note that while it is often not necessary to create a new ccf.cif file each time ODF data are processed (i.e., it is possible to maintain a single, up-to-date file for general use), running cifbuild takes only a little time and assures you that you are pointing to the most recent versions of the CCF files (provided you keep the CCF directory properly updated). In addition, you will have a local ccf.cif file listing, in an easily accessible form, precisely which versions of the CCF files were used for the specific processing. This is particularly useful if the same analysis is done on the same data at two different times and the results differ!