Options: <F>orm,<S>lope,<N>umber of lines,<L>ine energies,<D>ata bins,<B>reak energy,<I>terations,<A>utonorm,<E>norm,<H>elp,<Q>uit options.

You can now alter the form of the model that was established in the default file.

The form of the spectrum is a power-law, either straight,
\[
F(E) = I \left( \frac{E}{E_{\rm norm}} \right)^{-\gamma},
\]
or broken,
\[
F(E) = I \left( \frac{E}{E_b} \right)^{-\gamma_1} \quad (E < E_b), \qquad
F(E) = I \left( \frac{E}{E_b} \right)^{-\gamma_2} \quad (E \ge E_b),
\]
where $E_b$ is the break energy. In addition, you can have from 0 to 3 delta-function lines:
\[
F_{\rm lines}(E) = \sum_i I_i \, \delta(E - E_i).
\]

The delta-function assumption means that the line widths are assumed to be much less than EGRET's energy resolution. You have to specify the break and line energies; the program searches for the best values of the other parameters. In all, then, there are from 2 to 6 free parameters in the model.
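To make the model concrete, here is a minimal Python sketch of the binned model described above. This is not the program's actual code; the function names, the trapezoid-style bin integration, and the treatment of each line as an addition to the single bin containing its energy are all assumptions.

```python
import numpy as np

def broken_power_law(E, I0, g1, g2, E_break):
    """Broken power law normalized to intensity I0 at the break energy:
    slope g1 below the break, g2 above it."""
    E = np.asarray(E, dtype=float)
    return np.where(E < E_break,
                    I0 * (E / E_break) ** (-g1),
                    I0 * (E / E_break) ** (-g2))

def model_with_lines(E_edges, I0, g1, g2, E_break, lines):
    """Predicted model per energy bin: the continuum evaluated at the bin
    midpoint times the bin width, plus each delta-function line's strength
    added to the bin containing its energy."""
    E_edges = np.asarray(E_edges, dtype=float)
    lo, hi = E_edges[:-1], E_edges[1:]
    mids = 0.5 * (lo + hi)
    bins = broken_power_law(mids, I0, g1, g2, E_break) * (hi - lo)
    for E_line, strength in lines:   # lines: [(energy, intensity), ...]
        i = np.searchsorted(E_edges, E_line) - 1
        if 0 <= i < len(bins):
            # a line width much less than the bin width justifies
            # dumping the whole line into one bin
            bins[i] += strength
    return bins
```

Counting parameters in this sketch recovers the "2 to 6" quoted above: slope and intensity for a straight power law, a second slope for a broken one, and one intensity per line (the break and line energies are specified, not fitted).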

There are a few other options available here. The program asks for guesses on the power-law slopes; these are used as starting points for the fit optimization process. One can choose the data bins that will be used in the analysis. This might be useful if the background is swamping the source in a given bin, for example, or if the extrapolations of the instrument functions to the low and high energy ends are suspect. You can control the ``normalization energy'', the energy at which the power-law intensity is specified. When the ``autonorm'' is on, and a straight power-law is used, the program adjusts the normalization energy in an attempt to minimize the correlation between the slope and intensity of the power-law (although this doesn't work right now if lines are used). If the autonorm is on and the model is a broken power-law, the intensity is specified at the break energy. If the autonorm is off, the normalization energy is fixed at whatever energy you want.
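The role of the normalization energy can be seen in a short numerical check: moving it from one energy to another only rescales the intensity, leaving the spectrum itself unchanged. The values below are made up for illustration.

```python
# A straight power law F(E) = I * (E / E_norm)**(-gamma) is unchanged
# when the normalization energy moves from E1 to E2, provided the
# intensity is rescaled as I2 = I1 * (E2 / E1)**(-gamma).
E1, E2, gamma, I1 = 100.0, 500.0, 2.1, 3.0e-7   # made-up values
I2 = I1 * (E2 / E1) ** (-gamma)

def F(E, I, E_norm):
    return I * (E / E_norm) ** (-gamma)

for E in (50.0, 150.0, 1000.0):
    a, b = F(E, I1, E1), F(E, I2, E2)
    assert abs(a - b) < 1e-12 * a    # same spectrum, different bookkeeping
```

What autonorm changes is only which $E_{\rm norm}$ the fitted intensity refers to; a well-chosen value reduces the correlation between the fitted slope and intensity without altering the model itself.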

The final option is the number of iterations. The fit is done with
an iterative weighted least-squares method; the weights are the
expected variances of the bin counts. For a Poisson
process, the variance is simply equal to the mean number
of counts. Unfortunately, we don't know the mean number of
counts. One standard approach is to use the *measured* number
of counts in a bin. However, this can produce biases in the
power-law slopes; hence the iteration. On each iteration beyond
the initial (zeroth) one, the previous best fit is used as the
weight for the new fit. If you are just trying out different
models, you don't need to use multiple iterations, but if you are
going for accurate results the number of iterations should be high
enough that the fit values settle down to constant values. (In
our experience they always do.) One important caveat: the only
reliable value of the reduced $\chi^2$ is that resulting from the zeroth
iteration, in the sense that this is the most useful value for
discriminating between good and bad fits. If the fit of the model to
the data is poor, the iterated weights might be far from the true mean
bin counts, and hence the $\chi^2$ value obtained might be obligingly low or
unrealistically high. Even the value from the zeroth iteration cannot
be assumed to follow a standard $\chi^2$
distribution, since the bin
counts follow a Poisson and not a Gaussian distribution. However, it
is pretty close; in particular, the mean is usually around 1, and
values much higher than 1 indicate a bad fit.
Multiple iterations are useful only for pulsar analysis; for a DC source
all iterations will give exactly the same result.
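The iteration scheme described above can be sketched as follows. This is a minimal Python illustration, not the program's code: the straight power law, the fit in log-log space via `numpy.polyfit`, and the function name are all assumptions.

```python
import numpy as np

def fit_power_law_iterated(E, counts, n_iter=5):
    """Iterated weighted least-squares fit of a straight power law.

    Iteration 0 weights each bin by its measured counts (the usual
    stand-in for the unknown Poisson mean); each later iteration
    weights by the previous best-fit model counts instead.
    Returns the photon index gamma and the normalization at E = 1.
    """
    logE, logC = np.log(E), np.log(counts)
    mu = np.asarray(counts, dtype=float)    # zeroth pass: measured counts
    for _ in range(n_iter + 1):
        # Var(log C) ~ 1/mu for Poisson counts with mean mu, so the
        # least-squares weight (1/sigma) on log-counts is sqrt(mu)
        slope, intercept = np.polyfit(logE, logC, 1, w=np.sqrt(mu))
        mu = np.exp(intercept + slope * logE)   # model counts for next pass
    return -slope, np.exp(intercept)
```

On data lying exactly on a power law every iteration returns the same answer, which is also why a DC source is insensitive to the iteration count; with real Poisson-scattered counts, the early iterations shift the slope slightly until it settles.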