
Subsections

Practice

No background

XSPEC uses a variant of Marquardt's algorithm described in §11.5 of ``Data Reduction and Error Analysis for the Physical Sciences'' by Bevington. (The reader is advised that this description is designed to be read in conjunction with Bevington.) The algorithm turns on finding a matrix $\alpha_{jk}$ and a vector $\beta_k$ such that the equation:


\begin{displaymath}\beta_k = \sum_j \delta a_j \alpha_{jk}\end{displaymath}

gives sensible values of the change in parameters, $\delta a_j$, for the fitting function. Bevington §11.4 gives the derivation of $\alpha$ and $\beta$ and shows that $\beta$ is parallel to the gradient of $\chi^2$.

Now the C statistic has a gradient with respect to the parameters of the fitting function of:


\begin{displaymath}-{1\over 2}{\partial C\over\partial a_k} = \sum_i ({y_i\over y} - 1)
{\partial y \over\partial a_k} = \beta_k\end{displaymath}
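
As a quick numerical check of this gradient, here is a minimal Python sketch with a hypothetical one-parameter model $y = a x$ and made-up counts, taking C as $2\sum_i (y - y_i \ln y)$ up to a model-independent constant:

```python
import numpy as np

# Hypothetical one-parameter model y(x; a) = a*x on a few bins,
# with made-up "observed counts" y_i.
x = np.array([1.0, 2.0, 3.0, 4.0])
y_i = np.array([2.0, 5.0, 7.0, 8.0])

def cstat(a):
    y = a * x
    # C statistic up to a model-independent constant
    return 2.0 * np.sum(y - y_i * np.log(y))

a0 = 2.0
y0 = a0 * x
dy_da = x                      # dy/da for this model

# beta = sum_i (y_i/y0 - 1) dy/da  =  -(1/2) dC/da
beta = np.sum((y_i / y0 - 1.0) * dy_da)

# central finite difference of -(1/2) dC/da for comparison
eps = 1e-6
fd = -(cstat(a0 + eps) - cstat(a0 - eps)) / (4.0 * eps)
print(beta, fd)                # the two agree closely
```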

So, following Bevington, expand $y(x_i)$ about $y_0(x_i)$:


\begin{displaymath}y(x_i) = y_0(x_i) + \sum_j {\partial y_0(x_i)\over\partial a_j} \delta a_j\end{displaymath}

substitute into C and minimize with respect to the changes in the parameters:


\begin{displaymath}{\partial C\over{\partial\delta a_k}} = 2 \sum_i \left( {\partial y_0(x_i)\over\partial a_k} - y_i \Bigl( y_0(x_i) + \sum_j {\partial y_0(x_i)\over\partial a_j}\delta a_j \Bigr)^{-1}{\partial y_0(x_i)\over\partial a_k} \right) = 0\end{displaymath}

so to first order in the parameter changes:


\begin{displaymath}\sum_i \Bigl({y_i\over{y_0(x_i)}}-1\Bigr) {\partial y_0(x_i)\over\partial a_k} = \sum_i {y_i\over{y_0^2(x_i)}} \sum_j {\partial y_0(x_i)\over\partial a_j}{\partial y_0(x_i)\over\partial a_k}\delta a_j\end{displaymath}

or:


\begin{displaymath}\beta_k = \sum_j \delta a_j \alpha_{jk}\end{displaymath}

where:


\begin{displaymath}\alpha_{jk} = \sum_i {y_i\over{y_0^2(x_i)}}{\partial y_0(x_i)\over\partial a_j}{\partial y_0(x_i)\over\partial a_k}\end{displaymath}

These $\alpha$ and $\beta$ are then substituted for those used in the $\chi^2$ case and the algorithm works as required. Note that $\alpha_{jk}$ is ${1\over 2}{\partial^2 C\over{\partial a_k\partial a_j}}$ to first order (i.e. dropping second derivatives of $y$ with respect to the parameters), evaluated at $y_0$.
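
The resulting update $\delta a = \alpha^{-1}\beta$ can be sketched as follows. This is a minimal Python illustration with a hypothetical two-parameter model $y = a_1 + a_2 x$ and made-up data, not XSPEC's actual implementation (which also applies Marquardt's damping of the diagonal of $\alpha$):

```python
import numpy as np

# Made-up data and a hypothetical two-parameter model y = a1 + a2*x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_i = np.array([3.0, 4.0, 6.0, 9.0, 11.0])
grads = np.vstack([np.ones_like(x), x])      # dy/da_j, one row per parameter

a = np.array([1.0, 1.0])                     # starting guess
for _ in range(50):
    y0 = a[0] + a[1] * x
    # beta_k = sum_i (y_i/y0 - 1) dy/da_k
    beta = np.sum((y_i / y0 - 1.0) * grads, axis=1)
    # alpha_jk = sum_i (y_i/y0^2) dy/da_j dy/da_k
    alpha = (y_i / y0**2 * grads) @ grads.T
    a = a + np.linalg.solve(alpha, beta)     # delta_a = alpha^{-1} beta

# at the best fit the gradient of C vanishes, so beta should be ~ 0
y0 = a[0] + a[1] * x
beta = np.sum((y_i / y0 - 1.0) * grads, axis=1)
print(a, beta)
```

Note that $\alpha$ here is positive definite (the weights $y_i/y_0^2$ are positive), so solving the linear system is always well posed for this toy problem.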

There is one further difference in XSPEC between the $\chi^2$ and likelihood methods, which is caused by the fact that XSPEC uses an analytic formula for setting the model normalisation. In the $\chi^2$ case, this means multiplying the current model by:


\begin{displaymath}(\sum_i {{y_i y(x_i)}\over{\sigma_i^2}}) / (\sum_i {{y(x_i)^2}\over{\sigma_i^2}})\end{displaymath}

where $\sigma_i$ is the error on $y_i$. In the likelihood case the corresponding factor is:


\begin{displaymath}\sum_i y_i / \sum_i y(x_i)\end{displaymath}

With background

An analogous argument to the above can be followed through for the W statistic. We need the partial derivatives of W, which are evaluated as follows.


\begin{displaymath}{\partial W\over\partial a_j} =
2 \sum_i \left\{ t_s + g_i(t_s+t_b) - \frac{N_i (1+g_i)}{y_i+f_i} - \frac{B_i g_i}{f_i} \right\} \frac{\partial y_i}{\partial a_j}
\end{displaymath}


\begin{displaymath}{\partial^2 W\over{\partial a_j\partial a_k}} =
2 \sum_i \left\{ h_i(t_s+t_b) - N_i\left(\frac{h_i}{y_i+f_i} - \frac{(1+g_i)^2}{(y_i+f_i)^2}\right) - B_i\left(\frac{h_i}{f_i} - \frac{g_i^2}{f_i^2}\right) \right\}
\frac{\partial y_i}{\partial a_j}
\frac{\partial y_i}{\partial a_k}
\end{displaymath}

where


\begin{displaymath}g_i \frac{\partial y_i}{\partial a_j} =
\frac{\partial f_i}{\partial a_j} =
\frac{1}{2 d_i}\left( (t_s+t_b)\, y_i - N_i + B_i - d_i \right)
\frac{\partial y_i}{\partial a_j} \end{displaymath}


\begin{displaymath}h_i \frac{\partial y_i}{\partial a_k} =
\frac{\partial g_i}{\partial a_k} =
\frac{2 (t_s+t_b)\, N_i B_i}{d_i^3}
\frac{\partial y_i}{\partial a_k} \end{displaymath}

Note that in this case there is no analytic formula that can be used to set the model normalization.
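
As a consistency check, the expressions for $g_i$ and $h_i$ can be compared against finite differences of $f_i$. This sketch assumes the definitions from the Theory section, namely $d_i = \sqrt{[(t_s+t_b)y_i - N_i - B_i]^2 + 4(t_s+t_b)B_i y_i}$ and $f_i = [N_i + B_i - (t_s+t_b)y_i + d_i]/[2(t_s+t_b)]$, with made-up exposures and counts:

```python
import numpy as np

# Assumed W-statistic auxiliary quantities for a single bin:
#   d = sqrt(((ts+tb)*y - N - B)**2 + 4*(ts+tb)*B*y)
#   f = (N + B - (ts+tb)*y + d) / (2*(ts+tb))
ts, tb = 100.0, 200.0        # hypothetical source/background exposures
N, B = 12.0, 7.0             # hypothetical source/background counts

def d(y):
    return np.sqrt(((ts + tb) * y - N - B) ** 2 + 4 * (ts + tb) * B * y)

def f(y):
    return (N + B - (ts + tb) * y + d(y)) / (2 * (ts + tb))

def g(y):
    # g = df/dy, as in the text
    return ((ts + tb) * y - N + B - d(y)) / (2 * d(y))

def h(y):
    # h = dg/dy, as in the text
    return 2 * (ts + tb) * N * B / d(y) ** 3

y, eps = 0.05, 1e-6
# central finite differences reproduce g and h
print(g(y), (f(y + eps) - f(y - eps)) / (2 * eps))
print(h(y), (g(y + eps) - g(y - eps)) / (2 * eps))
```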


Ben Dorman
2003-11-28