Test
To find out how well the fitter actually works, the following test was
performed.
A Gaussian distribution was generated using "fun1" with
a mean of 10 and a sigma of 1.
Sets of events were then drawn from this using "hrndm1".
1000 distributions were drawn with the log10 of the number of
events varied from 1 to 3.5 in 0.25 steps (10 to 30,000 events).
Bins 0.2 sigma wide were used in each case (i.e. the peak bins of a
1000-event distribution contain about 80 events).
Here is an
example
of what the distributions look like with 4 different numbers of events.
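The macros themselves are not reproduced here, but the sample generation and
binning are easy to sketch. The following Python/NumPy fragment is illustrative
only: it draws directly from the true Gaussian rather than via "fun1"/"hrndm1",
and the helper name draw_histogram, the random seed and all other settings are
made up for the sketch. It just sets up the same 0.2-sigma binning and
log10-spaced sample sizes described above.

    import numpy as np

    rng = np.random.default_rng(42)

    mean, sigma = 10.0, 1.0          # true Gaussian parameters used in the test
    bin_width = 0.2 * sigma          # 0.2-sigma bins
    edges = np.arange(mean - 5 * sigma, mean + 5 * sigma + bin_width, bin_width)

    # Sample sizes spaced uniformly in log10(N), as described above
    sample_sizes = np.round(10 ** np.arange(1.0, 3.5 + 0.125, 0.25)).astype(int)

    def draw_histogram(n_events):
        """Draw n_events from the true Gaussian and bin them."""
        events = rng.normal(mean, sigma, size=n_events)
        counts, _ = np.histogram(events, bins=edges)
        return counts

    print(draw_histogram(1000).max())   # the peak bin holds roughly 80 events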
Each distribution was fit using "h/fit 'id' G" (a chi-squared fit) and
"h/fit 'id' G L" (a log likelihood fit).
Distributions of each of the fit parameters were accumulated and
the mean and rms spread extracted.
For a Gaussian there are of course three parameters, which PAW calls
"Const", "Mean" and "Sigma".
Results
The plot below shows the systematic and random errors in the
Gaussian parameters imposed by the sampling and re-fitting
procedure.
As expected, the Log Likelihood algorithm gives a lower random
error at low event statistics.
What I had not expected was that the Chi Squared fits yield
a systematically biased result for Const and Sigma even at
very high statistics!
The Log Likelihood fitter also produces biased results for samples of fewer than 100 events.
The theoretical random errors on the mean, sigma/sqrt(N), and on sigma,
sigma/sqrt(2N), are also plotted; they follow the LogL fit results
so closely that they can hardly be seen.
This indicates that the LogL fitter does an optimal job.
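These reference curves are just the two standard formulas evaluated at each
sample size; for example, with sigma = 1 (a trivial sketch of the arithmetic,
with the sample-size grid taken from the test description above):

    import numpy as np

    sigma = 1.0
    N = np.round(10 ** np.arange(1.0, 3.625, 0.25)).astype(int)  # log10-spaced sample sizes
    err_mean  = sigma / np.sqrt(N)        # expected random error on "Mean"
    err_sigma = sigma / np.sqrt(2 * N)    # expected random error on "Sigma"
    for n, em, es in zip(N, err_mean, err_sigma):
        print(f"N = {n:5d}   err(Mean) = {em:.4f}   err(Sigma) = {es:.4f}")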
Conclusion
The Log Likelihood switch should be used in all cases; it should
probably be the default.
Even so, at low event statistics one has to worry about the systematic
error introduced by the fitting procedure itself.
In cases where the LogL fit alone is too slow, a ChiSq fit can be done
first and the output values used as starting values for a LogL fit.
This usually seems to work.
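In the SciPy sketch given earlier, that strategy amounts to feeding the
chi-squared result back in as the starting point of the likelihood fit. The
fragment below reuses gauss, fit_chisq, fit_loglik, rng, mean, sigma and edges
from that sketch, and the 100-event sample is just an illustration:

    # Two-stage fit: a quick chi-squared fit provides the starting values,
    # then the Poisson log-likelihood fit is started from that point.
    counts, _ = np.histogram(rng.normal(mean, sigma, 100), bins=edges)  # low-statistics sample
    p_seed  = fit_chisq(counts, (counts.max(), 10.0, 1.0))   # fast first pass
    p_final = fit_loglik(counts, p_seed)                     # LogL fit seeded by the ChiSq result
    print("ChiSq starting values:", p_seed)
    print("LogL final values    :", p_final)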