Version 1.2.8  

Bayesian Blocks Examples


Gregory & Loredo Code for Timing Data

The examples presented here were created with the example script found on the SITAR downloads page, using the same data file considered in the Bayesian Blocks example. To reproduce the results below, save the example script, and save the data file as ps.evt.gz. (You do not need to uncompress it.) The script is then run as:
   unix%> isis -g
    Solar Abundance Vector set to angr:  Anders E. & Grevesse N. Geochimica et Cosmochimica Acta 53, 197 (1989)
    Cross Section Table set to bcmc:  Balucinska-Church and McCammon, 1998

   Welcome to ISIS Version 1.6.1-39
   Copyright (C) 1998-2011 Massachusetts Institute of Technology

    Send questions to the mailing list.
        For a summary of recent changes, type:  "help changes"

   isis> () = evalfile("");
and that's it!

The output of this script is shown here:

[Figure: 4U 2129+47 Gregory & Loredo fit]

The basic calling sequence for the sitar_glvary function is very straightforward. If one has a list of event times, t, the code can be called as simply as:

   isis> gl = sitar_glvary(t-t[0]);
(Note that I have subtracted off a zero point for the time. Chandra times are in seconds since a fixed reference date, now more than two decades back, and thus carry an offset >3.e8 sec. Retaining this large offset can lead to numerical roundoff issues, so it is best to subtract it off before using the code.) The return value above is a structure variable containing many pieces of information. Chief among these are:

gl.p = 1 indicates that it is virtually certain that this is a varying lightcurve, and not a constant lightcurve.

gl.mpeak = 23 indicates that the most significant partitioning breaks the lightcurve into 23 even pieces, of (gl.tmax-gl.tmin)/23 = 1591.82 sec each. 1591.82 seconds is about the duration of the eclipse, so this is a natural partitioning of the lightcurve.

gl.tlc, gl.rate, and gl.erate contain the lightcurve estimate used to create the plot above.
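The roundoff concern is easy to demonstrate. Below is a short Python sketch (not ISIS; the event time and offset are made up for illustration) showing how much time resolution survives near a Chandra-sized offset of ~3.2e8 sec in double versus single precision:

```python
import numpy as np

# A made-up event time carrying a Chandra-sized offset of ~3.2e8 s.
offset = 3.2e8
t_event = offset + 0.5                   # an event 0.5 s after the offset

# Spacing between adjacent representable values near 3.2e8 s:
ulp64 = np.spacing(np.float64(offset))   # ~6e-8 s: harmless
ulp32 = np.spacing(np.float32(offset))   # 32 s: fatal for timing analysis

# If any step drops to single precision, the 0.5 s of structure is gone:
lost = (np.float32(t_event) == np.float32(offset))   # True

# Subtracting the zero point first keeps the times safe at any precision:
t_safe = t_event - offset                # 0.5, exactly representable
```

Double precision alone would be fine, but subtracting the zero point costs nothing and protects against any single-precision step inside a processing chain.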

Note that the above lightcurve does not statistically go to 0 in the eclipses, even though in reality the lightcurve does go through a total eclipse. This happens for two main reasons, both related to how the lightcurve estimate is obtained from the algorithm.

The lightcurve is a weighted average of all the partitionings, ranging from a single, uniform rate to a heavily subdivided lightcurve. The former partitioning (and in fact all partitionings where a bin covers durations longer than the eclipse) will have flux during the eclipse bin. These binnings are not heavily weighted in the estimated lightcurve, but they are still present.
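The effect of this weighted average can be illustrated with a toy model. The Python sketch below is not the sitar_glvary implementation: the function name is mine, the data are a made-up flat lightcurve with an artificial total eclipse, and the uniform weights merely stand in for the actual posterior weights of each partitioning. It shows why the coarse partitionings leave residual flux in the eclipse:

```python
import numpy as np

def weighted_lightcurve(t, weights, n_grid=200):
    """Average the piecewise-constant rate from each even partitioning
    m = 1..len(weights), weighted by that partitioning's weight.
    (Schematic only -- in Gregory & Loredo the weights are the posterior
    probabilities of the individual partitionings.)"""
    tmin, tmax = t.min(), t.max()
    grid = np.linspace(tmin, tmax, n_grid)       # evaluation times
    rate = np.zeros(n_grid)
    for m, w in enumerate(weights, start=1):
        edges = np.linspace(tmin, tmax, m + 1)
        counts, _ = np.histogram(t, bins=edges)
        width = (tmax - tmin) / m
        # index of the bin containing each grid time
        idx = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, m - 1)
        rate += w * counts[idx] / width
    return grid, rate / np.sum(weights)

# Toy data: a uniform event rate with a total eclipse from t=400 to t=600.
t = np.concatenate([np.arange(0.0, 400.0, 0.5), np.arange(600.0, 1000.0, 0.5)])
weights = np.ones(25)                            # stand-in for posterior weights
grid, rate = weighted_lightcurve(t, weights)

# The averaged rate dips during the eclipse but does not reach zero: the
# coarse partitionings (small m) spread flux into the eclipse interval.
in_eclipse = rate[(grid > 450) & (grid < 550)].mean()
out_eclipse = rate[grid < 300].mean()
```

Even with a perfectly total eclipse in the toy data, the averaged estimate retains a nonzero rate inside it, exactly as in the figure above.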

The binning that comes closest to the duration of the eclipse is 23 partitions. However, this binning is still slightly longer than the eclipse duration, and thus includes flux within the eclipse bin. In this particular case, an even binning doesn't completely isolate the eclipse. (The Bayesian Blocks algorithm would in fact do a better job of isolating the eclipse from the rest of the lightcurve.)

Also, in our implementation of the algorithm, we have not included a search and marginalization over phase. The even partitions start at tmin and end at tmax. Allowing for marginalization over starting/stopping points for the breaks would undoubtedly center the partitions more carefully on the eclipses, and leave less residual flux in the "most favored" partitionings.
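To make the phase point concrete, here is a small Python sketch of what such a search might look like. It is not the SITAR code: the function name, the toy data, and the multinomial-likelihood scoring are my own illustration of scanning starting-phase offsets for an even partitioning:

```python
import numpy as np

def best_phase(t, m, n_phase=32):
    """Scan starting-phase offsets for an m-bin even partitioning and
    return the offset whose binned counts are most sharply structured,
    scored by the multinomial log-likelihood term sum(n*log(n)).
    (Sketch only: the SITAR code fixes the partition start at tmin.)"""
    tmin, tmax = t.min(), t.max()
    span = tmax - tmin
    width = span / m
    offsets = np.linspace(0.0, width, n_phase, endpoint=False)

    def score(phi):
        # Bin events into m bins, wrapping the shifted interval cyclically.
        idx = np.floor(((t - tmin - phi) % span) / width).astype(int)
        n = np.bincount(np.clip(idx, 0, m - 1), minlength=m)
        nz = n[n > 0]
        return np.sum(nz * np.log(nz))

    return max(offsets, key=score)

# Toy data: uniform rate with a total eclipse from t=450 to t=650.
# With m=5 (bin width 200), an offset of 50 lets one bin exactly cover
# the eclipse, so the scan should prefer it over the default offset of 0.
t = np.concatenate([np.arange(0.0, 450.0, 0.5), np.arange(650.0, 1000.5, 0.5)])
phi = best_phase(t, m=5)
```

When the eclipse is not aligned with tmin, the fixed-start partitioning splits it across two bins; scanning the phase recovers an alignment that isolates it in one, which is the sort of improvement marginalizing over phase would buy.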

The overall character of the lightcurve, however, is well-captured. Although the Gregory & Loredo algorithm has not fully captured the depth of the eclipse, it has done a good job at capturing its width, and has done a better job than the Bayesian Blocks algorithm of capturing the out-of-eclipse quasi-sinusoidal modulation of the lightcurve.

This page was last updated Oct 21, 2020 by Michael Nowak. To comment on it or the material presented here, send email to the author.