Bayesian Blocks Functions
Based upon the algorithms and MATLAB routines of Jeffrey Scargle, these
routines take an event list (i.e., an ordered list of the times of
detected events) and create a piecewise-constant rate representation
of the lightcurve. That is, the lightcurve is represented as a series
of contiguous "blocks" within which the event rate is modeled as a
constant. The locations of the "change points" (i.e., the boundaries
between the blocks) and the rates within the blocks are determined via
a Bayesian analysis (assuming Poisson statistics). A user-selectable
parameter assigns the prior probability for the number of blocks, which
is roughly equivalent to assigning a significance level threshold for
each detected block. (The simple `frequentist' interpretation of this
parameter is best used as a guide; Monte Carlo simulations, for example,
can be used to gauge the true significance level.)
A description of the algorithm can be found in:
"Studies in Astronomical Time Series Analysis. VI.
Optimum Partition of the Interval: Bayesian Blocks,
Histograms, and Triggers", by J.D. Scargle et al., 2003 (in preparation)
Two routines must be called by the user. The event times are read by
sitar_make_data_cells, which creates the input for
sitar_global_optimum (either as a series of event "cells", or as bins
for binned mode). The latter outputs the locations of the
statistically significant "change points" as well as the rates,
counts, and associated errors within each block. These routines are described below.
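A minimal end-to-end sketch of the two-call workflow, assuming the SITAR
routines are already loaded and that tt holds the event times; the choices
of type=1, max_delt, frame, and ncp_prior below are purely illustrative:

   isis> tstart = min(tt);  tstop = max(tt);
   isis> cell = sitar_make_data_cells(tt, 1, 0., 1., tstart, tstop);
   isis> results = sitar_global_optimum(cell, 6., 1);
   isis> print(results.rate);        % rate within each block
   isis> print(results.lo_t);        % lower block boundaries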
- sitar_make_data_cells ( tt, type, max_delt, frame, tstart, tstop )
Assign events to "cells", each described by its population (number of events) and its size (duration).
Run as: isis> cell = sitar_make_data_cells(tt,type,max_delt,frame,tstart,tstop);
Inputs:
- tt : The times of the detected events.
- type : Describes the cells. For "event" cells (type=1 or 2), the
  size (i.e., normalized duration) of a cell is the distance from the
  midpoint between the current and previous events to the midpoint
  between the current and next events (type=1), or from the current
  event to just before the subsequent event (type=2). (Note: all cell
  populations are greater than or equal to one, and the total number
  of cells is less than or equal to the total number of events.)
  Alternatively, events can be assigned to bins of uniform size
  (type=3). (In this case, cells can have zero population, and the
  total number of cells can greatly exceed the total number of
  events.)
- max_delt : Events closer together than max_delt are grouped together
  in a single cell. (Note: this grouping is applied to consecutive
  events, so the resulting cell duration can be greater than
  max_delt.)
- frame : The output cell sizes are in units of frame; for type=3,
  frame is the bin size.
- tstart, tstop : The start and stop times for creating the output
  cells.
Outputs:
- cell : A structure with fields:
  cell.pops - an array of the number of events per cell;
  cell.size - the size (i.e., duration) of a cell in units of frame,
  or, in binned mode (type=3), the mean counts per bin (derived from
  the total lightcurve);
  cell.lo_t, cell.hi_t - the start and stop times of the cell;
  cell.dtcor - an array, currently set to unity, for storing "dead
  time" (or "efficiency") corrections of the cells. It must be
  modified externally to this routine, and is currently used only in
  sitar_global_optimum to deadtime correct the output rates of the
  resulting blocks.
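For instance, such a correction can be applied by overwriting cell.dtcor
before calling sitar_global_optimum; the uniform 98% efficiency below is
purely an illustrative assumption:

   isis> cell = sitar_make_data_cells(tt, 1, 0., 1., tstart, tstop);
   isis> cell.dtcor[*] = 0.98;      % assumed uniform 98% efficiency for all cells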
- sitar_global_optimum( cell, ncp_prior, ev_type
[; first, mean_prior, rate_prior, rate_low=#, rate_high=#, alpha=#, beta=#] )
Find the maximum likelihood location of the change points.
Run as: isis> results = sitar_global_optimum(cell,ncp_prior,type);
Inputs:
- cell : A structure with the fields cell.pops, cell.size, cell.lo_t,
  cell.hi_t, and cell.dtcor (see sitar_make_data_cells).
- ncp_prior : Parameter for the prior on the number of 'blocks'.
- type : Identical to type from sitar_make_data_cells.
Optional Qualifiers:
- first : If set, the code runs in 'trigger' mode and returns upon the
  first sign of a change, so long as it occurs more than 'first' cells
  from the beginning of the lightcurve.

Event mode priors (type=1 or 2):
- The default prior is that p_1, the probability of one or more
  photons in a frame, is uniformly distributed from 0 to 1.
- alpha : If >= 0, the prior on p_1 is (1+alpha)*(1-p_1)^alpha.
- mean_prior : If set, the prior on the *rate* is exp(-Lambda/Lambda_0),
  where Lambda_0 is the mean rate from the entire observation.
  Supersedes any choice of alpha.
- rate_prior : If set, the prior is chosen to be uniform between rates
  ranging from rate_low to rate_high, with defaults of
  rate_low = (mean rate)/3 and rate_high = 3*(mean rate). Supersedes
  any choice of alpha or mean_prior.

Binned mode priors and posteriors (type=3):
- alpha, beta : The prior on the bin rate goes (roughly) as
  Lambda^(alpha-1) * exp(-beta*Lambda), where Lambda is the rate; the
  default is alpha=beta=1.
- max_like : Use the maximum likelihood, log(Pmax) = N log(N/T) - N,
  as the posterior test function. Overrides any choice of alpha and
  beta for binned mode.
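For example, hedged sketches of calls using these qualifiers (all
numerical values are illustrative assumptions):

   isis> results = sitar_global_optimum(cell, 6., 1; mean_prior);
   isis> results = sitar_global_optimum(cell, 6., 1; rate_prior, rate_low=0.1, rate_high=10.);
   isis> results = sitar_global_optimum(cell, 6., 3; alpha=1., beta=2.);   % binned mode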
Outputs:
- results.cpt : Array of change point locations for the maximum
  likelihood solution (indices are specific to the cell input).
- results.last, .best : (Diagnostic purposes only) Arrays of the
  location of the last change point and the associated maximum log
  probability.
- results.cts : Counts in each block.
- results.rate : Rate in each block.
- results.err : Poisson error for the block rate.
- results.lo_t, .hi_t : Times of the lower (>=) and upper (<) block
  boundaries.
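As noted above, Monte Carlo simulations can be used to gauge the true
significance level associated with a given ncp_prior. A minimal sketch of
such a calibration is given below, assuming ISIS's urand intrinsic for
uniform random deviates; the number of trials, number of events, time span,
and ncp_prior value are illustrative assumptions:

   variable ntrial = 1000, nevt = 1000, tmax = 1.e4, ncp_prior = 6., nfalse = 0;
   variable i, tt, cell, res;
   for (i = 0; i < ntrial; i++)
   {
      % Simulate a constant-rate lightcurve: nevt events uniformly spread over [0,tmax]
      tt = tmax * urand(nevt);
      tt = tt[array_sort(tt)];
      cell = sitar_make_data_cells(tt, 1, 0., 1., 0., tmax);
      res = sitar_global_optimum(cell, ncp_prior, 1);
      % For constant-rate data, any solution with more than one block is a false detection
      if (length(res.rate) > 1) nfalse++;
   }
   vmessage("False detection fraction at ncp_prior=%g : %g",
            ncp_prior, nfalse/(1.*ntrial));

Repeating this for a range of ncp_prior values gives a rough mapping between
the prior parameter and the false-positive rate for the observation length
and count rate of interest.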