idq.batch

idq.batch.batch(gps_start, gps_end, config_path, workflow='block', initial_lookback=3600, num_bins=1, num_segs_per_bin=1, skip_timeseries=False, skip_report=False, verbose=False, quiet=False, block=False, causal=False)[source]

launch and manage all batch processes. This should handle round-robin training/evaluation automatically, so users really just have to “point and click”. A minimal usage sketch appears below.
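
A minimal sketch, assuming iDQ is installed and that config.yaml points at a valid iDQ configuration; the GPS times are illustrative and the interpretation of num_bins and block follows their keyword names:

>>> import idq.batch
>>> # process one week of data and wait for the workflow to finish
>>> idq.batch.batch(
...     1264000000,          # gps_start (illustrative)
...     1264604800,          # gps_end, one week later
...     'config.yaml',       # path to the configuration file
...     workflow='block',
...     num_bins=4,          # round-robin bins for training/evaluation
...     block=True,          # wait for all batch processes to complete
... )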

idq.batch.calibrate(gps_start, gps_end, config_path, data_id=None, preferred=False, use_training_set=True)[source]

calibrate the classifier predictions based on historical data
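
For example, calibrating over the same span might look like the following; the times and config path are illustrative, and the comments read the keyword arguments at face value:

>>> import idq.batch
>>> idq.batch.calibrate(
...     1264000000,
...     1264604800,
...     'config.yaml',
...     use_training_set=True,   # default: calibrate using the training set
...     preferred=False,
... )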

idq.batch.condor_batch(gps_start, gps_end, batchdir, nickname)[source]

run the batch workflow under HTCondor
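
A sketch of launching the same span through the HTCondor path; the batchdir and nickname values are illustrative and their interpretation follows the parameter names:

>>> import idq.batch
>>> idq.batch.condor_batch(
...     1264000000,
...     1264604800,
...     '/path/to/batchdir',   # directory in which the batch workflow runs
...     'example-batch',       # nickname used to label this workflow
... )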

idq.batch.evaluate(gps_start, gps_end, config_path, segments=None, exclude=None, data_id=None, preferred=False)[source]

evaluate the data using classifiers
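
A sketch of evaluating a single day restricted to a known-good segment, assuming (as an assumption, not confirmed by this page) that segments are passed as a ligo.segments.segmentlist; the times and data_id are illustrative:

>>> import idq.batch
>>> from ligo.segments import segment, segmentlist
>>> idq.batch.evaluate(
...     1264000000,
...     1264086400,
...     'config.yaml',
...     segments=segmentlist([segment(1264000000, 1264086400)]),
...     data_id='example-evaluation',   # illustrative identifier
... )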

idq.batch.find_record_times(dataloader, time, random_rate, samples_config, logger, data_id=None, timesreporter=None, segmentreporter=None)[source]

find truth labels (target and random times) for samples, optionally record to disk

idq.batch.load_record_dataloader(start, end, segments, config, logger, data_id=None, reporter=None, group=None)[source]

load classifier data and optionally write to disk

idq.batch.load_record_segments(start, end, config, logger, data_id=None, segments=None, exclude=None, reporter=None)[source]

load in segments and optionally write to disk

idq.batch.report(gps_start, gps_end, config_path, reportdir=None, zoom_start=None, zoom_end=None, segments=None, **kwargs)[source]

generate a report of the classifier predictions in this period. We allow reportdir to be specified directly here in case you want to make a summary of a job running in a directory you don’t own.
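
For instance, to summarize a job running in a directory you don’t own, with an optional zoom window (paths and times are illustrative, and zoom_start/zoom_end are interpreted from their names):

>>> import idq.batch
>>> idq.batch.report(
...     1264000000,
...     1264604800,
...     'config.yaml',
...     reportdir='/path/to/existing/job',   # report on a directory you don't own
...     zoom_start=1264300000,               # optional zoomed-in window
...     zoom_end=1264310000,
... )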

idq.batch.timeseries(gps_start, gps_end, config_path, segments=None, exclude=None, data_id=None)[source]

evaluate the data using classifiers and generate timeseries of their predictions
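
A sketch of producing timeseries for one day while excluding a short stretch, again assuming ligo.segments is used for the exclude list; all times are illustrative:

>>> import idq.batch
>>> from ligo.segments import segment, segmentlist
>>> idq.batch.timeseries(
...     1264000000,
...     1264086400,
...     'config.yaml',
...     exclude=segmentlist([segment(1264040000, 1264042000)]),   # skip this stretch
... )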

idq.batch.train(gps_start, gps_end, config_path, segments=None, exclude=None, data_id=None, preferred=False)[source]

run the training process
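
A sketch of training on the week preceding the evaluation span used above; reading preferred=True as marking the resulting models as preferred is an assumption based on the keyword name:

>>> import idq.batch
>>> idq.batch.train(
...     1263395200,        # one week before the evaluation span above
...     1264000000,
...     'config.yaml',
...     preferred=True,
... )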