bilby_pipe.gracedb
==================

.. py:module:: bilby_pipe.gracedb

.. autoapi-nested-parse::

   Tool for running online bilby PE using GraceDB events

   Much of the functionality of these utilities assumes the user is running on
   the CIT cluster, e.g. that the ROQ and calibration directories are in their
   usual places.

   .. !! processed by numpydoc !!


Attributes
----------

.. autoapisummary::

   bilby_pipe.gracedb.CHANNEL_DICTS


Functions
---------

.. autoapisummary::

   bilby_pipe.gracedb.read_from_gracedb
   bilby_pipe.gracedb.download_bayestar_skymap
   bilby_pipe.gracedb.extract_psds_from_xml
   bilby_pipe.gracedb.read_from_json
   bilby_pipe.gracedb.calibration_lookup_o4
   bilby_pipe.gracedb.calibration_lookup_o3
   bilby_pipe.gracedb.calibration_lookup
   bilby_pipe.gracedb.calibration_dict_lookup
   bilby_pipe.gracedb.read_candidate
   bilby_pipe.gracedb._read_cbc_candidate
   bilby_pipe.gracedb._read_burst_candidate
   bilby_pipe.gracedb._read_distance_upper_bound_from_fits
   bilby_pipe.gracedb._get_cbc_likelihood_args
   bilby_pipe.gracedb._choose_phenompv2_bbh_roq
   bilby_pipe.gracedb._choose_bns_roq
   bilby_pipe.gracedb._choose_low_q_pv2_roq
   bilby_pipe.gracedb._choose_xphm_roq
   bilby_pipe.gracedb._get_cbc_likelihood_args_from_json
   bilby_pipe.gracedb._get_default_likelihood_args
   bilby_pipe.gracedb.attempt_gwpy_get
   bilby_pipe.gracedb.copy_and_save_data
   bilby_pipe.gracedb.prepare_run_configurations
   bilby_pipe.gracedb.create_config_file
   bilby_pipe.gracedb._get_default_duration
   bilby_pipe.gracedb._get_distance_lookup
   bilby_pipe.gracedb.generate_cbc_prior_from_template
   bilby_pipe.gracedb.generate_burst_prior_from_template
   bilby_pipe.gracedb.read_and_concat_data_from_kafka
   bilby_pipe.gracedb.create_parser
   bilby_pipe.gracedb.main


Module Contents
---------------

.. py:data:: CHANNEL_DICTS
.. py:function:: read_from_gracedb(gracedb, gracedb_url, outdir)

   Read GraceDB events from GraceDB

   :Parameters:

       **gracedb: str**
           GraceDB id of event

       **gracedb_url: str**
           Service url for GraceDB events

           GraceDB 'https://gracedb.ligo.org/api/' (default)
           GraceDB-playground 'https://gracedb-playground.ligo.org/api/'

       **outdir: str**
           Output directory

   :Returns:

       event:
           Contains contents of GraceDB event from GraceDB, json format

   .. !! processed by numpydoc !!

.. py:function:: download_bayestar_skymap(gracedb, gracedb_url, outdir)

   Download the bayestar skymap from GraceDB

   :Parameters:

       **gracedb: str**
           GraceDB id of event

       **gracedb_url: str**
           Service url for GraceDB events

           GraceDB 'https://gracedb.ligo.org/api/' (default)
           GraceDB-playground 'https://gracedb-playground.ligo.org/api/'

       **outdir: str**
           Output directory

   :Returns:

       skymap_file: str
           Name of downloaded fits file

   .. !! processed by numpydoc !!

.. py:function:: extract_psds_from_xml(coinc_file, ifos, outdir='.')

.. py:function:: read_from_json(json_file)

   Read GraceDB events from a json file

   :Parameters:

       **json_file: str**
           Filename of the json file to read

   :Returns:

       candidate: dict
           Contains contents of GraceDB event from json, json format

   .. !! processed by numpydoc !!

.. py:function:: calibration_lookup_o4(trigger_time, detector)

   Lookup function for the relevant calibration file for O4 data

   Assumes that it is running on CIT, where the calibration files for the
   LIGO instruments are stored under /home/cal/public_html/archive, and that
   a calibration envelope uniform in magnitude, time, and phase is used for
   Virgo.

   :Parameters:

       **trigger_time: float**
           The trigger time of interest

       **detector: str [H1, L1, V1]**
           Detector string

   :Returns:

       filepath: str
           The path to the relevant calibration envelope file. If no
           calibration file can be determined, None is returned.

   .. rubric:: Notes

   We search the available estimates in reverse chronological order and take
   the closest available estimate prior to the specified trigger time.
   We only look for the v0 calibration uncertainty, which may not be the best
   estimate for offline analyses. The calibration archive sometimes contains
   directories for epochs that don't contain a usable uncertainty; those
   directories are ignored.

   .. !! processed by numpydoc !!

.. py:function:: calibration_lookup_o3(trigger_time, detector)

   Lookup function for the relevant calibration file for O3 data

   Assumes that it is running on CIT, where the calibration files are stored
   under /home/cbc/pe/O3/calibrationenvelopes

   :Parameters:

       **trigger_time: float**
           The trigger time of interest

       **detector: str [H1, L1, V1]**
           Detector string

   :Returns:

       filepath: str
           The path to the relevant calibration envelope file. If no
           calibration file can be determined, None is returned.

   .. !! processed by numpydoc !!

.. py:function:: calibration_lookup(trigger_time, detector)

   Lookup function for the relevant calibration file. This is a wrapper
   around the O3- and O4-specific functions.

   :Parameters:

       **trigger_time: float**
           The trigger time of interest

       **detector: str [H1, L1, V1]**
           Detector string

   :Returns:

       filepath: str
           The path to the relevant calibration envelope file. If no
           calibration file can be determined, None is returned.

   .. !! processed by numpydoc !!

.. py:function:: calibration_dict_lookup(trigger_time, detectors)

   Dictionary lookup function for the relevant calibration files

   :Parameters:

       **trigger_time: float**
           The trigger time of interest

       **detectors: list**
           List of detector strings

   :Returns:

       calibration_model, calibration_dict: str, dict
           Calibration model string and dictionary of paths to the relevant
           calibration envelope files.

   .. !! processed by numpydoc !!

.. py:function:: read_candidate(candidate)

   Read a gracedb candidate json dictionary

   .. !! processed by numpydoc !!

.. py:function:: _read_cbc_candidate(candidate)

.. py:function:: _read_burst_candidate(candidate)
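The epoch-selection strategy described in the notes for `calibration_lookup_o4` (scan estimates in reverse chronological order, take the closest one prior to the trigger) can be sketched as below. The epoch times are hypothetical GPS values, not the actual CIT archive contents:

```python
def pick_calibration_epoch(epoch_start_times, trigger_time):
    """Return the start time of the closest epoch at or before trigger_time.

    Scan the candidate epochs in reverse chronological order and take the
    first one that does not post-date the trigger; if none precedes the
    trigger, return None (mirroring the fall-through to no calibration file).
    """
    for start in sorted(epoch_start_times, reverse=True):
        if start <= trigger_time:
            return start
    return None

# Hypothetical epoch start times in GPS seconds
epochs = [1368000000, 1370000000, 1375000000]
print(pick_calibration_epoch(epochs, 1372000000))  # -> 1370000000
```

In the real lookup the epoch times come from archive directory names, and epochs without a usable uncertainty file are skipped before this search.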
.. py:function:: _read_distance_upper_bound_from_fits(filename, level=0.95)

   Read a skymap fits file and return the credible upper bound of distance.
   If ligo.skymap is not installed, this returns None.

   :Parameters:

       **filename: str**
           ..

       **level: float**
           ..

   :Returns:

       upper_bound: float
           ..

   .. !! processed by numpydoc !!

.. py:function:: _get_cbc_likelihood_args(mode, trigger_values)

   Return cbc likelihood arguments and quantities characterizing the likelihood

   :Parameters:

       **mode: str**
           ..

       **trigger_values: dict**
           ..

   :Returns:

       likelihood_args: dict
           ..

       likelihood_parameter_bounds: dict
           bounds of parameter space where likelihood is expected to be accurate

       minimum_frequency: float
           minimum frequency of likelihood integration

       maximum_frequency: float
           maximum frequency of likelihood integration

       duration: float
           inverse of frequency interval of likelihood integration

   .. !! processed by numpydoc !!

.. py:function:: _choose_phenompv2_bbh_roq(chirp_mass, ignore_no_params=False)

   Choose an appropriate PhenomPv2 ROQ folder, and return likelihood
   arguments and quantities characterizing the likelihood. The bases were
   developed in the work of arXiv:1604.08253. For a high-mass trigger with a
   chirp mass above 35 solar masses, this returns arguments with the standard
   likelihood `GravitationalWaveTransient`, as the analysis is computationally
   cheap anyway.

   :Parameters:

       **chirp_mass: float**
           ..

       **ignore_no_params: bool**
           If True, this ignores the FileNotFoundError raised when the roq
           params file is not found, which is useful for testing this command
           outside the CIT cluster.

   :Returns:

       likelihood_args: dict
           ..

       likelihood_parameter_bounds: dict
           bounds of parameter space where likelihood is expected to be accurate

       minimum_frequency: float
           minimum frequency of likelihood integration

       maximum_frequency: float
           maximum frequency of likelihood integration

       duration: float
           inverse of frequency interval of likelihood integration

   .. !! processed by numpydoc !!
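The shape of the chirp-mass-based selection in `_choose_phenompv2_bbh_roq` can be sketched as follows. The 35-solar-mass switch to the standard likelihood is stated in the docstring above; the lower chirp-mass ranges and basis labels here are illustrative placeholders, not the real basis boundaries:

```python
# Illustrative chirp-mass ranges and basis labels; the real boundaries and
# ROQ folder names used by bilby_pipe differ.
ROQ_RANGES = [
    ((0.9, 1.7), "basis_A"),
    ((1.7, 8.0), "basis_B"),
    ((8.0, 35.0), "basis_C"),
]

def choose_likelihood(chirp_mass):
    if chirp_mass > 35.0:
        # As stated above: high-mass analyses are computationally cheap,
        # so the standard likelihood is used instead of an ROQ basis
        return {"likelihood-type": "GravitationalWaveTransient"}
    for (low, high), basis in ROQ_RANGES:
        if low <= chirp_mass < high:
            return {
                "likelihood-type": "ROQGravitationalWaveTransient",
                "roq-basis": basis,
            }
    raise ValueError(f"no ROQ basis covers chirp mass {chirp_mass}")
```

The real function additionally returns the parameter bounds, frequency range, and duration tied to the chosen basis.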
.. py:function:: _choose_bns_roq(chirp_mass, mode)

   Choose an appropriate BNS-mass ROQ basis file, and return likelihood
   arguments and quantities characterizing the likelihood. The review
   information for those bases is found at
   https://git.ligo.org/pe/O4/review_bns_roq/-/wikis.

   :Parameters:

       **chirp_mass: float**
           ..

       **mode: str**
           Allowed options are "lowspin_phenomd_narrowmc_roq",
           "lowspin_phenomd_broadmc_roq", "phenompv2_bns_roq", and
           "phenompv2nrtidalv2_roq".

   :Returns:

       likelihood_args: dict
           ..

       likelihood_parameter_bounds: dict
           bounds of parameter space where likelihood is expected to be accurate

       minimum_frequency: float
           minimum frequency of likelihood integration

       maximum_frequency: float
           maximum frequency of likelihood integration

       duration: float
           inverse of frequency interval of likelihood integration

   .. !! processed by numpydoc !!

.. py:function:: _choose_low_q_pv2_roq(chirp_mass)

   Choose an appropriate low-mass-ratio IMRPhenomPv2 ROQ basis file and
   return likelihood arguments and quantities characterizing the likelihood.

   :Parameters:

       **chirp_mass: float**
           ..

   :Returns:

       likelihood_args: dict
           ..

       likelihood_parameter_bounds: dict
           bounds of parameter space where likelihood is expected to be accurate

       minimum_frequency: float
           minimum frequency of likelihood integration

       maximum_frequency: float
           maximum frequency of likelihood integration

       duration: float
           inverse of frequency interval of likelihood integration

   .. !! processed by numpydoc !!

.. py:function:: _choose_xphm_roq(chirp_mass)

   Choose an appropriate IMRPhenomXPHM ROQ basis file and return likelihood
   arguments and quantities characterizing the likelihood.

   :Parameters:

       **chirp_mass: float**
           ..

   :Returns:

       likelihood_args: dict
           ..
       likelihood_parameter_bounds: dict
           bounds of parameter space where likelihood is expected to be accurate

       minimum_frequency: float
           minimum frequency of likelihood integration

       maximum_frequency: float
           maximum frequency of likelihood integration

       duration: float
           inverse of frequency interval of likelihood integration

   .. !! processed by numpydoc !!

.. py:function:: _get_cbc_likelihood_args_from_json(filename, trigger_values)

   Load an input JSON file containing likelihood settings and determine
   appropriate likelihood arguments and parameter bounds depending on the
   input trigger values.

   The json file is expected to contain `likelihood_args`,
   `likelihood_parameter_bounds`, and/or `trigger_dependent`. The first two
   contain default arguments and parameter bounds respectively. The last item
   contains trigger-dependent settings used to update the default settings.
   It contains `range`, `likelihood_args`, and/or
   `likelihood_parameter_bounds`. `range` contains a dictionary of
   trigger-parameter ranges, whose keys are parameter names (`chirp_mass`,
   `mass_ratio`, `spin_1z`, and/or `spin_2z`) and whose values are lists of
   their ranges. `likelihood_args` contains lists of arguments, one of which
   is chosen depending on the trigger values and used to update the default
   likelihood arguments. `likelihood_parameter_bounds` contains lists of
   parameter bounds to update their default.

   :Parameters:

       **filename: str**
           ..

       **trigger_values: dict**
           ..

   :Returns:

       likelihood_args: dict
           ..

       likelihood_parameter_bounds: dict
           bounds of parameter space where likelihood is expected to be accurate

       minimum_frequency: float
           minimum frequency of likelihood integration

       maximum_frequency: float
           maximum frequency of likelihood integration

       duration: float
           inverse of frequency interval of likelihood integration

   .. !! processed by numpydoc !!

.. py:function:: _get_default_likelihood_args(trigger_values)

.. py:function:: attempt_gwpy_get(channel, start_time, end_time, n_attempts=5)
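A minimal sketch of how a settings file matching the `_get_cbc_likelihood_args_from_json` schema above might be resolved. The field values (approximant, folder paths, mass ranges) are hypothetical, and the actual bilby_pipe implementation may differ in detail:

```python
# Hypothetical settings matching the schema described above
settings = {
    "likelihood_args": {"waveform_approximant": "IMRPhenomPv2"},
    "likelihood_parameter_bounds": {"chirp_mass_min": 10.0, "chirp_mass_max": 45.0},
    "trigger_dependent": {
        "range": {"chirp_mass": [[10.0, 20.0], [20.0, 45.0]]},
        "likelihood_args": [
            {"roq_folder": "/path/low_mc"},
            {"roq_folder": "/path/high_mc"},
        ],
    },
}

def resolve_settings(settings, trigger_values):
    """Pick the trigger-dependent entry whose ranges contain the trigger
    values, and use it to update the default likelihood arguments/bounds."""
    args = dict(settings.get("likelihood_args", {}))
    bounds = dict(settings.get("likelihood_parameter_bounds", {}))
    trigger_dependent = settings.get("trigger_dependent", {})
    ranges = trigger_dependent.get("range", {})
    if not ranges:
        return args, bounds
    # Each parameter's range list has one interval per alternative setting
    n_alternatives = len(next(iter(ranges.values())))
    for idx in range(n_alternatives):
        if all(
            intervals[idx][0] <= trigger_values[key] <= intervals[idx][1]
            for key, intervals in ranges.items()
        ):
            if "likelihood_args" in trigger_dependent:
                args.update(trigger_dependent["likelihood_args"][idx])
            if "likelihood_parameter_bounds" in trigger_dependent:
                bounds.update(trigger_dependent["likelihood_parameter_bounds"][idx])
            break
    return args, bounds

args, bounds = resolve_settings(settings, {"chirp_mass": 15.0})
print(args["roq_folder"])  # -> /path/low_mc
```

The defaults are always kept as a base and only updated, so an entry need not repeat every argument.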
.. py:function:: copy_and_save_data(ifos, start_time, end_time, channel_dict, outdir, gracedbid, query_kafka=True, n_attempts=5, llhoft_glob='/dev/shm/kafka/{detector}/*.gwf')

   Attempt to read the strain data from internal servers and save frame
   files to the run directory.

   If `query_kafka` is True, attempt to fetch the data from
   `/dev/shm/kafka/` (the preferred method for low-latency/online analyses).
   If `query_kafka` is False or the data cannot be found in
   `/dev/shm/kafka/`, this function will attempt to get the data from
   TimeSeries.get(), called in a loop to allow for multiple attempts. If
   data cannot be found for all the ifos, this returns None and data reading
   will be attempted by the bilby_pipe data_generation stage.

   :Parameters:

       **ifos: list**
           List of ifos for this trigger

       **start_time: float**
           Safe start time for the data segment

       **end_time: float**
           Safe end time for the data segment

       **channel_dict: dict**
           Dictionary of channel names

       **outdir: str**
           Directory to save frame files

       **gracedbid: str**
           GraceDB id of event

       **query_kafka: bool**
           Whether to attempt to copy frame files from `/dev/shm/kafka/`

       **n_attempts: int**
           Number of attempts to call TimeSeries.get() before failing to
           obtain data

       **llhoft_glob: str**
           The per-detector string to glob for low-latency frames

   :Returns:

       data_dict: dict
           Dictionary with {ifo: path_to_copied_frame_file}. None if data
           could not be obtained for all ifos.

   .. !! processed by numpydoc !!
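The retry-and-gather behaviour described for `attempt_gwpy_get` and `copy_and_save_data` can be sketched generically. `fetch` here stands in for a closure around gwpy's `TimeSeries.get`; the retry interval and structure are illustrative, not the module's exact implementation:

```python
import time

def attempt_get(fetch, n_attempts=5, wait=1.0):
    """Call `fetch` up to n_attempts times, pausing between tries;
    return None if every attempt raises."""
    for attempt in range(n_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == n_attempts - 1:
                return None
            time.sleep(wait)

def fetch_all(ifos, fetchers, n_attempts=5, wait=1.0):
    """All-or-nothing gather: if any ifo's data cannot be obtained,
    return None so the bilby_pipe data_generation stage can retry
    data reading itself."""
    data = {ifo: attempt_get(fetchers[ifo], n_attempts, wait) for ifo in ifos}
    return None if any(value is None for value in data.values()) else data
```

The all-or-nothing return mirrors the documented contract: a partial data dictionary is never handed downstream.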
.. py:function:: prepare_run_configurations(candidate, gracedb, outdir, channel_dict, sampler_kwargs, webdir, search_type='cbc', cbc_likelihood_mode='phenompv2_bbh_roq', settings=None, psd_cut=0.95, query_kafka=True, llhoft_glob='/dev/shm/kafka/{detector}/*.gwf', recommended_distance_max=None)

   Creates an ini file from defaults and the candidate contents

   :Parameters:

       **candidate:**
           Contains contents of GraceDB event

       **gracedb: str**
           GraceDB id of event

       **outdir: str**
           Output directory where the ini file and all output is written

       **channel_dict: dict**
           Dictionary of channel names

       **sampler_kwargs: str**
           Set of sampler arguments, or option for a set of sampler arguments

       **webdir: str**
           Directory to store summary pages

       **search_type: str**
           What kind of search identified the trigger; options are "cbc" and
           "burst"

       **cbc_likelihood_mode: str**
           Built-in CBC likelihood mode or path to a JSON file containing
           likelihood settings. The built-in settings include
           'phenompv2_bbh_roq', 'phenompv2_bns_roq', 'phenompv2nrtidalv2_roq',
           'lowspin_phenomd_narrowmc_roq', 'lowspin_phenomd_broadmc_roq', and
           'test'.

       **settings: str**
           JSON filename containing settings to override the defaults

       **psd_cut: float**
           Fractional maximum frequency cutoff relative to the maximum
           frequency of the pipeline psd

       **query_kafka: bool**
           Whether to first attempt to query the kafka directory for data
           before attempting a call to gwpy TimeSeries.get()

       **llhoft_glob: str**
           The per-detector string to glob for low-latency frames

       **recommended_distance_max: float**
           Recommended prior maximum of luminosity distance in units of Mpc.
           If it is None, the maximum falls back to the default value.

   :Returns:

       filename: str
           Generated ini filename

   .. !! processed by numpydoc !!

.. py:function:: create_config_file(candidate, gracedb, outdir, channel_dict, sampler_kwargs, webdir, search_type='cbc', cbc_likelihood_mode='phenompv2_bbh_roq', settings=None, psd_cut=0.95, query_kafka=True, llhoft_glob='/dev/shm/kafka/{detector}/*.gwf')
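As a rough sketch of the ini-writing step performed by `prepare_run_configurations`/`create_config_file`: bilby_pipe configuration files are flat `key=value` files. The option subset and helper name below are illustrative only; the real file carries many more settings (prior file, likelihood arguments, calibration envelopes, and so on):

```python
import os

def write_minimal_ini(outdir, gracedb_id, trigger_time, channel_dict, webdir):
    # Illustrative subset of bilby_pipe-style options, keyed by the
    # candidate contents; not the exact set written by this module
    options = {
        "label": gracedb_id,
        "outdir": outdir,
        "trigger-time": trigger_time,
        "detectors": list(channel_dict),
        "channel-dict": channel_dict,
        "webdir": webdir,
    }
    path = os.path.join(outdir, f"{gracedb_id}.ini")
    with open(path, "w") as config_file:
        for key, value in options.items():
            config_file.write(f"{key}={value}\n")
    return path
```

The returned path plays the role of the `filename` return value documented above.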
.. py:function:: _get_default_duration(chirp_mass)

   Return the default duration based on chirp mass

   :Parameters:

       **chirp_mass: float**
           ..

   :Returns:

       duration: float
           ..

   .. !! processed by numpydoc !!

.. py:function:: _get_distance_lookup(chirp_mass, phase_marginalization=True)

   Return appropriate distance bounds and lookup table

   :Parameters:

       **chirp_mass: float**
           ..

       **phase_marginalization: bool (optional, default is True)**
           ..

   :Returns:

       distance_bounds: tuple
           ..

       lookup_table: str
           ..

   .. !! processed by numpydoc !!

.. py:function:: generate_cbc_prior_from_template(chirp_mass, likelihood_parameter_bounds, outdir, fast_test=False, phase_marginalization=True, recommended_distance_max=None)

   Generate a cbc prior file from a template and write it to file. This
   returns the paths to the prior file and the corresponding distance
   look-up table.

   :Parameters:

       **chirp_mass: float**
           ..

       **likelihood_parameter_bounds: dict**
           ..

       **outdir: str**
           ..

       **fast_test: bool (optional, default is False)**
           ..

       **phase_marginalization: bool (optional, default is True)**
           ..

       **recommended_distance_max: float (default: None)**
           ..

   :Returns:

       prior_file: str
           ..

       lookup_table: str
           ..

   .. !! processed by numpydoc !!

.. py:function:: generate_burst_prior_from_template(minimum_frequency, maximum_frequency, outdir, template=None)

   Generate a prior file from a template and write it to file

   :Parameters:

       **minimum_frequency: float**
           Minimum frequency for the prior

       **maximum_frequency: float**
           Maximum frequency for the prior

       **outdir: str**
           Path to the outdir (the prior is written to outdir/online.prior)

       **template: str**
           Alternative template file to use; otherwise the
           data_files/roq.prior.template file is used

   .. !! processed by numpydoc !!

.. py:function:: read_and_concat_data_from_kafka(ifo, start, end, channel, llhoft_glob='/dev/shm/kafka/{detector}/*.gwf')

   Query the kafka directory for the gwf files with the desired data. Start
   and end should be set wide enough to include the entire duration.
   This will read in the individual gwf files and concatenate them into a
   single gwpy TimeSeries.

   .. !! processed by numpydoc !!

.. py:function:: create_parser()

.. py:function:: main(args=None, unknown_args=None)
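The file-selection half of `read_and_concat_data_from_kafka` can be sketched without any frame I/O, assuming the conventional low-latency frame naming `<observatory>-<tag>-<gps_start>-<duration>.gwf` (the paths below are illustrative):

```python
import os

def frames_covering(paths, start, end):
    """Select frame files overlapping [start, end).

    The GPS start time and duration are parsed from each file name, so the
    overlap test needs no file I/O. A real implementation would then read
    each selected file (e.g. with gwpy's TimeSeries.read) and append the
    pieces into one contiguous series.
    """
    selected = []
    for path in sorted(paths):
        stem = os.path.basename(path)[: -len(".gwf")]
        gps_start, duration = (int(part) for part in stem.rsplit("-", 2)[-2:])
        if gps_start < end and gps_start + duration > start:
            selected.append(path)
    return selected

files = [
    "/dev/shm/kafka/H1/H-H1_llhoft-1370000100-1.gwf",
    "/dev/shm/kafka/H1/H-H1_llhoft-1370000101-1.gwf",
    "/dev/shm/kafka/H1/H-H1_llhoft-1370000105-1.gwf",
]
print(len(frames_covering(files, 1370000100, 1370000102)))  # -> 2
```

Requesting a window wider than the analysis segment, as the docstring advises, guarantees the concatenated series fully covers the requested duration even when frame boundaries do not align with it.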