Quickstart
Deploy a mock dashboard
In two separate terminals, execute:
scald mock
scald serve -c /path/to/example_config.yml
With both processes running, navigate to localhost:8080 in your browser.
Using the Aggregator API
This example uses the Aggregator API to store data in an InfluxDB database.
First, we instantiate an influx Aggregator with the configuration necessary to connect to the database.
from ligo.scald import io
aggregator = io.influx.Aggregator(hostname='influx.hostname', port=8086, db='your_database')
Next, we register a schema for the measurement we are going to store. This tells the Aggregator what shape of data to expect when data is fed in.
measurement = 'my_meas'
columns = ('column1', 'column2')
column_key = 'column1'
tags = ('tag1', 'tag2')
tag_key = 'tag2'
aggregator.register_schema(measurement, columns, column_key, tags, tag_key)
Finally, we store the data. All ingested data is downsampled to a maximum sampling rate of 1 Hz based on an aggregate quantity (min, median, or max).
### option 1: store data in row form
row_1 = {'time': 1234567890, 'fields': {'column1': 1.2, 'column2': 0.3}}
row_2 = {'time': 1234567890.5, 'fields': {'column1': 0.3, 'column2': 0.4}}
row_3 = {'time': 1234567890, 'fields': {'column1': 2.3, 'column2': 1.1}}
row_4 = {'time': 1234567890.5, 'fields': {'column1': 0.1, 'column2': 2.3}}
rows = {('001', 'andrew'): [row_1, row_2], ('002', 'parce'): [row_3, row_4]}
aggregator.store_rows(measurement, rows)
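Note that row_1 and row_2 above fall within the same second, so 1 Hz downsampling reduces them to a single aggregated point. A minimal sketch of this kind of time-binned aggregation (illustrative only, not scald's internal implementation):

```python
from collections import defaultdict

def downsample_1hz(times, values, aggregate=max):
    """Group samples into 1-second bins, then reduce each bin with `aggregate`."""
    bins = defaultdict(list)
    for t, v in zip(times, values):
        bins[int(t)].append(v)
    return {t: aggregate(vs) for t, vs in sorted(bins.items())}

# two samples within the same second collapse to one aggregated point
times = [1234567890, 1234567890.5]
values = [1.2, 0.3]
print(downsample_1hz(times, values))                 # aggregate by max
print(downsample_1hz(times, values, aggregate=min))  # aggregate by min
```

Here `downsample_1hz` and its defaults are hypothetical names chosen for illustration; scald performs the equivalent reduction internally when data is stored.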
### option 2: store data in column form
cols_1 = {
'time': [1234567890, 1234567890.5],
'fields': {'column1': [1.2, 0.3], 'column2': [0.3, 0.4]}
}
cols_2 = {
'time': [1234567890, 1234567890.5],
'fields': {'column1': [2.3, 0.1], 'column2': [1.1, 2.3]}
}
cols = {('001', 'andrew'): cols_1, ('002', 'parce'): cols_2}
aggregator.store_columns(measurement, cols)
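The two forms carry the same data: cols_1 is simply row_1 and row_2 transposed. A short sketch of that correspondence (the helper `rows_to_columns` is hypothetical, written here only to illustrate the relationship between the two layouts):

```python
def rows_to_columns(row_list):
    """Transpose a list of row dicts ({'time': ..., 'fields': {...}}) into column form."""
    times = [row['time'] for row in row_list]
    field_names = row_list[0]['fields'].keys()
    fields = {name: [row['fields'][name] for row in row_list] for name in field_names}
    return {'time': times, 'fields': fields}

row_1 = {'time': 1234567890, 'fields': {'column1': 1.2, 'column2': 0.3}}
row_2 = {'time': 1234567890.5, 'fields': {'column1': 0.3, 'column2': 0.4}}

print(rows_to_columns([row_1, row_2]))
# {'time': [1234567890, 1234567890.5],
#  'fields': {'column1': [1.2, 0.3], 'column2': [0.3, 0.4]}}
```

Column form avoids repeating the 'fields' dictionary per sample, which is convenient when data already arrives as arrays.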