Time Series Database (TSDB)
Introduction
TSDB is MODE's proprietary time series database. TSDB handles high-frequency time series data, which can be difficult for ordinary databases to handle, and its high-speed query functions enable interactive manipulation of data. Within MODE, TSDB can be added to your project as a Smart Module.
A TSDB Smart Module is required and used in all Sensor Cloud projects to store and process data.
In this section, we will walk through the setup of TSDB in the MODE Developer Console as well as the ways you can send and retrieve data.
You can also reference TSDB's API here.
Features
Two Types of Time Series Data
TSDB supports two types of time series data — Simple Time Series and Time Series Collection.
Simple Time Series is a one-dimensional series of values which are associated with timestamps. Each Simple Time Series is identified by a series ID.

On the other hand, Time Series Collection associates multiple values with each single timestamp.
A collection is a set of key and numerical value pairs.
Like Simple Time Series, each Time Series Collection is identified by a unique identifier, called a collection ID.
Additionally, when you send a Time Series Collection data point, you can attach extra information as Tags.
A Tag is a key-value pair whose value is an arbitrary string.

Time Bucket Resolution Adjustment
When you work with time series data, you may want to retrieve data within various time ranges. The bigger the time range, the more data points there are, and viewing every raw data point over a large time range quickly becomes impractical. It is more convenient to see data points summarized at an appropriate resolution. When you retrieve time series data from the API, you can specify the resolution you want; if you don't, the MODE TSDB Smart Module automatically chooses an appropriate resolution based on the time range of your query.
Monitoring Time Series
You can set up monitoring conditions against specific time series. For example, you can set a monitor on a temperature time series so that alerts are sent when the value exceeds 35 degrees Celsius consecutively over a period of time.
A monitor generates "incidents" via MODE Events. There are two types of incidents: Alert incidents and Recovery incidents.
Incidents are represented as specific types of events:
_mod-tsdb-monitorAlertIncidentInvoked_
for Alert incidents and
_mod-tsdb-monitorRecoveryIncidentInvoked_
for Recovery incidents.
Note: the monitoring feature is currently available only for Simple Time Series.
Setting up TSDB Smart Module
ONE | In the Developer Console, navigate to the Smart Modules page and select +NEW. Click +ADD on the section for TSDB.

TWO | Fill out the "New Time Series DB Module" form, then click Save.

- Module ID: Uniquely identifiable name used for accessing the data in TSDB from the REST API.
- Description: Description for the new Smart Module.
- Subscribe Events: (optional) Add the type of Event that you will send in the time series data format (described below). If you use "*", this TSDB Smart Module is subscribed to all events.
- Bulk Data Label: (optional) Uniquely identifiable name used for uploading the data by MQTT.
You need to specify either the Subscribe Events field or the Bulk Data Label field.
When you specify "Subscribe Events", it is important to send only time series events to the TSDB Smart Module. Events that are not in the time series data format will be rejected by the TSDB Smart Module and will incur unnecessary workload on the system.
ALL DONE! | Your TSDB Smart Module is now set up and ready to receive data.
Collecting Time Series Data
In this section, we will explain two ways of sending time series data, Bulk Data and Events, both of which use MQTT connections with the MODE Platform. Please see How to use MQTT with MODE first if you are not familiar with it.
The difference between the two methods is the efficiency of the data transfer: while Bulk Data is more efficient in transfer size, its data format is more complicated. Sending time series data in the form of Events is simpler and easier to implement.
Also make sure your device is properly provisioned: you need a device that is set up and attached to a home in order to send an event to a TSDB module.
Sending data points as Bulk Data
To send Bulk Data, you need to specify Bulk Data Label in the Developer Console for your TSDB Smart Module.

You should use this Bulk Data Label in the MQTT topic name when publishing your messages.
The topic name should look like /devices/DEVICE_ID/bulkData/BULK_DATA_LABEL.
For example, if the Device ID is 1234 and the Bulk Data Label is "sensor_cloud", the topic is /devices/1234/bulkData/sensor_cloud.
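As a small sketch, the topic string can be assembled like this (the helper name is ours, not a MODE SDK function):

```go
package main

import "fmt"

// bulkDataTopic builds the MQTT topic for publishing Bulk Data,
// following the /devices/DEVICE_ID/bulkData/BULK_DATA_LABEL layout
// described above.
func bulkDataTopic(deviceID int, bulkDataLabel string) string {
	return fmt.Sprintf("/devices/%d/bulkData/%s", deviceID, bulkDataLabel)
}

func main() {
	fmt.Println(bulkDataTopic(1234, "sensor_cloud")) // prints "/devices/1234/bulkData/sensor_cloud"
}
```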
Bulk Data is sent as an MQTT message payload. A Bulk Data payload to be processed by the Time Series Database must be in MessagePack encoding. The data format depends on the type of time series data you are sending.
When you are sending Simple Time Series data, the message should be in the following format:

Immediately after the "DATA" tag is the schema version string; currently it must be "1.0". It is followed by "numSeriesIds", the number of time series contained in this message, which must be followed by exactly that many series IDs.
After that there should be an arbitrary number of time series sets. Each time series set comprises a timestamp (an unsigned integer that is the Unix Epoch time in nanoseconds) and a list of float64 values. The number of float64 values must match "numSeriesIds" exactly; the values are ordered, and the position of a value corresponds to the series ID at the same position in the schema.
Note: if the data points in two time series don't have matching timestamps, it is best to send the time series in separate "DATA" messages.
Meanwhile, when you are sending Time Series Collection, the data format should be the following:

Immediately after the "COLLECTION" tag is the schema version string. Currently it must be "1.0". It is followed by:
- "collectionId" - a string that uniquely identifies the collection.
- "numValues", which is the number of numerical values in each record. It must be followed by the exact number of value names.
- "numTags", which is the number of tags in each record. It must be followed by the exact number of tag names.
After that there should be an arbitrary number of data records. Each record has a timestamp (a signed integer that is the Unix Epoch time in nanoseconds). It is followed by float64 values. The number of values should match exactly the number of numValues, and correspond to the value names at the same position as in the schema. The values are followed by tags. And just like the values, the tags must correspond strictly to the tag names specified in the schema.
The following example code shows how Bulk Data for TSDB is formatted in Go (using the package github.com/vmihailenco/msgpack for MessagePack encoding).
Example code in Go
import (
	"bytes"
	"errors"
	"time"

	"github.com/vmihailenco/msgpack"
)

type SimpleTimeSeriesDataSet struct {
	Timestamp time.Time
	Values    []float64
}

func createSimpleTimeSeriesBulkData(seriesIDs []string, dss []SimpleTimeSeriesDataSet) ([]byte, error) {
	var buf bytes.Buffer
	enc := msgpack.NewEncoder(&buf)
	// Message header: "DATA" tag, schema version, then the series IDs.
	if err := enc.EncodeString("DATA"); err != nil {
		return nil, err
	}
	if err := enc.EncodeString("1.0"); err != nil {
		return nil, err
	}
	if err := enc.EncodeUint64(uint64(len(seriesIDs))); err != nil {
		return nil, err
	}
	for _, seriesID := range seriesIDs {
		if err := enc.EncodeString(seriesID); err != nil {
			return nil, err
		}
	}
	// Each time series set: timestamp followed by one value per series ID.
	for _, ds := range dss {
		if len(ds.Values) != len(seriesIDs) {
			return nil, errors.New("number of seriesIDs and number of values should be the same")
		}
		if err := enc.EncodeInt64(ds.Timestamp.UnixNano()); err != nil {
			return nil, err
		}
		for _, v := range ds.Values {
			if err := enc.EncodeFloat64(v); err != nil {
				return nil, err
			}
		}
	}
	return buf.Bytes(), nil
}
type TimeSeriesCollectionDataSet struct {
	Timestamp time.Time
	Tags      []string
	Values    []float64
}

func createTimeSeriesCollectionBulkData(collectionID string, valueNames []string, tagNames []string, dss []TimeSeriesCollectionDataSet) ([]byte, error) {
	var buf bytes.Buffer
	enc := msgpack.NewEncoder(&buf)
	// Message header: "COLLECTION" tag, schema version, collection ID,
	// then the value names and tag names.
	if err := enc.EncodeString("COLLECTION"); err != nil {
		return nil, err
	}
	if err := enc.EncodeString("1.0"); err != nil {
		return nil, err
	}
	if err := enc.EncodeString(collectionID); err != nil {
		return nil, err
	}
	if err := enc.EncodeUint64(uint64(len(valueNames))); err != nil {
		return nil, err
	}
	for _, valueName := range valueNames {
		if err := enc.EncodeString(valueName); err != nil {
			return nil, err
		}
	}
	if err := enc.EncodeUint64(uint64(len(tagNames))); err != nil {
		return nil, err
	}
	for _, tagName := range tagNames {
		if err := enc.EncodeString(tagName); err != nil {
			return nil, err
		}
	}
	// Each record: timestamp, the values (in valueNames order), then the
	// tags (in tagNames order).
	for _, ds := range dss {
		if len(ds.Values) != len(valueNames) {
			return nil, errors.New("number of values should match number of value names")
		}
		if len(ds.Tags) != len(tagNames) {
			return nil, errors.New("number of tags should match number of tag names")
		}
		if err := enc.EncodeInt64(ds.Timestamp.UnixNano()); err != nil {
			return nil, err
		}
		for _, v := range ds.Values {
			if err := enc.EncodeFloat64(v); err != nil {
				return nil, err
			}
		}
		for _, t := range ds.Tags {
			if err := enc.EncodeString(t); err != nil {
				return nil, err
			}
		}
	}
	return buf.Bytes(), nil
}
Sending data points as Events
Besides Bulk Data, you can send time series data in the form of Events over MQTT. If you are not familiar with sending Events by MQTT, please see How to use MQTT with MODE first.
You need to configure the TSDB Smart Module properly so that it can process the Events your devices use to send time series data.
You need to configure Subscribed Events in the Console:

You should use that value as Event Type in the Event JSON data that you send:
Event JSON example
{
"eventType": "timeSeriesData",
"eventData": { ... }
}
As with Bulk Data, the format of eventData differs by the type of time series data you are sending.
The following JSON is an example of Simple Time Series data:
Event JSON example
{
"eventType": "timeSeriesData",
"eventData": {
"timeSeriesData": [
{
"timestamp": "2017-02-01T12:00:00.123Z",
"seriesId": "sensor01",
"value": 1
},
{
"timestamp": "2017-02-01T12:01:20.456Z",
"seriesId": "sensor01",
"value": 10
}
]
}
}
The event data must contain the key timeSeriesData with an array of time series data points as its value. Each data point is a JSON object with the following fields:
- timestamp (string): Must be formatted as an RFC3339 date/time. You may specify the timestamp down to the millisecond. Note that if the data point has a timestamp that is not "current", it may not be immediately available for retrieval after injection.
- seriesId (string): Specifies the time series ID of the data to be stored. The string may only contain upper- and lower-case letters, numbers, underscores ("_"), hyphens ("-") and colons (":"). A series ID that contains any other characters may not be stored in TSDB.
- value (number): Specifies the value to be stored. The value must be a valid JSON number (64-bit floating point number).
In the example, both data points have the same seriesId (sensor01), but you can set a different seriesId in each event datum if you want.
On the other hand, the following JSON is an example of Time Series Collection data.
Event JSON example
{
"eventType": "tsdb1",
"eventData": {
"collectionId": "sensor-xyz",
"valueNames": ["loi", "tem", "bri"],
"tagNames": ["pid", "diag"],
"records": [
{
"timestamp": "2021-04-10T01:12:22Z",
"values": [89.8, 77, 9],
"tags": ["5DEF12AA3B", "100"]
},
{
"timestamp": "2021-04-10T01:12:32Z",
"values": [87.2, 77, 8.33],
"tags": ["5DEF12AA3B", "102"]
},
{
"timestamp": "2021-04-10T01:12:43Z",
"values": [87.5, 75, 7.92],
"tags": ["5DEF12AA3B", "102"]
},
{
"timestamp": "2021-04-10T01:12:52Z",
"values": [90.3, 75, 6.01],
"tags": ["5DEF12AA3B", "100"]
}
]
}
}
The eventData object should be in the following format:
- collectionId: string
- valueNames: an array of strings
- tagNames: an array of strings
- records: an array of Data Record objects
Each Data Record object corresponds to a set of time series values to be recorded for a given point in time. It must have the following fields:
- timestamp: time in RFC3339 format
- values: an array of numbers
- tags: an array of strings
The order of the items in valueNames and values is important: the position of a value in the array determines which attribute the value corresponds to. Similarly, the items in tags must correspond to the items in tagNames.
An Important Note Regarding Data Availability
If multiple data points with the same seriesId and timestamp are delivered to a TSDB Smart Module, only the last data point delivered is saved; TSDB guarantees idempotency.
Getting information about a time series
You can get information about the time series themselves. There are two types of endpoints corresponding to the two types of time series, Simple Time Series and Time Series Collection:
Endpoint for Simple Time Series information
Response Body JSON example
{
"id": "room1-temperature",
"homeId": 2233,
"moduleId": "sensor_data",
"timeZone": "Europe/Berlin"
}
Endpoint for Time Series Collection information
Response Body JSON example
{
"id": "room1-metrics",
"homeId": 2233,
"moduleId": "sensor_data",
"timeZone": "Europe/Berlin",
"valueNames": ["temperature", "humidity", "pressure"],
"tagNames": ["vendor_id", "occupant"]
}
A piece of information common to both types of time series is timeZone, which affects how daily aggregation is done. Meanwhile, valueNames and tagNames are present only in Time Series Collections.
Retrieving time series data
After time series data are stored in TSDB, your applications may want to query the stored data to show graphs or otherwise make use of the data. We will explain several ways to retrieve time series data in this section.
Query data between two timestamps
To retrieve Simple Time Series data, you need to query the following endpoint:
Endpoint for querying Simple Time Series data
If you want to see the data in a certain time range, you need to specify the begin and end query parameters.
The following query is an example (613 corresponds to :homeId, tsdb corresponds to :moduleId, and "sensor01" corresponds to :seriesId in this example):
Example query to retrieve Simple Time Series aggregated data in a time range
It may respond with the following results:
Response body JSON example
{
"aggregation": "avg",
"begin": "2021-02-05T03:50:00Z",
"data": [
[
"2021-02-05T03:51:00Z",
0.3873639822039385
],
[
"2021-02-05T03:51:05Z",
0.5580795481827524
],
[
"2021-02-05T03:51:10Z",
0.2614992751815726
]
],
"end": "2021-02-05T03:55:00Z",
"resolution": "5sec",
"seriesId": "sensor01"
}
When you query time series data for a time range specified by the begin and end query parameters, the returned data points will have "statistical" values calculated by some function over a certain period of time. The aggregation parameter and the resolution parameter control this behavior: based on those two parameters, all data points in the time bucket corresponding to the resolution are aggregated into one value. With the aggregation parameter, you specify how the values are calculated. The following aggregation functions are available:
- avg: average of data point values
- count: number of data points
- max: maximum value in the data points
- min: minimum value in the data points
- sum: sum of all data point values
If you don't specify the aggregation parameter, avg is used by default.
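As an illustration of what these functions compute over the raw points in one time bucket, here is a minimal sketch (our own helper, not TSDB code):

```go
package main

import "fmt"

// aggregate applies one of the documented aggregation functions
// (avg, count, max, min, sum) to the raw data points in a time bucket.
func aggregate(fn string, points []float64) float64 {
	if len(points) == 0 {
		return 0
	}
	switch fn {
	case "count":
		return float64(len(points))
	case "sum", "avg":
		s := 0.0
		for _, p := range points {
			s += p
		}
		if fn == "avg" {
			return s / float64(len(points))
		}
		return s
	case "max":
		m := points[0]
		for _, p := range points[1:] {
			if p > m {
				m = p
			}
		}
		return m
	case "min":
		m := points[0]
		for _, p := range points[1:] {
			if p < m {
				m = p
			}
		}
		return m
	}
	return 0
}

func main() {
	bucket := []float64{1, 2, 3, 6} // raw points in one bucket
	fmt.Println(aggregate("avg", bucket), aggregate("max", bucket))
}
```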
On the other hand, the resolution parameter specifies the size of the "time bucket" for data point aggregation. The resolution parameter must be one of the following:
- 5sec: 5 seconds
- 15sec: 15 seconds
- 1min: 1 minute
- 10min: 10 minutes
- 1hour: 1 hour
- 1day: 1 day
- 1week: 1 week
- 1month: 1 month
If you don't specify the resolution parameter, the resolution of the response data points is decided automatically based on how long the time range between begin and end is.
In this example, the aggregation parameter is "avg" and TSDB chooses a 5-second resolution, so each value is the average of all data point values in its 5-second time bucket.
The following table describes how the resolution is determined:
- Timespan: Resolution
- under 5 minutes: n/a
- 5 to 15 minutes: 5 seconds
- 15 minutes to 1 hour: 15 seconds
- 1 to 12 hours: 1 minute
- 12 hours to 5 days: 10 minutes
- 5 to 28 days: 1 hour
- 28 days to 1 year: 1 day
- 1 to 5 years: 1 week
- over 5 years: 1 month
Note that this resolution decision table also represents the "optimal resolution" for the queried time span. If the specified resolution is finer than this optimal resolution, the query request will fail. For example, if you query with a 1-day time span, the optimal resolution is 10 minutes according to the table above, so you cannot specify a more granular resolution; i.e. the 1min, 15sec and 5sec resolutions are invalid for a 1-day time span query.
There is another time series data fetching endpoint to retrieve Time Series Collection data.
Endpoint for querying Time Series Collection data
The following query is an example to retrieve Time Series Collection data (613 corresponds to :homeId, tsdb corresponds to :moduleId, and sensor02 corresponds to :collectionId in this example):
Example query to retrieve Time Series Collection aggregated data in a time range
It may respond with the following data:
Response body JSON example
{
"aggregation": "avg",
"begin": "2021-04-23T11:00:00Z",
"collectionId": "collection1",
"data": [
[
"2021-04-23T11:30:00Z",
55.654474949260376,
40.247968770891774,
79.13341813531896
],
[
"2021-04-23T11:31:00Z",
37.73791800284003,
66.95997407077768,
73.30174656726365
],
[
"2021-04-23T11:32:00Z",
45.113682731510686,
33.572772827058316,
57.457421834391425
],
[
"2021-04-23T11:33:00Z",
48.881242608302045,
46.997722298114724,
44.719339559120826
],
[
"2021-04-23T11:34:00Z",
57.358925668817506,
45.99100515144011,
54.23919972714236
]
],
"end": "2021-04-23T12:00:00Z",
"resolution": "1min"
}
Resolution detection and data aggregation work the same as in a Simple Time Series query: the data resolution is decided automatically by TSDB based on the query time range, and the data are aggregated by the function specified with the aggregation query parameter. For details, refer to the earlier part of this section explaining how to query Simple Time Series data in a certain time range.
What differs from the Simple Time Series endpoint is the selectValues query parameter. By giving the names of values in the collection, all the values associated with those value names will be returned. The values are contained in the data field; each element in the data field contains a timestamp and the selected values. In this example, a, b and c are the value names specified: the first element after the timestamp in a data array is the value corresponding to value name "a", followed by the values associated with value names "b" and "c". Note that the order of the values following the timestamp element in each entry of the data field follows the order of the names specified in the selectValues query parameter.
An Important Note Regarding Data Availability
Data points added to a time series are available to be retrieved via the query API if the data points have timestamps that are close to the "current time" (within the last hour). However, if the data points have timestamps in the past, they may not show up on the query results immediately after injection. For most time resolutions, such data will become available about 10 minutes after injection. If the query results are based on daily, weekly or monthly aggregation, the newly added data will be accounted for 4 hours after injection.
Query raw time series data points
This is an alternative way to fetch time series data. Instead of fetching data points by time ranges, you can fetch raw data points before or after a particular point in time. The endpoints are the same as for query-by-time-range.
The following query is an example to retrieve raw Simple Time Series data:
Example query to retrieve Simple Time Series raw data
You should specify the ts and limit query parameters instead of specifying a time range with the begin and end query parameters. TSDB looks up the data after the timestamp given by the ts query parameter and returns raw data points up to the number given by the limit query parameter. The ts parameter should be an RFC3339 timestamp string, and the limit parameter is an integer value ranging from -500 to 500.
For example, the query above may return the following data:
Response body example
{
"data": [
[
"2021-02-05T03:51:02.092Z",
0.6977948170480295
],
[
"2021-02-05T03:51:03.093Z",
0.15605683810915294
],
[
"2021-02-05T03:51:04.095Z",
0.30824029145463294
],
[
"2021-02-05T03:51:05.099Z",
0.4839696867789344
],
[
"2021-02-05T03:51:06.1Z",
0.13766684730631865
]
],
"limit": 5,
"seriesId": "sensor01",
"ts": "2021-02-05T03:50:00Z"
}
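A raw-data query URL can be assembled with net/url. The host and path used here are placeholders of our own; consult the TSDB API reference for the real base URL and path layout:

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// rawQueryURL assembles a ts/limit raw-data query for a Simple Time
// Series. "api.example.com" and the path template are placeholders,
// not the actual MODE endpoint.
func rawQueryURL(homeID int, moduleID, seriesID, ts string, limit int) string {
	u := url.URL{
		Scheme: "https",
		Host:   "api.example.com", // placeholder host
		Path: fmt.Sprintf("/homes/%d/smartModules/%s/timeSeries/%s/data",
			homeID, moduleID, seriesID),
	}
	q := url.Values{}
	q.Set("ts", ts)                     // RFC3339 timestamp
	q.Set("limit", strconv.Itoa(limit)) // negative limit reads backwards
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(rawQueryURL(613, "tsdb", "sensor01", "2021-02-05T03:50:00Z", 5))
}
```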
Just like Simple Time Series, you can retrieve raw Time Series Collection data points from the Time Series Collection data endpoint by giving the ts and limit query parameters.
Example query to retrieve Time Series Collection raw data
The response will look like the following:
Response body example
{
"collectionId": "collection1",
"data": [
[
"2021-04-23T11:30:00Z",
61.677719391534524,
19.795016603425562,
56.19907438925411,
"000000-0",
"000000-1"
],
[
"2021-04-23T11:30:17Z",
82.84097503375006,
46.01072343282841,
92.87654410647967,
"000001-0",
"000001-1"
],
[
"2021-04-23T11:30:34Z",
26.732717188744186,
83.8159955588517,
77.29658524435376,
"000002-0",
"000002-1"
],
[
"2021-04-23T11:30:51Z",
51.36648818301273,
11.37013948846142,
90.1614688011883,
"000003-0",
"000003-1"
],
[
"2021-04-23T11:31:08Z",
44.40027801979037,
33.895096065202196,
99.10470022134655,
"000004-0",
"000004-1"
]
],
"limit": 5,
"ts": "2021-04-23T11:00:00Z"
}
In addition to the ts and limit query parameters, you should include the valueNames query parameter, and you can optionally specify the tagNames query parameter. In this example, the specified valueNames are "a", "b" and "c" and the tagNames are "x" and "y". The first element in each data entry is the timestamp, the 2nd to 4th elements are values corresponding to valueNames, and the 5th and 6th elements are tag values corresponding to tagNames.
Note that for both Simple Time Series and Time Series Collection raw data queries, if the limit parameter is a positive integer, the returned data points are those with timestamps after the ts parameter, ordered ascending by timestamp (older to newer); if the limit parameter is negative, the data points are those with timestamps before the ts parameter, ordered descending by timestamp (newer to older).
Query boundaries of time series
Sometimes we want to find the time boundaries of a time series, i.e. when the series starts and when it ends. There are two endpoints for that. One is for retrieving the time range of Simple Time Series and the other is for Time Series Collection. They are very similar:
Time range endpoint for Simple Time Series
Response body example
{
"seriesId": "sensor01",
"begin": "2017-03-03T12:00:00Z",
"end": "2018-04-13T06:23:22Z"
}
Time range endpoint for Time Series Collection
Response body example
{
"collectionId": "sensor01",
"begin": "2017-03-03T12:00:00Z",
"end": "2018-04-13T06:23:22Z"
}
Exporting time series data
You may export the raw data of one or more time series as CSV files. TSDB will pack and compress the CSV files into a single ZIP file for download.
To export time series data, follow these steps.
Initiate an export
Make a POST request to the following API endpoint:
The body of this request is a JSON object. The begin and end fields define the time range from which data will be exported and must be strings in RFC3339 format. If you want to export Simple Time Series, specify the seriesIds field; if you want to export Time Series Collections, specify the collectionIds field. Both the seriesIds and collectionIds fields should be arrays of strings. The following JSON data are examples for exporting the two types of time series.
Request body example to export Simple Time Series
{
"begin": "2017-02-01T00:00:00Z",
"end": "2017-02-01T13:00:00Z",
"seriesIds": ["sensor01", "sensor02"]
}
Request body example to export Time Series Collections
{
"begin": "2017-02-01T00:00:00Z",
"end": "2017-02-01T13:00:00Z",
"collectionIds": ["sensor_collection01", "sensor_collection02"]
}
Confirm export status
The return object of the above call will look similar to the following example:
Response body example
{
"dataUrl": "https://s3-us-west-2.amazonaws.com/scdata.tinkermode.com/export/_HOME_ID_/XXXX.zip…",
"statusUrl": "https://s3-us-west-2.amazonaws.com/scdata.tinkermode.com/export/_HOME_ID_/XXXX.json…"
}
dataUrl is the URL where the final exported ZIP file can be accessed. The URL is for one-time use only and expires in one hour. An HTTP call to the data URL may fail right after you receive the response from the export POST call: gathering the data may take a while, so you need to wait until the data is ready before accessing dataUrl. To know when it is ready, poll statusUrl (GET repeatedly).
statusUrl tells you whether the data export has succeeded. While an export is still in progress, it returns a 404 ("Not found") HTTP status. When the data is ready to be downloaded, a GET request to statusUrl returns the following JSON object:
{ "status": "SUCCESS" }
If an error occurs, it returns:
{ "status": "ERROR" }
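A client can interpret each poll of statusUrl along these lines (a sketch; the retry loop, backoff, and the actual HTTP GET are left to the caller):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// exportState interprets one poll of statusUrl: a 404 means the export
// is still running; a 200 carries {"status": "SUCCESS"} or
// {"status": "ERROR"} in the body.
func exportState(httpStatus int, body []byte) (string, error) {
	if httpStatus == 404 {
		return "IN_PROGRESS", nil // keep polling
	}
	var s struct {
		Status string `json:"status"`
	}
	if err := json.Unmarshal(body, &s); err != nil {
		return "", err
	}
	return s.Status, nil // "SUCCESS" or "ERROR"
}

func main() {
	state, _ := exportState(200, []byte(`{"status": "SUCCESS"}`))
	fmt.Println(state) // prints "SUCCESS"
}
```

Once exportState reports "SUCCESS", the app can GET dataUrl to download the ZIP file.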
Download export file
After confirming the export was successful, your app can download the exported data by making a GET request to dataUrl.
Deleting time series
Time series data belonging to a home is deleted when the home is deleted. Similarly, if a TSDB smart module is deleted from a project, or when the entire project is deleted, all associated time series data will be deleted.
You may also choose to manually delete individual time series.
You may send a DELETE request to the following endpoints. No parameters are required.
Endpoint to delete a Simple Time Series
Endpoint to delete a Time Series Collection
Please note that time series deletion takes effect immediately and is irreversible.
Monitoring time series
Once you have started storing data into TSDB, you may want to get notified when certain events happen. For example, if you are tracking the temperature of a room, you may want to get notified when the temperature exceeds a certain threshold. Or, you may want to know when the sensor you are getting data from runs into trouble and is not able to send data at all. For these use cases, you can use TSDB's monitoring functionality.
Create a monitor
You can create a new monitor with the following endpoint:
Endpoint to create a monitor
The following JSON is an example of the request body:
Request body example to create a Monitor
{
"name": "humidity monitor",
"description": "humidity monitor",
"evaluationDelay": 60,
"condition": {
"conditionType": "timeSeriesHeartbeat",
"seriesId": "room1_humidity",
"interval": 120
},
"enabled": true
}
And if the request succeeds, it will reply with a JSON that looks like the following:
Response body example to create a Monitor
{
"id": 100,
"projectId": 5432,
"moduleId": "sensor_data",
"homeId": 1234,
"name": "humidity monitor",
"description": "humidity monitor",
"evaluationDelay": 60,
"condition": {
"conditionType": "timeSeriesHeartbeat",
"seriesId": "room1_humidity",
"interval": 120
},
"enabled": true,
"modificationTime": "2020-01-01T12:01:18.105Z",
"isAlerting": true
}
To see the details of this request body, please reference the API document, but we will explain the "condition" object a little more here. For the condition field, two types of monitoring conditions are currently supported.
One important field is evaluationDelay. The monitors you create periodically check the time series to see if certain conditions are met and incidents need to be triggered. The evaluationDelay field is optional and 0 by default. If the evaluationDelay value is 0, the monitor evaluates data points right up to the moment the evaluation starts. If evaluationDelay is set to 60 (meaning 60 seconds), for example, the monitor only evaluates data points older than 60 seconds at the moment evaluation starts.
This is an important field for IoT device monitoring: IoT devices often operate under poor network conditions, and data is sometimes emitted in batches at intervals of seconds or minutes, so data points may be delivered to the TSDB Smart Module in the MODE Platform with some delay. With the evaluationDelay field, you can adjust how soon the monitor evaluates its conditions against the target time series.
Another important field is condition. There are several types of conditions. "Time Series Heartbeat" condition is one of them. This type of condition is for monitoring the "heartbeat" of Simple Time Series data. As suggested by its name, this condition is to check the presence of data points in a Simple Time Series for a certain period of time. This condition requires the following fields (all fields are required):
- conditionType (string): Must be "timeSeriesHeartbeat".
- seriesId (string): The seriesId of the Simple Time Series to monitor.
- interval (number): Duration of time in seconds in which at least one data point should be present. Otherwise, the monitor triggers an alert incident.
The following JSON is an example of a Time Series Heartbeat condition object that checks whether at least one data point is present every 120 seconds in the "room1_humidity" Simple Time Series.
Time Series Heartbeat condition example JSON
{
"conditionType": "timeSeriesHeartbeat",
"seriesId": "room1_humidity",
"interval": 120
}
"Time Series Threshold" condition is another type of condition. This condition checks whether the value in a Simple Time Series has breached a certain threshold. It consists of the following fields (all fields are required):
- conditionType (string): Must be "timeSeriesThreshold".
- seriesId (string): The seriesId of the Simple Time Series to monitor.
- interval (number): Duration of time in seconds in which to check and count the data points breaching the threshold.
- numberOfBreachingDataPointsToAlert (number): When the number of data points breaching the threshold reaches this number, the monitor triggers an alert incident.
- dataPointsEvaluationPolicy (string): How to count the number of data points breaching the threshold. Either "consecutive" or "total". With "consecutive", an alert incident is triggered when data points breach the threshold n (numberOfBreachingDataPointsToAlert) times in a row. With "total", an alert incident is triggered when data points breach the threshold n times over the interval.
- threshold (number): The threshold value.
- operator (string): How to compare the value of a data point against the threshold. Must be one of: lessThan, lessThanOrEqualTo, equal, notEqual, greaterThanOrEqualTo, greaterThan.
- dataPointsDuration (number): Duration of time in seconds in which at least one data point should exist. If no data points exist in this period, it is treated as "missing data" and handled according to the missingDataPointPolicy field.
- missingDataPointPolicy (string): How to treat missing data points. Possible options: ignore, breaching, notBreaching.
Threshold-based monitoring is more complicated than Heartbeat-based monitoring. Here we will dig a little bit deeper on how threshold monitoring evaluation works.
Threshold monitoring checks if the values in the Simple Time Series specified by the seriesId field are in an unfavorable state. It determines this by using the values of the threshold and operator fields: the monitor compares the actual value with the threshold field value in the way specified by the operator field. For example, if the threshold field is 10.0 and the operator field is "lessThan", values like 9.99, 8.7 or 5.7 satisfy the condition and those data points are evaluated as "breaching", while values like 10.0, 11.4 or 1000.997 do not satisfy the condition and those data points are evaluated as "notBreaching".
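The operator comparison can be sketched as follows (our own helper, for illustration only):

```go
package main

import "fmt"

// breaches compares one data point value against the threshold using
// one of the documented operators; a true result means "breaching".
func breaches(value, threshold float64, operator string) bool {
	switch operator {
	case "lessThan":
		return value < threshold
	case "lessThanOrEqualTo":
		return value <= threshold
	case "equal":
		return value == threshold
	case "notEqual":
		return value != threshold
	case "greaterThanOrEqualTo":
		return value >= threshold
	case "greaterThan":
		return value > threshold
	}
	return false
}

func main() {
	// With threshold 10.0 and operator "lessThan", 9.99 is breaching.
	fmt.Println(breaches(9.99, 10.0, "lessThan")) // prints "true"
}
```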
Given this basis, whether an alert incident is triggered is determined by the combination of the values in the interval field, the numberOfBreachingDataPointsToAlert field and the dataPointsEvaluationPolicy field.
During the evaluation process, the monitor counts the number of data points whose values breach the condition determined by the threshold field value and the operator field value. The monitor will trigger an alert incident only when this number of breaching data points reaches the value of the numberOfBreachingDataPointsToAlert field.
Note that the dataPointsEvaluationPolicy field affects how the breaching data points are counted. If you choose "consecutive", the monitor counts the number of breaching data points in a row. For example, suppose the numberOfBreachingDataPointsToAlert field is 3, the dataPointsEvaluationPolicy field is "consecutive", and there are 5 data points evaluated as "breaching", "breaching", "notBreaching", "breaching" and "notBreaching" in the period of time determined by the interval field. The monitor won't trigger an alert incident, because the maximum run of consecutive breaching data points is 2 (the first and second data points), which is less than the numberOfBreachingDataPointsToAlert value. If the 5 data points are instead "breaching", "breaching", "breaching", "notBreaching" and "notBreaching", the monitor sees it as an alert state.
On the other hand, if the dataPointsEvaluationPolicy field is "total", the monitor simply counts the number of breaching data points over the period of time specified by the interval field. For example, if the numberOfBreachingDataPointsToAlert field is 3, the dataPointsEvaluationPolicy field is "total", and there are 5 data points, "breaching", "breaching", "notBreaching", "breaching" and "notBreaching", in that period, the monitor will trigger an alert incident because there are 3 breaching data points over the period, even though they are not consecutive.
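The two counting policies above can be captured in a few lines. The following is an illustrative sketch (not MODE's code) that reproduces both worked examples, assuming each data point in the interval has already been evaluated as breaching or not:

```python
def should_alert(breaching_flags, n_to_alert, policy):
    """breaching_flags: list of bools for the data points in one interval."""
    if policy == "total":
        # Count every breaching data point in the interval.
        return sum(breaching_flags) >= n_to_alert
    if policy == "consecutive":
        # Track the longest run of back-to-back breaching data points.
        run = longest = 0
        for flag in breaching_flags:
            run = run + 1 if flag else 0
            longest = max(longest, run)
        return longest >= n_to_alert
    raise ValueError(f"unknown policy: {policy}")

points = [True, True, False, True, False]      # the example from the text
print(should_alert(points, 3, "consecutive"))  # False: longest run is 2
print(should_alert(points, 3, "total"))        # True: 3 breaching in total
```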
There are two more important fields that affect how TSDB counts breaching data points: the dataPointsDuration field and the missingDataPointPolicy field. The dataPointsDuration field is the expected duration of time in seconds during which at least one data point should be present. If there are no data points in a time bucket determined by the dataPointsDuration field, the monitor regards that bucket as having missing data points. The value of the missingDataPointPolicy field determines how the monitor treats those missing data points. If the missingDataPointPolicy field is "ignore", the missing data is simply ignored, and therefore it doesn't affect the count of breaching data points at all. Meanwhile, if the missingDataPointPolicy field is either "breaching" or "notBreaching", it does affect the count: missing data is evaluated as "breaching" when the field is "breaching", and as "notBreaching" when the field is "notBreaching".
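One way to picture the missing-data handling is as a preprocessing step that turns each time bucket into a breaching flag before counting. This is a hypothetical sketch, not MODE's implementation; None stands in for a bucket of dataPointsDuration seconds that received no data:

```python
def apply_missing_policy(buckets, policy):
    """Map each bucket to a breaching flag, dropping ignored missing buckets.

    buckets: list of True/False (evaluated data) or None (missing data).
    policy: the missingDataPointPolicy value.
    """
    flags = []
    for breaching in buckets:
        if breaching is None:
            if policy == "ignore":
                continue                         # no effect on the count
            flags.append(policy == "breaching")  # count as breaching or not
        else:
            flags.append(breaching)
    return flags

buckets = [True, None, True]
print(apply_missing_policy(buckets, "ignore"))        # [True, True]
print(apply_missing_policy(buckets, "breaching"))     # [True, True, True]
print(apply_missing_policy(buckets, "notBreaching"))  # [True, False, True]
```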
Get monitors
With the following endpoints, you can retrieve a single monitor or a list of the monitors you've created.
Endpoint to retrieve single monitor
Endpoint to retrieve multiple monitors
The endpoint to retrieve multiple monitors may return a long list. You can traverse the full list of monitors with the skip and limit query parameters, both of which are optional; if they are omitted, skip defaults to 0 and limit defaults to 20.
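A typical way to walk the whole list is to advance skip by limit until a short page comes back. The following is an illustrative Python sketch; fetch_page is a hypothetical stand-in for an HTTP call to the multiple-monitors endpoint, not a real MODE client:

```python
def fetch_all_monitors(fetch_page, limit=20):
    """Collect every monitor by paging with skip/limit."""
    monitors, skip = [], 0
    while True:
        page = fetch_page(skip=skip, limit=limit)
        monitors.extend(page)
        if len(page) < limit:  # a short page means we reached the end
            break
        skip += limit
    return monitors

# Example with a fake backend holding 45 monitors:
data = [{"monitorId": i} for i in range(45)]
fake_fetch = lambda skip, limit: data[skip:skip + limit]
print(len(fetch_all_monitors(fake_fetch)))  # 45
```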
Update existing monitors
You can update existing monitors with the following endpoint:
Endpoint to update existing monitors
The request body format is the same as the one used for creation. The difference is that you don't have to specify all the fields; you can specify only the fields you want to update. So, if you just want to update a monitor's condition, the request body looks like the following JSON:
Request body example to update a Monitor
{
"condition": {
"conditionType": "timeSeriesHeartbeat",
"seriesId": "room1_humidity",
"interval": 240
}
}
Please note that within the condition object you cannot omit any of the fields required by its condition type, and you cannot change the type of condition once you have created a monitor. So if a monitor's condition is a "Time Series Heartbeat" condition object, you cannot change it to a "Time Series Threshold" condition object, for example.
Get incidents
When the monitors find something wrong in their target time series, they will add new incidents to their incident history. You can retrieve those incidents associated with a certain monitor with the following endpoint:
Endpoint to retrieve existing incidents
With this endpoint, you can retrieve the incidents that belong to your Home.
When you query, you must specify the monitorId query parameter.
The following query is an example to fetch the incidents:
The response to this query looks like below:
Response body example
[
{
"projectId": 1234,
"moduleId": "tsdb",
"homeId": 100,
"incidentType": "recovery",
"description": "RECOVERY: humidity sensor connection is back to normal",
"subject": "timeSeries:humidity",
"monitorId": 1347,
"monitorSnapshot": {
"name": "humidity sensor heartbeat",
"description": "",
"evaluationDelay": 60,
"condition": {
"conditionType": "timeSeriesHeartbeat",
"seriesId": "humidity",
"interval": 60
},
"incidentCustomDescriptions": {
"alert": "ALERT: check if the humidity sensor is connected",
"recovery": "RECOVERY: humidity sensor connection is back to normal"
}
},
"evaluationTimeRange": {
"begin": "2021-03-01T18:08:09Z",
"end": "2021-03-01T18:09:09Z"
},
"creationTime": "2021-03-01T18:10:09Z"
},
{
"projectId": 1234,
"moduleId": "tsdb",
"homeId": 100,
"incidentType": "alert",
"description": "ALERT: check if the humidity sensor is connected",
"subject": "timeSeries:humidity",
"monitorId": 1347,
"monitorSnapshot": {
"name": "humidity sensor heartbeat",
"description": "",
"evaluationDelay": 60,
"condition": {
"conditionType": "timeSeriesHeartbeat",
"seriesId": "humidity",
"interval": 60
},
"incidentCustomDescriptions": {
"alert": "ALERT: check if the humidity sensor is connected",
"recovery": "RECOVERY: humidity sensor connection is back to normal"
}
},
"evaluationTimeRange": {
"begin": "2021-03-01T17:00:08Z",
"end": "2021-03-01T17:01:08Z"
},
"creationTime": "2021-03-01T17:02:08Z"
}
]
The JSON data returned is an array of incident objects.
There are two types of incidents. One is the "alert" incident, while the other is the "recovery" incident. An alert incident is created when the values in a time series breach its monitoring condition. On the other hand, the recovery incident will be triggered when a time series is back to normal regarding its monitoring condition. If you want to see the details of the incident data, please refer to the explanation of "The incident object" in the API document.
Receive incident events
As we see above, you can retrieve the incidents that have happened in your Home. You can also receive the incidents via MODE Events. If you are not familiar with MODE Events, please have a look at How to handle device events and commands and Webhooks Smart Module.
As there are two types of incidents, alert and recovery, there are two types of MODE Events, corresponding to alert incidents and recovery incidents respectively. If the incident is "alert", the eventType is "mod-tsdb-monitorAlertIncidentInvoked". If the incident is "recovery", the eventType is "mod-tsdb-monitorRecoveryIncidentInvoked". Both eventTypes are MODE-predefined events.
A monitoring incident event looks like the following examples:
Alert Incident Event example
{
"eventType":"mod-tsdb-monitorAlertIncidentInvoked",
"eventData":{
"incident":{
"projectId": 1234,
"moduleId": "tsdb",
"homeId": 100,
"incidentType": "alert",
"description": "ALERT: check if the humidity sensor is connected",
"subject": "timeSeries:humidity",
"monitorId": 1347,
"monitorSnapshot": {
"name": "humidity sensor heartbeat",
"description": "",
"evaluationDelay": 60,
"condition": {
"conditionType": "timeSeriesHeartbeat",
"seriesId": "humidity",
"interval": 60
},
"incidentCustomDescriptions": {
"alert": "ALERT: check if the humidity sensor is connected",
"recovery": "RECOVERY: humidity sensor connection is back to normal"
}
},
"evaluationTimeRange": {
"begin": "2021-03-01T17:00:08Z",
"end": "2021-03-01T17:01:08Z"
},
"creationTime": "2021-03-01T17:02:08Z"
}
},
"homeId":100,
"timestamp":"2021-03-01T17:02:09Z"
}
Recovery Incident Event example
{
"eventType":"mod-tsdb-monitorRecoveryIncidentInvoked",
"eventData":{
"incident":{
"projectId": 1234,
"moduleId": "tsdb",
"homeId": 100,
"incidentType": "recovery",
"description": "RECOVERY: humidity sensor connection is back to normal",
"subject": "timeSeries:humidity",
"monitorId": 1347,
"monitorSnapshot": {
"name": "humidity sensor heartbeat",
"description": "",
"evaluationDelay": 60,
"condition": {
"conditionType": "timeSeriesHeartbeat",
"seriesId": "humidity",
"interval": 60
},
"incidentCustomDescriptions": {
"alert": "ALERT: check if the humidity sensor is connected",
"recovery": "RECOVERY: humidity sensor connection is back to normal"
}
},
"evaluationTimeRange": {
"begin": "2021-03-01T18:08:09Z",
"end": "2021-03-01T18:09:09Z"
},
"creationTime": "2021-03-01T18:10:09Z"
}
},
"homeId":100,
"timestamp":"2021-03-01T18:10:10Z"
}
To see the details of the incident object, please refer to the explanation of "The incident object" in the API document.
Delete a monitor
You can delete a monitor with the following endpoint:
Endpoint to delete a monitor
Note that this operation removes the specified monitor immediately and irrevocably, and the incidents associated with the deleted monitor are also removed. Please be sure of your intention before deleting a monitor.
Troubleshooting TSDB
A helpful tool for managing TSDB is the system logs. You can find the logs in the Developer Console by selecting the TSDB Smart Module.
This tool can be helpful when troubleshooting issues with data injection. In the example below, you can see the error message "Time Series DB [tsdb] did not store data of event [timeSeriesData] (no time series data found in event data)", which indicates an incorrectly formatted event data field.
