Tines: on-premises deployments — monitor your stories' sizes in the database and on disk (events + payloads) by querying the database itself and processing the results with Tines
The problem
As much as I like Tines (deployed on-premises), its self-hosted metrics/statistics could use some improvement: there is no 1-to-1 link or view between stories and their sizes in the database (events + payloads), so you are flying blind. In some cases you discover your Postgres database has grown from a manageable 50 GB to a whopping 400 GB. So it was time to implement some monitoring on the storage usage of Tines stories.
Solution
- Have a cronjob fire up a bash script that scans a directory for SQL queries to execute (see the crontab sketch after this list).
- Have a SQL script collect the container IDs and their associated event and payload sizes.
- Have the script beam this data back to a Tines webhook, keyed under the script's filename.
- Process the JSON data with Tines and build a nice Markdown table.
- Up to you to determine what happens next.
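A minimal sketch of the cronjob, assuming the script and its queries directory live in /opt/tines-monitor (the path and the schedule are placeholders, not from the original setup):

```bash
# crontab -e on the Docker host: run the monitor daily at 06:00 from the
# script's own directory, so the relative "queries" path resolves correctly
0 6 * * * cd /opt/tines-monitor && ./monitor.sh >> cron.log 2>&1
```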
Example output
(The rendered Markdown table is reproduced below.)
| Name | Total Size (GB) | Total Size (MB) | Events Size (GB) | Events Size (MB) | Output Payload Size (GB) | Output Payload Size (MB) | Keep Events For (Days) |
|---|---|---|---|---|---|---|---|
| Total | 180 | 184320 | 68 | 69632 | 112 | 114688 | N/A |
| Event Enrichment | 83 | 84992 | 64 | 65536 | 18 | 18432 | 14 |
| Data Analysis | 11 | 11264 | 1 | 1024 | 9 | 9216 | 21 |
| Threat Analysis | 3 | 3072 | 1 | 1024 | 2 | 2048 | 7 |
| Log Search | 4 | 4096 | 1 | 1024 | 3 | 3072 | 31 |
| Alert Handling | 3 | 3072 | 1 | 1024 | 3 | 3072 | 14 |
| Campaign Import | 1 | 1024 | 0 | 0 | 1 | 1024 | 21 |
| Email Notifications | 1 | 1024 | 0 | 0 | 0 | 0 | 31 |
| System Wait | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
| Security Monitoring | 0 | 0 | 0 | 0 | 0 | 0 | 14 |
The implementation
The shell script
NOTE: Script was “made” by OpenAI o1-preview
NOTE: This targets a Tines DB running in Docker
You will want to tweak the variables:
```bash
# Variables
DOCKER_CONTAINER_NAME="tines_db"
WEBHOOK_URL="https://tines.internal/webhook/abc123xyz4567890/abcdef1234567890abcdef1234567890"
OUTPUT_FILE_PREFIX="tines_table_monitor"
LOG_FILE="tines_monitor.log"
QUERIES_DIR="queries"
```
```bash
#!/bin/bash
set -e
# Uncomment this line to enable script tracing for debugging
# set -x

# Variables
DOCKER_CONTAINER_NAME="tines_db"
WEBHOOK_URL="https://tines.internal/webhook/abc123xyz4567890/abcdef1234567890abcdef1234567890"
OUTPUT_FILE_PREFIX="tines_table_monitor"
LOG_FILE="tines_monitor.log"
QUERIES_DIR="queries"

# Function to log messages with timestamp
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}

# Start script execution
log "Starting Tines monitor script."

# Retrieve the Docker container ID
log "Retrieving Docker container ID for container named '$DOCKER_CONTAINER_NAME'."
CONTAINER_ID=$(docker ps -q --filter "name=$DOCKER_CONTAINER_NAME")

if [[ -z "$CONTAINER_ID" ]]; then
    log "Error: No running Docker container found with name '$DOCKER_CONTAINER_NAME'."
    exit 1
else
    log "Found Docker container ID: $CONTAINER_ID."
fi

# Function to execute SQL query inside Docker and return JSON output
execute_sql_query() {
    set +e  # Disable exit on error within this function
    local query="$1"
    local output
    local exit_code

    output=$(docker exec "$CONTAINER_ID" \
        psql -t -A -U tines -d tines_production -c "$query" 2>&1)
    exit_code=$?

    if [[ $exit_code -ne 0 ]]; then
        log "Error: Failed to execute SQL query."
        log "Exit Code: $exit_code"
        log "Output: $output"
        exit 1
    fi
    set -e  # Re-enable exit on error

    # Remove any psql notices or empty lines
    output=$(echo "$output" | sed '/^$/d')

    # Validate JSON or handle NULL output
    if [[ -z "$output" ]] || [[ "$output" == "null" ]]; then
        log "Notice: Query returned NULL or empty result."
        output="[]"  # Set output to an empty JSON array
    else
        if ! echo "$output" | jq empty >/dev/null 2>&1; then
            log "Error: Invalid JSON output."
            log "Output: $output"
            exit 1
        fi
    fi

    echo "$output"
}

# Function to send output to webhook
send_to_webhook() {
    local data="$1"
    local key="$2"
    local payload="{\"$key\": $data}"

    log "Sending data to webhook with key '$key'."
    CURL_RESPONSE=$(curl -s -w "%{http_code}" -X POST -H "Content-Type: application/json" --data "$payload" "$WEBHOOK_URL" -o /dev/null)

    if [[ "$CURL_RESPONSE" -ne 201 ]]; then
        log "Error: Failed to send data to webhook. HTTP status code: $CURL_RESPONSE."
        exit 1
    else
        log "Data sent to webhook successfully with key '$key'. HTTP status code: $CURL_RESPONSE."
    fi
}

# Check if queries directory exists
if [[ ! -d "$QUERIES_DIR" ]]; then
    log "Error: Queries directory '$QUERIES_DIR' does not exist."
    exit 1
fi

# Process each .sql file in the queries directory
for sql_file in "$QUERIES_DIR"/*.sql; do
    # Check if any .sql files exist
    if [[ ! -e "$sql_file" ]]; then
        log "Notice: No .sql files found in '$QUERIES_DIR'."
        break
    fi

    # Get the base filename without extension to use as key
    filename=$(basename "$sql_file")
    key="${filename%.*}"  # Remove extension

    log "Processing SQL file '$sql_file' with key '$key'."

    # Read the SQL query from the file
    query=$(cat "$sql_file")

    # Execute the SQL query
    log "Executing SQL query from file '$sql_file'."
    output=$(execute_sql_query "$query")
    log "SQL query executed successfully."

    # Save output to file
    output_file="${OUTPUT_FILE_PREFIX}_${key}.json"
    log "Saving query output to file '$output_file'."
    echo "$output" > "$output_file"
    if [[ $? -ne 0 ]]; then
        log "Error: Failed to write output to '$output_file'."
        exit 1
    else
        log "Output saved to '$output_file'."
    fi

    # Send output to webhook
    send_to_webhook "$output" "$key"
done

log "Tines monitor script completed successfully."
```
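To run it by hand, drop the script next to a queries directory. Only monitor_table_size.sql is named in this post, so the other filenames below are placeholders:

```bash
# Expected layout (filenames other than monitor_table_size.sql are hypothetical):
#   monitor.sh
#   queries/
#     monitor_table_size.sql
#     oldest_events.sql
#     oldest_payloads.sql
chmod +x monitor.sh
./monitor.sh
tail -n 20 tines_monitor.log   # every step is logged here
```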
The SQL scripts
Track event size and payload sizes per story
```sql
SELECT json_agg(row_to_json(t))
FROM (
    SELECT
        s.id AS storyID,
        s.story_container_id AS story_container_ID,
        -- Calculate the size of the events table data
        SUM(COALESCE(pg_column_size(e), 0)) AS eventsBaseSize,
        -- Sum sizes of all large columns in events
        SUM(COALESCE(octet_length(e.payload::text), 0)) +
        SUM(COALESCE(octet_length(e.generated_payload::text), 0)) +
        SUM(COALESCE(octet_length(e.previous_events_ids::text), 0)) +
        SUM(COALESCE(octet_length(e.new_previous_events_ids::text), 0)) +
        SUM(COALESCE(octet_length(e.received_payload_ids::text), 0)) AS eventsLargeColumnsSize,
        -- Total events size
        SUM(COALESCE(pg_column_size(e), 0)) +
        SUM(COALESCE(octet_length(e.payload::text), 0)) +
        SUM(COALESCE(octet_length(e.generated_payload::text), 0)) +
        SUM(COALESCE(octet_length(e.previous_events_ids::text), 0)) +
        SUM(COALESCE(octet_length(e.new_previous_events_ids::text), 0)) +
        SUM(COALESCE(octet_length(e.received_payload_ids::text), 0)) AS eventsSize,
        -- Calculate the size of the output_payloads table data
        SUM(COALESCE(pg_column_size(o), 0)) AS outputPayloadsBaseSize,
        -- Sum sizes of all large columns in output_payloads
        SUM(COALESCE(octet_length(o.output::text), 0)) AS outputPayloadsLargeColumnsSize,
        -- Total output_payloads size
        SUM(COALESCE(pg_column_size(o), 0)) +
        SUM(COALESCE(octet_length(o.output::text), 0)) AS outputPayloadSize,
        -- Total size
        (
            SUM(COALESCE(pg_column_size(e), 0)) +
            SUM(COALESCE(octet_length(e.payload::text), 0)) +
            SUM(COALESCE(octet_length(e.generated_payload::text), 0)) +
            SUM(COALESCE(octet_length(e.previous_events_ids::text), 0)) +
            SUM(COALESCE(octet_length(e.new_previous_events_ids::text), 0)) +
            SUM(COALESCE(octet_length(e.received_payload_ids::text), 0)) +
            SUM(COALESCE(pg_column_size(o), 0)) +
            SUM(COALESCE(octet_length(o.output::text), 0))
        ) AS totalSize
    FROM stories s
    JOIN agents a ON a.story_id = s.id
    JOIN events e ON e.agent_id = a.id
    LEFT JOIN output_payloads o ON o.id = e.output_payload_id
    GROUP BY s.id, s.story_container_id
) t;
```
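Before wiring this into the cron job you can run it by hand the same way the script does; jq here just pretty-prints the first row of the result:

```bash
# Execute the story-size query manually, mirroring what execute_sql_query does
CONTAINER_ID=$(docker ps -q --filter "name=tines_db")
docker exec "$CONTAINER_ID" \
  psql -t -A -U tines -d tines_production \
  -c "$(cat queries/monitor_table_size.sql)" | jq '.[0]'
```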
Keep an eye on old events
```sql
SELECT json_agg(row_to_json(t))
FROM (
    SELECT id, created_at
    FROM events
    ORDER BY created_at ASC
    LIMIT 200
) t;
```
Keep an eye on old payloads
```sql
SELECT json_agg(row_to_json(t))
FROM (
    SELECT id, created_at
    FROM output_payloads
    ORDER BY created_at ASC
    LIMIT 200
) t;
```
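If you only want the single oldest surviving row rather than the 200 oldest, a quick one-liner (same connection details as the monitor script) tells you whether retention is keeping up:

```bash
# Age of the oldest event still in the database
CONTAINER_ID=$(docker ps -q --filter "name=tines_db")
docker exec "$CONTAINER_ID" psql -t -A -U tines -d tines_production \
  -c "SELECT min(created_at) FROM events;"
```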
Notes
Depending on your deployment, you may have to increase /dev/shm in the Postgres container to be able to deal with the larger queries.
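How you raise it depends on how the container is started: with plain docker run it is the --shm-size flag, and in docker-compose the equivalent key is shm_size. A sketch, where the image tag and the 1g value are purely illustrative:

```bash
# /dev/shm in a container defaults to 64 MB; Postgres parallel workers can
# exhaust that on large aggregations like the story-size query above
docker run -d --name tines_db --shm-size=1g postgres:15
```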
The Tines processing
So we send the JSON output of the script to Tines.
As each SQL script requires different processing, we check which .sql file sent us the data; in our case that is monitor_table_size.sql, as it is our primary focus.
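The script posts each result keyed under the query's filename, which is what the trigger action matches on. A hypothetical smoke test of that contract (the field names follow the SQL aliases, lowercased by Postgres because they are unquoted; the numbers are made up):

```bash
# Post a hand-crafted payload to the webhook; the trigger should fire because
# the top-level key is "monitor_table_size"
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"monitor_table_size":[{"storyid":1,"story_container_id":42,"eventssize":68719476736,"outputpayloadsize":120259084288,"totalsize":188978561024}]}' \
  "https://tines.internal/webhook/abc123xyz4567890/abcdef1234567890abcdef1234567890"
```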
The Tines flow
{"standardLibVersion":"59","actionRuntimeVersion":"13","agents":[{"disabled":false,"name":"monitor.sh","description":null,"options":"{\"path\":\"7dd3290ffab74fdce0217c9523c8f6ca\",\"secret\":\"d5ffb328d2e25f763ac453eb981282ea\",\"verbs\":\"get,post\"}","position":{"x":495,"y":-15},"type":"webhook","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":true,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"self-hosted:258783525fbbd9621a6129e1531a01b9:ce872e3c829ff9d228d017c76b7317c1"},{"disabled":false,"name":"payload","description":null,"options":"{\"mode\":\"message_only\",\"loop\":false,\"payload\":\"=VALUES(input.payload) |> FIRST(%)\"}","position":{"x":495,"y":240},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":true,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"[Tines] Query story","description":null,"options":"{\"url\":\"<<RESOURCE.tines_api.url>>/api/v1/stories/<<explode.entry.story_container_id>>\",\"content_type\":\"application_json\",\"method\":\"get\",\"payload\":\"\",\"headers\":{\"content-type\":\"application/json\",\"x-user-email\":\"\",\"x-user-token\":\"<<CREDENTIAL.tines_api_key_colin>>\"}}","position":{"x":495,"y":465},"type":"httpRequest","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":true,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"explode","description":null,"options":"{\"mode\":\"explode\",\"path\":\"=payload\",\"to\":\"entry\",\"limit\":\"500\"}","position":{"x":495,"y":345},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"monitor_table_size.sql","description":null,"options":"{\"rules\":[{\"type\":\"field==value\",\"value\":\"monitor_table_size\",\"path\":\"<<KEYS(input.payload) |> FIRST(%)>>\"}]}","position":{"x":495,"y":120},"type":"trigger","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"prep4implode","description":null,"options":"{\"mode\":\"message_only\",\"loop\":false,\"payload\":\"=explode.entry\\n|> MERGE(%, tines_query_story.body)\\n|> \\n\\n SET_KEY(%, \\\"eventssize_mb\\\", 
DIVIDED_BY(explode.entry.eventssize,1000000.0))\\n |>\\n SET_KEY(%, \\\"eventssize_gb\\\", DIVIDED_BY(explode.entry.eventssize,1000000000.0))\\n \\n|>\\n\\n SET_KEY(%, \\\"outputpayloadsize_mb\\\", DIVIDED_BY(explode.entry.outputpayloadsize,1000000.0))\\n |>\\n SET_KEY(%, \\\"outputpayloadsize_gb\\\", DIVIDED_BY(explode.entry.outputpayloadsize,1000000000.0))\\n\\n|>\\n\\n SET_KEY(%, \\\"totalsize_mb\\\", DIVIDED_BY(explode.entry.totalsize,1000000.0))\\n |>\\n SET_KEY(%, \\\"totalsize_gb\\\", DIVIDED_BY(explode.entry.totalsize,1000000000.0))\\n \"}","position":{"x":495,"y":555},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"implode","description":null,"options":"{\"mode\":\"implode\",\"item_path\":\"=prep4implode\",\"guid_path\":\"=explode.guid\",\"size_path\":\"=explode.size\"}","position":{"x":495,"y":660},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"sort","description":null,"options":"{\"mode\":\"message_only\",\"loop\":false,\"payload\":\"=SORT(\\n \\n implode,\\n \\n \\\"totalsize\\\")\\n \\n \\n|> REVERSE(%)\"}","position":{"x":720,"y":-15},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"sum","description":null,"options":"{\"mode\":\"message_only\",\"loop\":false,\"payload\":{\"k\":\"=sort\\n|> MAP(%,\\\"totalsize\\\")\\n|> SUM(%)\",\"mb\":\"=sort\\n|> MAP(%,\\\"totalsize\\\")\\n|> DIVIDED_BY(SUM(%),1000000.0)\",\"gb\":\"=sort\\n|> MAP(%,\\\"totalsize\\\")\\n|> DIVIDED_BY(SUM(%),1000000000.0)\"}}","position":{"x":720,"y":90},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"markdown","description":null,"options":"{\"mode\":\"message_only\",\"loop\":false,\"payload\":\"| Name | Total Size (GB) | Total Size (MB) | Events Size (GB) | Events Size (MB) | Output Payload Size (GB) | Output Payload Size (MB) | Keep Events For (Days) |\\n|-----------------|------------|---------|------------|------------------|------------------|-------------------|----------------|\\n| **Total** | <<ROUND(sum_check.sum_totalsize_gb)>> | <<ROUND(sum_check.sum_totalsize_mb)>> | 
<<ROUND(sum_check.sum_eventssize_gb)>> | <<ROUND(sum_check.sum_eventssize_mb)>> | <<ROUND(sum_check.sum_outputpayloadsize_gb)>> | <<ROUND(sum_check.sum_outputpayloadsize_mb)>> | N/A |\\n<%for x in DEFAULT(sort_remap,implode)%>| [<<RESOURCE.handy_lambdas.substring_dotdotdot(x.name,30)>>](https://tines.internal/stories/<<x.story_container_id>>) | <<ROUND(x.sum_totalsize_gb)>> | <<ROUND(x.sum_totalsize_mb)>> | <<ROUND(x.sum_eventssize_gb)>> | <<ROUND(x.sum_eventssize_mb)>> | <<ROUND(x.sum_outputpayloadsize_gb)>> | <<ROUND(x.sum_outputpayloadsize_mb)>> | <<x.keep_events_for / 86400>> |\\n<%endfor%>\\n\"}","position":{"x":720,"y":660},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"story_container_id_unique","description":null,"options":"{\"mode\":\"message_only\",\"loop\":false,\"payload\":\"=MAP(implode,\\\"story_container_id\\\") |> UNIQ(%)\"}","position":{"x":720,"y":180},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"remap","description":null,"options":"{\"mode\":\"message_only\",\"loop\":\"=story_container_id_unique\",\"payload\":\"=implode\\n|> WHERE(%,\\\"story_container_id\\\",LOOP.value) |> FIRST(%)\\n\\n|> SET_KEY(%,\\\"sum_count\\\", SIZE((WHERE(implode,\\\"story_container_id\\\",LOOP.value))))\\n\\n|> SET_KEY(%,\\\"sum_eventssize\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"eventssize\\\")))\\n\\n|> SET_KEY(%,\\\"sum_outputpayloadsize\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"outputpayloadsize\\\")))\\n\\n|> SET_KEY(%,\\\"sum_totalsize_mb\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"totalsize_mb\\\")) )\\n\\n|> SET_KEY(%,\\\"sum_eventssize_mb\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"eventssize_mb\\\")))\\n\\n|> SET_KEY(%,\\\"sum_outputpayloadsize_mb\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"outputpayloadsize_mb\\\")))\\n\\n|> SET_KEY(%,\\\"sum_totalsize_mb\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"totalsize_mb\\\")) )\\n\\n|> SET_KEY(%,\\\"sum_eventssize_gb\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"eventssize_gb\\\")))\\n\\n|> SET_KEY(%,\\\"sum_outputpayloadsize_gb\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"outputpayloadsize_gb\\\")))\\n\\n|> SET_KEY(%,\\\"sum_totalsize_gb\\\", SUM(MAP(WHERE(implode,\\\"story_container_id\\\",LOOP.value),\\\"totalsize_gb\\\")) 
)\"}","position":{"x":720,"y":255},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"sort remap","description":null,"options":"{\"mode\":\"message_only\",\"loop\":false,\"payload\":\"=SORT(remap,\\\"sum_eventssize_gb\\\") |> REVERSE(%)\"}","position":{"x":720,"y":375},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"},{"disabled":false,"name":"sum check","description":null,"options":"{\"mode\":\"message_only\",\"loop\":false,\"payload\":{\"sum_count\":\"=MAP(remap,\\\"sum_count\\\") |> SUM(%)\",\"sum_eventssize\":\"=MAP(remap,\\\"sum_eventssize\\\") |> SUM(%)\",\"sum_outputpayloadsize\":\"=MAP(remap,\\\"sum_outputpayloadsize\\\") |> SUM(%)\",\"sum_totalsize_mb\":\"=MAP(remap,\\\"sum_totalsize_mb\\\") |> SUM(%)\",\"sum_eventssize_mb\":\"=MAP(remap,\\\"sum_eventssize_mb\\\") |> SUM(%)\",\"sum_outputpayloadsize_mb\":\"=MAP(remap,\\\"sum_outputpayloadsize_mb\\\") |> SUM(%)\",\"sum_eventssize_gb\":\"=MAP(remap,\\\"sum_eventssize_gb\\\") |> SUM(%)\",\"sum_outputpayloadsize_gb\":\"=MAP(remap,\\\"sum_outputpayloadsize_gb\\\") |> SUM(%)\",\"sum_totalsize_gb\":\"=MAP(remap,\\\"sum_totalsize_gb\\\") |> SUM(%)\"}}","position":{"x":720,"y":510},"type":"eventTransformation","timeSavedUnit":"minutes","timeSavedValue":0,"monitorAllEvents":false,"monitorFailures":false,"monitorNoEventsEmitted":null,"caseConfiguration":{"subStatus":null},"recordType":null,"recordWriters":[],"form":null,"createdFromTemplateGuid":null,"createdFromTemplateVersion":null,"originStoryIdentifier":"cloud:8a398de9ced29eb779cc34d883c19760:a339e89b8aa7a74c7ab8fb476ecd5456"}],"links":[{"sourceIdentifier":"1","receiverIdentifier":"3"},{"sourceIdentifier":"0","receiverIdentifier":"4"},{"sourceIdentifier":"4","receiverIdentifier":"1"},{"sourceIdentifier":"5","receiverIdentifier":"6"},{"sourceIdentifier":"6","receiverIdentifier":"7"},{"sourceIdentifier":"7","receiverIdentifier":"8"},{"sourceIdentifier":"8","receiverIdentifier":"10"},{"sourceIdentifier":"10","receiverIdentifier":"11"},{"sourceIdentifier":"11","receiverIdentifier":"12"},{"sourceIdentifier":"12","receiverIdentifier":"13"},{"sourceIdentifier":"13","receiverIdentifier":"9"},{"sourceIdentifier":"2","receiverIdentifier":"5"},{"sourceIdentifier":"3","receiverIdentifier":"2"}],"diagramNotes":[]}
As we are dealing with container IDs, we use Tines' internal API to look up which ID belongs to which story.
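Per row, this is what the "[Tines] Query story" action does; done by hand it looks roughly like the following, where 1234 stands in for a real story_container_id and the token needs read access to stories:

```bash
# Resolve a container ID to story metadata (name, and retention in seconds)
curl -s "https://tines.internal/api/v1/stories/1234" \
  -H "Content-Type: application/json" \
  -H "x-user-token: $TINES_API_KEY" | jq '{name, keep_events_for}'
```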
The other thing we have to be mindful of is that groups (?) appear as separate rows under the same container ID, so we have to sum up the usage of the groups per container ID.
First, we calculate the sizes per entry, converting the raw byte counts into MB and GB (the prep4implode action).
We then implode all our calculated values back into a single event, one item per row.
Next, we grab all the container IDs that exist in the data.
While the approach may not be elegant, it's effective; the key is in the iteration.
We identify the unique container IDs and then process them as follows (a jq sketch of the same grouping follows below):
- For each unique container ID, we sum up all the groups associated with it.
- Each container ID represents a distinct story.
- The groups within a container are combined to form the complete story.
This results in the following structure:
- Story 1: Combines data from Group 1, Group 2, and Group 3
- Story 2: Consists of a single group
- Story 3: Merges data from Group 1 and Group 2
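Outside of Tines, the same grouping can be expressed in a few lines of jq against the JSON file the script saved; this is only an illustration of what the remap action computes, not part of the pipeline:

```bash
# Group rows by story_container_id and sum the sizes per story
jq 'group_by(.story_container_id)
    | map({story_container_id: .[0].story_container_id,
           sum_eventssize:        (map(.eventssize) | add),
           sum_outputpayloadsize: (map(.outputpayloadsize) | add),
           sum_totalsize:         (map(.totalsize) | add)})' \
  tines_table_monitor_monitor_table_size.json
```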
We then sum up all the data once more for the "Total" row at the top of the table.
NOTE: do check if these numbers are aligned with your database sizes!
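One way to check is to ask Postgres directly what the two big tables and the database as a whole occupy. The numbers won't match exactly (pg_total_relation_size includes indexes and TOAST, and compression and bloat skew things), but they should be in the same ballpark:

```bash
# Compare the summed story sizes against Postgres' own accounting
CONTAINER_ID=$(docker ps -q --filter "name=tines_db")
docker exec "$CONTAINER_ID" psql -U tines -d tines_production -c "
  SELECT pg_size_pretty(pg_total_relation_size('events'))          AS events,
         pg_size_pretty(pg_total_relation_size('output_payloads')) AS output_payloads,
         pg_size_pretty(pg_database_size('tines_production'))      AS database;"
```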
Then we just loop over all our summarized data and render it into the Markdown table.
And tadaa 🎉 you now have “monitoring” of how much database space each story is using in Tines.