OpenTelemetry parameter that might ruin your flexibility

Marcin Sodkiewicz
Nov 22, 2023


Metrics temporality, and the potential cost of not centralising it.

TL;DR: The temporality of OTEL metrics may ruin your flexibility, as it’s set at the application level and can’t be easily centralised in collectors. You should prepare your platforms to inject an environment variable for it. I know you probably don’t need it today, but that day may come.

What is the issue, and why does it affect your flexibility?

I thought my OpenTelemetry setup was super flexible and that I could run a PoC with any platform very easily by simply changing the collector configuration. And there was nothing wrong with my confidence in this area: we had done it several times on different occasions while playing with different exporters.

I was wrong. I was missing one piece, metrics temporality, and that is why you are reading this article.

If you don’t know what temporality is, you can read about it in the theoretical section below.

So the first question is:

Different types of exporters use different metric temporalities. What if you would like to migrate to another type of exporter?

The problem to solve is not trivial, as temporality is set in the application that collects the metrics. This means that changing the exporter type could affect ALL of your applications. In a large organisation, that could mean updating application configuration across different runtimes and workload types such as serverless, containerised workloads, Kubernetes, etc. Such a migration can be a huge effort.

Let’s consider temporality type migrations:

Cumulative → Delta

It’s not such bad news if you would like to switch completely from cumulative to delta metrics temporality, as you can use the cumulativetodeltaprocessor.

You just have to be careful when using it: since it is stateful, a given metric series has to be processed on the same collector instance. There is a solution to this. You can use a loadbalancing exporter layer, which finally supports both traces and metrics.

I wrote about it in the previous article about tail-based sampling here.

There are also some corner cases to consider with such a migration, which can be found in the theoretical part at the end of the article (like losing min & max values in the case of histograms, or issues with out-of-order observations).

Load balancing exporter
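
A first collector tier for this could look roughly like the sketch below. It is a minimal sketch only: the routing_key value, hostnames and pipeline layout are illustrative, and the second, stateful tier would be the one running the cumulativetodelta pipeline.

exporters:
  loadbalancing:
    routing_key: service          # keep all series of a given service on one backend collector
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      static:
        hostnames:
          - stateful-collector-1:4317
          - stateful-collector-2:4317
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [loadbalancing]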

Delta → Cumulative

This is not possible at the collector level at the moment. There is no such processor, although some exporters support this type of conversion, e.g. the Prometheus Exporter according to this ticket (but NOT the Prometheus Remote Write Exporter).

There is no out-of-the-box solution for the moment. We can only partially process metrics from the delta temporality preference mix (described below).

Solution: Centralised temporality configuration

What can you do to future-proof your setup? I’m a big fan of centralising observability configs, but I didn’t centralise all the OTEL env vars. That was a mistake.

Still, there is something we can do here to centralise the OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE value: we could store it e.g. in an SSM parameter and inject it into every workload as an environment variable.
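
For example, on Kubernetes the value could be injected from a single, centrally managed ConfigMap (which your platform tooling could keep in sync with the SSM parameter). This is just a minimal sketch; the ConfigMap and key names below are made up.

containers:
  - name: app
    env:
      - name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
        valueFrom:
          configMapKeyRef:
            name: otel-platform-config              # the single central source of truth
            key: metrics-temporality-preference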

The second question is:

Different types of exporters use different metric temporalities. What if you would like to use multiple exporters with different temporalities at the same time?

I believe that the only option is to use cumulative metrics temporality and convert to delta where needed, as described in the section below.

Temporality based processing

Processing based on aggregation_temporality attribute

There is a field in the OTEL metric contract that we can use to filter metric temporalities. According to the filter processor documentation, we can filter based on all metric fields and use enums for different types of temporalities, which are documented here.

We could use this information to distribute metrics to different collector pipelines based on their temporality.

cumulative → delta migration scenario
Based on the metric’s aggregation temporality, we can process only incoming cumulative metrics with the cumulativetodeltaprocessor rather than all incoming metrics.

...
processors:
  cumulativetodelta:
  filter/includecumulative:
    metrics:
      datapoint:
        # the filter processor DROPS data points matching a condition,
        # so dropping delta points leaves only cumulative ones in this pipeline
        - 'metric.aggregation_temporality == AGGREGATION_TEMPORALITY_DELTA'
...
service:
  pipelines:
    metrics/delta:
      receivers: [otlp]
      processors: [filter/includecumulative, cumulativetodelta, ...]
      exporters: [newrelic, datadog, lightstep]

delta → cumulative migration scenario
In this migration scenario, after configuring the filter so that only cumulative metrics reach the cumulative pipeline, we will start getting a subset of metrics for all of our applications, depending on the instrument type used. The delta temporality preference actually mixes cumulative and delta temporalities across instrument types (roughly: Counters and Histograms report delta, while UpDownCounters stay cumulative). As specified in the OTEL docs:

Full docs available here

and that way we can get insights into our metrics subset sooner as we progress with the migration.

...
processors:
  filter/includecumulative:
    metrics:
      datapoint:
        # the filter processor DROPS data points matching a condition,
        # so dropping delta points keeps only the cumulative ones here
        - 'metric.aggregation_temporality == AGGREGATION_TEMPORALITY_DELTA'
...
service:
  pipelines:
    metrics/delta:
      receivers: [otlp]
      processors: [...]
      exporters: [newrelic, datadog, lightstep]
    metrics/cumulative:
      receivers: [otlp]
      processors: [filter/includecumulative, ...]
      exporters: [prometheusremotewrite, lightstep]

STILL, WE WON’T GET DATA FROM COUNTERS USING DELTA TEMPORALITY, AND WE STILL HAVE TO GO THROUGH THE FULL MIGRATION. WE JUST GET DATA FROM A SUBSET OF METRICS SOONER.

Summary

I hope this helped you on your migration journey. And if you are just starting to read about OTEL, please consider centralising the environment variable OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE.

Have a great one, fellow O’Teliers! 🔭

Theoretical part

Metrics temporality is an approach to collecting, storing and reporting application metrics. Each temporality has its pros and cons.

We have two types of metric temporality, discussed in more detail below, that we can quickly summarise as follows (see the small example after this list):

  • Delta: reports the changes from a given period,
  • Cumulative: periodically reports the accumulated state.
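
A purely illustrative example (the numbers are made up): imagine a request counter that sees 2, 3 and then 0 new requests across three export intervals.

# the same traffic, reported with each temporality
delta:      [2, 3, 0]   # each data point carries only what changed in that interval
cumulative: [2, 5, 5]   # each data point carries the running total since process start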

Delta

The application sends “deltas”, i.e. the changes within a particular period.

It reports changes only after they have occurred. This is very important: it means our application doesn’t have to worry about the memory consumption of high-cardinality metrics and can delegate that processing elsewhere.

Cardinality is the size of the set of potential values. Examples:
→ low-cardinality set: HTTP request method names
→ high-cardinality set: customer IDs

Memory usage — Synchronous instrumentations
Because we only send “recent observations” to the collector, the risk of memory leaks is really low, and memory consumption stays low because counters can be “reset” after measurements are sent.

It’s a different story for async instrumentations; you can read about it here.

Data loss
A potential issue is data loss: observations carried by dropped samples cannot be recovered from later measurements.

Sending data
Only when new observations have happened.

Data reporting diagram

Example flow of requests, based on an example from the OTEL docs

Supported by e.g.:

  • DataDog
  • NewRelic
  • LightStep
  • Dynatrace

Cumulative

The application always sends all measurements as an accumulated value for each combination of labels. This means it has to track the accumulated state of collected metrics since the beginning of the process.

Memory usage — Synchronous instrumentations
Memory consumption in the application is higher, as it needs to keep the state of metrics over time. It’s not a level of memory consumption that will affect an application, though: according to Grafana Labs, it’s in the order of magnitude of 10MB per 32,000 metrics.
There is a risk of cardinality explosion, however. Since we have to send a cumulative measurement for each combination of label values, we have to keep state for every combination that has ever been observed (see the rough illustration below).
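
A rough illustration of how the combinations add up (the numbers are made up):

# every label combination becomes a separate series the SDK has to keep in memory
http_methods: 5         # e.g. GET, POST, PUT, DELETE, PATCH
customer_ids: 10000     # high-cardinality label
series_kept:  50000     # 5 * 10000 combinations, each tracked until the process restarts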

It’s a different story for async instrumentations; you can read about it here.

Data loss
Dropped samples can be approximated from subsequent measurements, since each one carries the accumulated value.

Sending data
The application constantly reports the accumulated value to the collector.

Data reporting diagram

Example flow of requests, based on an example from the OTEL docs

Supported by e.g.:

  • Prometheus
  • LightStep

How to set metrics temporality?

There are three metrics temporality preference options that can be set in OpenTelemetry using OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE.

Possible values of the metrics temporality preference (see the snippet after this list):

  • Cumulative [default]: cumulative temporality only
  • Delta: a mix of temporalities per instrument type, with details here
  • LowMemory: a mix of temporalities per instrument type, with details here
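
In its simplest form this is just an environment variable on the workload, e.g. declared in a compose file or task definition (a sketch; pick one of the values):

environment:
  - OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=cumulative   # the default
  # - OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=delta
  # - OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=lowmemory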

Cumulative → Delta conversions

There is a built-in processor for this type of conversion: the cumulativetodeltaprocessor.

Statefulness

This type of processing is stateful by nature: the delta has to be calculated from consecutive observations. This means there is an issue with the first observation received by a new collector instance, as there is nothing yet to calculate the delta from.

You just have to be careful: because it’s stateful, a given metric series has to be processed on the same collector instance. There is a solution for this. You can use a loadbalancing exporter layer, which finally supports both traces and metrics.

Out-of-order requests

Since each observation affects the calculation for all subsequent observations, receiving events out of order is a problem: a late data point corrupts not only its own result but also the ones that come after it. Usually observations are windowed to limit the impact of such situations. Consider the example below.
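
A purely illustrative sequence (my own numbers, assuming a naive subtraction-based conversion without windowing):

# the same cumulative counter values, received in a different order
in_order:     [10, 12, 15]   # computed deltas: 2, 3   (correct)
out_of_order: [10, 15, 12]   # computed deltas: 5, -3  (the late point corrupts its own delta and the next one)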

Histogram limitations

In some cases it might be impossible to deduce min and max values. Let’s consider the max value. It’s easy to deduce the max for a delta window whenever the reported value is higher than in the previous observation. When the reported maximum is not higher than in previous observations, the window’s true max might be any value from the min to max range.
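
A small made-up illustration of the max case:

# cumulative histogram max reported over two consecutive exports (values in ms)
previous_export_max: 250
current_export_max:  250    # unchanged
# the true max of the latest delta window cannot be recovered from this stream:
# it could be anything up to 250, so the conversion loses the real max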
