Understanding the Components of OpenTelemetry: API/SDK, Receiver, Processor, and Exporter
Introduction
OpenTelemetry (OTel) is an open-source observability framework that enables the collection, processing, and exporting of traces, metrics, and logs from applications and infrastructure. It provides standardized APIs, SDKs, and tools to ensure seamless instrumentation, processing, and analysis of observability data.
In this article, we will explore the four key components of OpenTelemetry:
API/SDK – For application instrumentation.
Receiver – For collecting data from various sources.
Processor – For modifying and enriching observability data.
Exporter – For sending data to observability backends like Prometheus, Jaeger, and AWS X-Ray.
Each component plays a crucial role in ensuring scalability, flexibility, and vendor neutrality in observability pipelines.
1️⃣ API/SDK: The Foundation of OpenTelemetry Instrumentation
What is OpenTelemetry API/SDK?
The API provides the interface for instrumenting applications with tracing, metrics, and logs.
The SDK provides the implementation of the API, allowing developers to collect and process telemetry data before exporting it.
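This separation matters in practice: a library can instrument itself against the API alone, and those calls remain near-zero-cost no-ops until the hosting application installs and configures the SDK. A minimal sketch (the scope name "my.library" and the function are illustrative):
# Library code depends only on the opentelemetry-api package.
from opentelemetry import trace

# "my.library" is an illustrative instrumentation scope name.
tracer = trace.get_tracer("my.library")

def handle_request():
    # Records a span only if the application has configured an SDK;
    # otherwise this is a no-op with negligible overhead.
    with tracer.start_as_current_span("handle-request"):
        pass  # business logic goes here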
How API/SDK Works
Developers add OpenTelemetry API calls to instrument their code.
The OpenTelemetry SDK processes the data (e.g., sampling, batching).
The SDK then exports the data, either to an OpenTelemetry Collector (where a Receiver ingests it) or directly to an observability backend.
Key Components of OpenTelemetry API/SDK
✅ Traces API: Captures requests, dependencies, and spans across microservices.
✅ Metrics API: Records performance indicators like response time, memory usage, CPU load.
✅ Logs API: Captures structured event data for debugging and auditing.
Example: OpenTelemetry SDK in Python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# Initialize Tracer
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)
# Configure Exporter
span_exporter = OTLPSpanExporter(endpoint="http://otel-collector:4317")
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(span_exporter))
# Start a Trace
with tracer.start_as_current_span("user-login"):
    print("User login event tracked")
This example captures a user login event as a span and sends the trace to the OpenTelemetry Collector over OTLP.
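The Metrics API follows the same pattern. Below is a minimal sketch that counts login events and exports them over OTLP; it assumes the same Collector endpoint as above, and the metric and attribute names are illustrative:
from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

# Periodically export metrics to the Collector over OTLP/gRPC
exporter = OTLPMetricExporter(endpoint="http://otel-collector:4317")
reader = PeriodicExportingMetricReader(exporter)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

# Create an instrument and record a measurement
meter = metrics.get_meter(__name__)
login_counter = meter.create_counter("user_logins", description="Number of user logins")
login_counter.add(1, {"login.method": "password"})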
2️⃣ Receiver: Collecting Data from Different Sources
What is a Receiver?
A Receiver is the entry point of the OpenTelemetry Collector: it ingests telemetry data from various sources.
Receivers support many protocols and formats, including OTLP, Jaeger, Zipkin, Prometheus, and Fluent Forward (the protocol used by Fluent Bit).
How Receivers Work
Application SDKs or third-party agents send data to the OpenTelemetry Collector.
Receivers ingest the data based on predefined configurations.
The collected data is passed to Processors for further refinement.
Types of OpenTelemetry Receivers
✅ OTLP Receiver: Collects OpenTelemetry Protocol (OTLP) data from SDKs.
✅ Jaeger Receiver: Accepts spans from Jaeger agents or clients.
✅ Zipkin Receiver: Collects traces from Zipkin-instrumented applications.
✅ Prometheus Receiver: Scrapes metrics from Prometheus endpoints.
✅ Fluent Forward Receiver: Ingests logs from Fluent Bit and Fluentd forwarders via the Fluent Forward protocol.
Example: Configuring a Receiver in OpenTelemetry Collector
receivers:
  otlp:
    protocols:
      grpc:
      http:
  prometheus:
    config:
      scrape_configs:
        - job_name: "my-app"
          static_configs:
            - targets: ["my-app-service:9090"]
This configuration collects traces via OTLP and metrics from a Prometheus endpoint.
3️⃣ Processor: Transforming and Enhancing Observability Data
What is a Processor?
A Processor modifies, filters, or enriches telemetry data before it is sent to Exporters.
It optimizes data flow by reducing redundancy and adding metadata.
How Processors Work
Receivers collect data and pass it to Processors.
Processors apply transformations, filtering, batching, or sampling.
Processed data is sent to Exporters for storage or visualization.
Common Types of OpenTelemetry Processors
✅ Batch Processor: Groups telemetry data before sending to Exporters (reduces network overhead).
✅ Memory Limiter Processor: Ensures the Collector does not exceed its memory limits.
✅ Filter Processor: Removes unneeded spans, logs, or metrics.
✅ Attributes Processor: Adds custom labels or metadata to traces and logs.
Example: Configuring a Processor in OpenTelemetry Collector
processors:
  batch:
    timeout: 5s
  attributes:
    actions:
      - key: "environment"
        value: "production"
        action: insert
This configuration flushes batches at least every 5 seconds and adds an environment attribute with the value production to each item.
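The Memory Limiter and Filter processors mentioned above are configured in the same block. A hedged sketch, assuming the contrib distribution of the Collector (which includes the filter processor) and an illustrative health-check route to drop:
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 512
    spike_limit_mib: 128
  filter:
    traces:
      span:
        - 'attributes["http.target"] == "/healthz"'   # drop health-check spans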
4️⃣ Exporter: Sending Data to Backends for Storage and Visualization
What is an Exporter?
An Exporter is responsible for sending processed telemetry data to a backend like Jaeger, Prometheus, Datadog, AWS X-Ray, or Elasticsearch.
Exporters define where and how telemetry data is stored or analyzed.
How Exporters Work
Receivers collect telemetry data.
Processors refine and modify the data.
Exporters send the final data to external storage or monitoring tools.
Common OpenTelemetry Exporters
✅ Jaeger Exporter: Sends traces to Jaeger (recent Collector releases drop this exporter in favor of sending OTLP directly to Jaeger, which accepts it natively).
✅ Prometheus Exporter: Exposes metrics on an HTTP endpoint for Prometheus to scrape.
✅ Elasticsearch Exporter: Stores logs and traces in Elasticsearch.
✅ AWS X-Ray Exporter: Integrates with AWS monitoring tools.
Example: Configuring an Exporter in OpenTelemetry Collector
exporters:
  jaeger:
    endpoint: jaeger:14250
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"
This configuration sends traces to Jaeger over plaintext gRPC and exposes metrics on port 8889 for Prometheus to scrape.
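On their own, receivers, processors, and exporters do nothing; the Collector only activates the components referenced in its service.pipelines section. A minimal sketch that wires together the components configured in the examples above:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, attributes]
      exporters: [jaeger]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [prometheus]
Within each pipeline, data flows from receivers through processors to exporters.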
Conclusion
OpenTelemetry provides a powerful, standardized observability pipeline with the following components:
✅ API/SDK – Instruments applications for trace, metric, and log generation.
✅ Receiver – Collects telemetry data from applications and third-party tools.
✅ Processor – Modifies, enriches, and batches observability data.
✅ Exporter – Sends the final data to monitoring and storage backends.
By integrating these components, organizations can achieve full-stack observability in Kubernetes, microservices, and cloud-native environments. 🚀