According to a recent CNCF blog post, Kubernetes and OpenTelemetry rank first and second among open source projects in terms of project velocity. Since its launch over 10 years ago, Kubernetes® has become the software industry's standard platform for managing containerized applications across a cluster of servers. For newcomers to the observability domain, OpenTelemetry™ provides a standard way to collect telemetry data (metrics, logs, and traces) from software applications and infrastructure and send it to one or more backends for performance analysis. The backends can be open source (Jaeger or Zipkin, for example), commercial (such as Splunk AppDynamics or Splunk Observability), or both.
To enable faster adoption and showcase instrumentation best practices, the OTel community has built a demo application, the OpenTelemetry Community Demo. In this blog, I'll show how to configure the Kubernetes deployment of the OpenTelemetry Demo to send trace data to Splunk AppDynamics for further analysis. If you are interested in observing the Docker Compose deployment of the OpenTelemetry Demo application in Splunk AppDynamics, please refer to this other article.
Splunk AppDynamics provides full-stack observability of hybrid and on-premises applications and their impact on business performance. In addition to its proprietary ingestion format, AppDynamics also supports OpenTelemetry trace ingestion from various language agents (Java, .NET, Python, Go, etc.), giving customers more options for ingesting telemetry data.
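As a quick illustration of what that looks like on the application side, an OpenTelemetry SDK or agent is typically pointed at an OTLP endpoint (a collector or a backend) through the standard OpenTelemetry environment variables; the endpoint and service name below are placeholders, not values specific to AppDynamics:

# standard OpenTelemetry SDK settings; values here are illustrative only
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_SERVICE_NAME="my-service"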
The OpenTelemetry Community Demo is a simulated eCommerce store selling astronomy equipment. The app consists of 14+ microservices communicating with each other over HTTP or gRPC. The microservices are built in a variety of programming languages (Java, JavaScript, C#, etc.) and instrumented with OpenTelemetry (automatic instrumentation, manual instrumentation, or both). The diagram below shows the data flow and the programming languages used.
(Image credit: OpenTelemetry Demo contributors.)
In addition to the microservices shown here, the demo app also ships with supporting components such as the OpenTelemetry Collector, Grafana, Prometheus, and Jaeger to export and visualize traces, metrics, and more. The OpenTelemetry Collector is highly configurable: once exporters for various backends are defined and enabled in the service pipeline, the Collector can send telemetry data to multiple backends simultaneously. The diagram below shows the OTel demo with its supporting components, as well as a dotted line to Splunk AppDynamics, which we will configure in the next section.
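As a generic sketch of that fan-out (not the demo's actual configuration; the backend names and endpoints below are placeholders), a Collector pipeline that sends the same traces to two backends looks like this:

receivers:
  otlp:
    protocols:
      grpc: {}
processors:
  batch: {}
exporters:
  otlp/jaeger:                 # placeholder: in-cluster Jaeger
    endpoint: jaeger-collector:4317
    tls:
      insecure: true
  otlphttp/vendor:             # placeholder: any OTLP/HTTP-capable backend
    endpoint: https://otlp.example.com
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger, otlphttp/vendor]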
# install kind (Kubernetes in Docker) and create a local cluster
brew install kind
kind create cluster --name otel-demo
# confirm the cluster is up and system pods are running
kubectl get pods -A
# install Helm and add the OpenTelemetry Helm repository
brew install helm
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
# deploy the demo with the chart's default values
helm install my-otel-demo open-telemetry/opentelemetry-demo
# expose the demo's frontend proxy locally
kubectl port-forward svc/my-otel-demo-frontend-proxy 8080:8080
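Before moving on, it is worth confirming that the demo pods are healthy. The label selector below assumes the chart applies the standard app.kubernetes.io/instance label, and the UI paths are the demo's defaults:

# all demo pods should reach the Running (or Completed) state
kubectl get pods -l app.kubernetes.io/instance=my-otel-demo
# with the port-forward active, the web store is at http://localhost:8080
# and the bundled Jaeger UI at http://localhost:8080/jaeger/ui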
opentelemetry-collector:
  config:
    processors:
      resource:
        attributes:
          - key: appdynamics.controller.account
            action: upsert
            value: "from AppD account url > Otel > Configuration > Processor section"
          - key: appdynamics.controller.host
            action: upsert
            value: "from AppD account url > Otel > Configuration > Processor section"
          - key: appdynamics.controller.port
            action: upsert
            value: 443
          - key: service.namespace
            action: upsert
            value: appd-otel-demo-k8s-kind-mac # custom name for your App
      batch:
        timeout: 30s
        send_batch_size: 90
    exporters:
      otlphttp/appdynamics:
        endpoint: "from AppD account url > Otel > Configuration > Exporter section"
        headers: {"x-api-key": "from AppD account url > Otel > Configuration > API Key"}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [resource, batch]
          exporters: [otlp, spanmetrics, otlphttp/appdynamics]
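Before reinstalling, you can sanity-check how Helm merges these overrides into the Collector's rendered configuration (the release and file names here match the commands that follow):

# render the chart locally and look for the AppDynamics exporter in the output
helm template appd-otel-demo open-telemetry/opentelemetry-demo --values otel-col-appd.yaml | grep -A 3 "otlphttp/appdynamics"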
# save the values above as otel-col-appd.yaml, then replace the default release
helm uninstall my-otel-demo
helm install appd-otel-demo open-telemetry/opentelemetry-demo --values otel-col-appd.yaml
# the frontend-proxy service name follows the new release name
kubectl port-forward svc/appd-otel-demo-frontend-proxy 8080:8080
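To confirm spans are actually reaching AppDynamics, tail the Collector logs and watch for export errors (4xx/5xx responses from the AppDynamics endpoint). The deployment name below assumes the chart's default <release>-otelcol naming and may differ in your cluster:

# a clean log with no exporter errors means traces should start appearing in AppDynamics shortly
kubectl logs deployment/appd-otel-demo-otelcol --tail=100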
The OpenTelemetry Community Demo application is a valuable and safe tool for learning about OpenTelemetry and instrumentation best practices. In this blog, we showed how to configure the Kubernetes deployment of the demo app to send telemetry data to Splunk AppDynamics. We also explored key Splunk AppDynamics features such as the Flow Map and APM metrics, and observed an increase in error rates via a fault-injection scenario.