Kubernetes: dapr and distributed tracing with Azure Monitor

Microservices are the modern way of designing software architectures. A microservice is a simple, independently deployable service that can be scaled according to your needs. Compared to a monolithic architecture, the interface layer has moved to the network. As developers we are used to debugging with the call stack in a monolithic architecture. With microservices those days are over, because a call stack is only available within a single process. But how do we debug across process boundaries? That is where distributed tracing comes in.

With ApplicationInsights, Azure Monitor offers a distributed tracing solution that makes a developer’s life easier. ApplicationInsights provides an application map view that aggregates many transactions into a topological view of how the systems interact and what their average performance and error rates are.

Distributed tracing in dapr is based on OpenTelemetry (the successor of OpenCensus) for distributed traces and metrics collection. You can define exporters to send telemetry to any endpoint that understands the OpenTelemetry format. dapr adds an HTTP/gRPC middleware to the dapr sidecar. The middleware intercepts all dapr and application traffic and automatically injects correlation IDs to trace distributed transactions.
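Tracing is switched on through a dapr Configuration resource that the sidecar picks up. The following is a minimal sketch in the dapr 0.x format that was current when the LocalForwarder was the recommended exporter; field names may differ in newer dapr releases, and the resource name `tracing` is my own choice:

```yaml
# Sketch: dapr tracing configuration (dapr 0.x format).
# Enables trace collection for every sidecar that references this configuration.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    enabled: true
    expandParams: true
    includeBody: true
```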

To push telemetry to an ApplicationInsights instance, an agent that understands the OpenCensus format has to transform and forward the data. Microsoft provides such an agent with the LocalForwarder, an open source project on GitHub that collects OpenCensus telemetry and routes it to ApplicationInsights.
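The sidecar is then pointed at the LocalForwarder through an exporter component. A minimal sketch, assuming the LocalForwarder is exposed inside the cluster as a service named `localforwarder` on the default OpenCensus agent port 55678; the `exporters.native` component type follows the dapr 0.x naming and may differ in newer releases:

```yaml
# Sketch: dapr exporter component that ships OpenCensus telemetry to the LocalForwarder.
# The service name "localforwarder" and its namespace are assumptions for this demo.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: native
spec:
  type: exporters.native
  metadata:
  - name: enabled
    value: "true"
  - name: agentEndpoint
    value: "localforwarder.default.svc.cluster.local:55678"
```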

Demo Architecture

I have created a demo architecture that shows how distributed tracing in dapr is configured and how telemetry is routed to ApplicationInsights. To keep it simple, the application consists of four services. There are three backend services: ServiceA, ServiceB and ServiceC. Each of them accepts HTTP requests and returns a simple string. The fourth service is a Frontend that uses Swagger to render a simple UI. The Frontend makes calls to the backend services.
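Each service runs with a dapr sidecar and references the tracing configuration shown above. The following Deployment for ServiceA is only a sketch: the image name is a placeholder, and the annotation names follow the dapr 0.x convention (`dapr.io/id`, `dapr.io/port`); newer releases use `dapr.io/app-id` and `dapr.io/app-port` instead.

```yaml
# Sketch: ServiceA with an injected dapr sidecar and the "tracing" configuration attached.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicea
  template:
    metadata:
      labels:
        app: servicea
      annotations:
        dapr.io/enabled: "true"     # inject the dapr sidecar
        dapr.io/id: "servicea"      # dapr app id used for service invocation
        dapr.io/port: "80"          # port the application listens on
        dapr.io/config: "tracing"   # attach the tracing configuration from above
    spec:
      containers:
      - name: servicea
        image: <your-registry>/servicea:latest  # placeholder image
        ports:
        - containerPort: 80
```

The Frontend then calls a backend through its own sidecar using dapr’s service invocation API, for example `GET http://localhost:3500/v1.0/invoke/servicea/method/<method>`, and the sidecars propagate the correlation headers from hop to hop.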

After the application is deployed to Kubernetes and some test data has been generated, the application map in ApplicationInsights can be viewed.

Demo application on GitHub

The demo application is available in my GitHub repository. The repository contains a detailed description of how to set up distributed tracing in dapr on Kubernetes.

https://github.com/AndreasM009/dapr-distributed-tracing-azure-monitor

Kubernetes: Producer Consumer pattern with scalable consumer using dapr, KEDA and Azure ServiceBus Queues

I think every architect or developer knows the producer-consumer pattern. This pattern is used to create jobs that can be processed asynchronously in the background. In this context the Producer creates the jobs that are processed by the Consumer. To store the job descriptions, the Producer typically uses a message queue. In a cloud environment there are a lot of different queue technologies available: RabbitMQ, Redis or Azure ServiceBus Queues, to name just a few. Normally, as an architect or developer, you choose one technology and use the appropriate integration library in your code. You have to know how the integration library works and you have to ensure that the library is available for your development platform. If you switch to another queue technology, the code must be adapted as well. Getting used to an integration library can sometimes be hard and requires additional work from your developers.

dapr is an event-driven, portable runtime for building microservices on cloud and edge. dapr changes the way you build event-driven microservices.

In dapr you can use output and input bindings to send messages to and receive messages from a queue. When you decide on a queue technology like Redis, RabbitMQ or Azure ServiceBus Queues, you usually have to use its integration library in your code. With dapr you integrate input and output bindings on a higher abstraction level, and you don’t need to know how the underlying integration library works.
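With Azure ServiceBus Queues, the binding is declared as a dapr component that both the producer and the consumer reference. A minimal sketch, assuming a queue named `jobs`; the component name `jobs-queue` and the placeholder connection string are my own choices:

```yaml
# Sketch: dapr binding component for Azure ServiceBus Queues.
# The producer uses it as an output binding, the consumer receives messages
# through it as an input binding.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: jobs-queue
spec:
  type: bindings.azure.servicebusqueues
  metadata:
  - name: connectionString
    value: "<Azure ServiceBus connection string>"  # placeholder, keep real values in a secret
  - name: queueName
    value: "jobs"
```

The producer enqueues a job by POSTing it to its sidecar at `http://localhost:3500/v1.0/bindings/jobs-queue`, and dapr delivers each queued message to the consumer by calling an endpoint named after the component on the consumer’s application port.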

With dapr you can integrate queues independently of the underlying technology. But what about scaling? Sometimes you want to scale the consumer out depending on the number of messages in the queue. To achieve this you can use KEDA in Kubernetes.

KEDA allows for fine-grained autoscaling (including to/from zero) for event-driven Kubernetes workloads. KEDA serves as a Kubernetes metrics server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition.
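For this scenario, the scaling rule is a ScaledObject that watches the ServiceBus queue and scales the consumer Deployment. The following is a sketch in the KEDA v2 API (KEDA v1 used the `keda.k8s.io/v1alpha1` group and a `deploymentName` field in `scaleTargetRef`); the resource names and the TriggerAuthentication reference are my own choices:

```yaml
# Sketch: KEDA ScaledObject that scales the "consumer" Deployment between 0 and 10
# replicas based on the length of the Azure ServiceBus queue "jobs".
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    name: consumer          # the consumer Deployment to scale
  minReplicaCount: 0        # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: jobs
      messageCount: "5"     # target messages per replica
    authenticationRef:
      name: servicebus-auth # TriggerAuthentication holding the connection string
```

With `minReplicaCount: 0` the consumer is removed entirely when the queue is empty and scaled up again as soon as new messages arrive.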

Sample architecture

See it in action

To see dapr and KEDA in action, I have created a GitHub repository that guides you through setting up the architecture described above.

https://github.com/AndreasM009/dapr-keda-azsbqueue