This post is part 3 of a series on .NET distributed tracing and OpenTelemetry, showing how to use the OpenTelemetry Protocol and the OpenTelemetry Collector to brilliantly connect .NET (or other languages) to Azure Monitor.

We have already seen in part 1 how distributed tracing is supported in .NET via W3C Trace Context propagation, with automatic (or mostly automatic) support across HttpClient calls and messaging, and in part 2 how to add automatic instrumentation with OpenTelemetry and view traces in Jaeger.

We will now look at how the OpenTelemetry Protocol (OTLP) and the OpenTelemetry Collector reduce instrumentation complexity and decouple your applications from the back ends they report to.

With your OpenTelemetry data sent to Azure Monitor and Azure Application Insights, you can easily analyse performance across the components of your application, and generate architectural diagrams such as the Application map.

Azure Monitor application map

Dealing with complexity

The OpenTelemetry standards include the OpenTelemetry Protocol (OTLP), which simplifies connections and provides interoperability between components. Many vendors already support OTLP, although the full specification is not yet finalised.

In the meantime, the OpenTelemetry Collector can be used to simplify your configuration by receiving OTLP and forwarding telemetry to different destinations.

OpenTelemetry collector

Using a collector also has other benefits, as it can be used for local batching, sampling, and processing pipelines before forwarding events upstream.

The collector is written in Go, and can be run in a Docker container. It supports logs, traces, and metrics.

Aside: Without the collector

Using separate exporters and protocols for each destination, for each language (even if they all conform to the OpenTelemetry standards) is a many-to-many problem that we want to avoid.

Many-to-many tracing complexity

Until all destinations support OTLP, we can avoid this by using the collector (above), which has more plug-ins available than individual language libraries.

Configuring the OTLP exporter

These examples are also available on GitHub: https://github.com/sgryphon/dotnet-distributed-tracing-examples

The collector is configured to export to Azure Monitor / Application Insights. Docker Compose is also used to run Jaeger as a destination for traces, and Loki + Grafana as a destination for logging.

When running the example, the OpenTelemetry Collector itself runs directly in Docker.

To configure the basic application, first add the packages for the OTLP exporter and the OTLP logs exporter.

dotnet add Demo.WebApp package OpenTelemetry.Exporter.OpenTelemetryProtocol --prerelease
dotnet add Demo.WebApp package OpenTelemetry.Exporter.OpenTelemetryProtocol.Logs --prerelease

dotnet add Demo.Service package OpenTelemetry.Exporter.OpenTelemetryProtocol --prerelease
dotnet add Demo.Service package OpenTelemetry.Exporter.OpenTelemetryProtocol.Logs --prerelease

dotnet add Demo.Worker package OpenTelemetry.Exporter.OpenTelemetryProtocol --prerelease
dotnet add Demo.Worker package OpenTelemetry.Exporter.OpenTelemetryProtocol.Logs --prerelease

Configuring the OTLP exporter requires the endpoint, processor type, and protocol; you can add these to the appsettings file for the environment, e.g. appsettings.Development.json. For example, to send to a local collector (running in Docker):

"OpenTelemetry": {
  "OtlpExporter": {
    "Endpoint": "http://localhost:4317/",
    "ExportProcessorType": "Batch",
    "Protocol": "grpc"
  }
}
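
These settings bind directly to properties on OtlpExporterOptions. As a minimal sketch (assuming the same local collector endpoint), the equivalent values could instead be set in code inside the AddOtlpExporter delegate, although binding from configuration, as shown below, keeps them adjustable per environment:

.AddOtlpExporter(otlpExporterOptions =>
{
    // Equivalent to the appsettings values above (sketch only).
    // Requires: using OpenTelemetry; using OpenTelemetry.Exporter;
    otlpExporterOptions.Endpoint = new Uri("http://localhost:4317/");
    otlpExporterOptions.ExportProcessorType = ExportProcessorType.Batch;
    otlpExporterOptions.Protocol = OtlpExportProtocol.Grpc;
});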

Then, instead of the destination-specific Jaeger exporter, use the OtlpExporter when configuring OpenTelemetry tracing, binding the configuration to the appsettings section.

builder.Services.AddOpenTelemetryTracing(configure =>
{
    configure
        .SetResourceBuilder(resourceBuilder)
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddNpgsql()
        .AddSource("MassTransit")
        .AddOtlpExporter(otlpExporterOptions =>
        {
            builder.Configuration.GetSection("OpenTelemetry:OtlpExporter")
                .Bind(otlpExporterOptions);
        });
});

For OpenTelemetry logging, you also need to configure the OpenTelemetry logging provider. This needs the resource and the exporter, but no additional instrumentation, as it just hooks into the standard ILogger<T> pipeline.

When configuring the OpenTelemetry logger, you can enable formatted messages, scopes, and state values.

builder.Logging
    .AddOpenTelemetry(configure =>
    {
        configure
            .SetResourceBuilder(resourceBuilder)
            .AddOtlpExporter(otlpExporterOptions =>
            {
                builder.Configuration.GetSection("OpenTelemetry:OtlpExporter").Bind(otlpExporterOptions);
            });
        configure.IncludeFormattedMessage = true;
        configure.IncludeScopes = true;
        configure.ParseStateValues = true;
    });
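
To illustrate what those settings add, consider a typical ILogger<T> call (a hypothetical example; the scope and message template are not from the demo code):

// IncludeScopes: the OrderId scope below is attached to each log record.
using (logger.BeginScope("OrderId: {OrderId}", orderId))
{
    // IncludeFormattedMessage: the rendered message text is exported;
    // the {Count} placeholder value is also exported as a structured
    // attribute, so back ends can query it without parsing the message.
    logger.LogInformation("Processed {Count} items", count);
}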

As before, no change to application code is needed. Setting up the OTLP exporter is entirely done in the application configuration.

Azure Monitor preparation

You need to set up Azure Monitor and Application Insights instances to send telemetry to. The GitHub project includes a script to create the necessary Azure resources.

az login
$VerbosePreference = 'Continue'
./deploy-infrastructure.ps1

You can also create them directly in PowerShell using the Azure CLI:

az login

$OrgId = "0x$($(az account show --query id --output tsv).Substring(0,4))"
$Environment = 'Dev'
$Location = 'australiaeast'

$appName = 'tracedemo'
$rgName = "rg-$appName-$Environment-001".ToLowerInvariant()

$TagDictionary = @{ WorkloadName = 'tracingdemo'; DataClassification = 'Non-business'; Criticality = 'Low';
  BusinessUnit = 'Demo'; ApplicationName = $appName; Env = $Environment }
$tags = $TagDictionary.Keys | ForEach-Object { $key = $_; "$key=$($TagDictionary[$key])" }

az group create -g $rgName -l $Location --tags $tags

$logName = "log-$appName-$Environment".ToLowerInvariant()
$appiName = "appi-$appName-$Environment".ToLowerInvariant()

az monitor log-analytics workspace create `
  --resource-group $rgName `
  -l $Location `
  --workspace-name $logName `
  --tags $tags

az monitor app-insights component create `
  --app $appiName `
  -g $rgName `
  --location $Location `
  --workspace $logName `
  --tags $tags

After creation, you can log in to the Azure portal at https://portal.azure.com to check that the logging components were created.

Azure resource cleanup

When you are finished, the demo resources can be removed:

az login
./remove-infrastructure.ps1

Collector configuration

When the OpenTelemetry Collector runs, it uses a configuration file that directs how telemetry is routed.

This configuration specifies the receiver plug-ins to use and the ports to listen on (OTLP), the pipeline processors, and the exporters to send to (azuremonitor, jaeger, and loki). Note that the azuremonitor configuration includes an environment variable, which is set from the deployed instance.

For local development, insecure connections are used (as the local endpoints don't have certificates set up).

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  azuremonitor:
    instrumentation_key: "${AZ_INSTRUMENTATION_KEY}"
  jaeger:
    endpoint: jaeger:14250
    tls:
      insecure: true
  logging:
    loglevel: info
  loki:
    endpoint: http://loki:3100/loki/api/v1/push
    format: json
    labels:
      resource:
        deployment.environment: "deployment_environment"
        host.name: "host_name"
        service.name: "service_name"
        service.namespace: "service_namespace"
      record:
        severity: "severity"
    tenant_id: tenant1
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, jaeger, azuremonitor]
    logs:
      receivers: [otlp]
      processors: []
      exporters: [logging, loki, azuremonitor]

Run the example

As the example includes many components, there is a helper shell script start-collector-demo.sh that uses tmux to split the terminal and start all of the individual components.

./start-collector-demo.sh

Example of the running system:

Collector example running in tmux

Running components individually

You can also run each of the components in separate shell terminals (the GitHub code has PowerShell versions of these commands):

OpenTelemetry collector

az_instrumentation_key=$(az monitor app-insights component show -a appi-tracedemo-dev -g rg-tracedemo-dev-001 -o tsv --query instrumentationKey)
docker run -it --rm -e AZ_INSTRUMENTATION_KEY=$az_instrumentation_key --network demo_default -p 4317:4317 -v $PWD/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml otel/opentelemetry-collector-contrib:0.50.0

Web interface

export ASPNETCORE_URLS="http://localhost:8002"
npm run start --prefix Demo.WebApp/ClientApp

Web app

version=$(dotnet gitversion /output json /showvariable InformationalVersion)
dotnet run --project Demo.WebApp -p:InformationalVersion=$version -- --urls http://*:8002 --environment Development

Service

version=$(dotnet gitversion /output json /showvariable InformationalVersion)
dotnet run --project Demo.Service -p:InformationalVersion=$version -- --urls https://*:44301 --environment Development

Worker

version=$(dotnet gitversion /output json /showvariable InformationalVersion)
dotnet run --project Demo.Worker -p:InformationalVersion=$version -- --environment Development

See results in Azure Monitor

Logging (and trace details)

Logs are forwarded to Application Insights, for viewing in the Azure Monitor Log Analytics workspace. From the workspace, select General > Logs, and then use the following query to see your logs:

union AppDependencies, AppRequests, AppTraces, AppExceptions
| where TimeGenerated > ago(1h)
| sort by TimeGenerated desc
| project TimeGenerated, OperationId, SeverityLevel, Message, Type, AppRoleName, Properties.SpanId, ParentId

Example output:

Azure Monitor logging

End-to-end transaction details

The telemetry also includes trace information, which can be used to analyse dependencies and timings in the Application Insights end-to-end transaction details.

In Application Insights, select Investigate > Performance in the left hand menu, then Operation Name "GET WeatherForecast/Get" (the top level operation requesting the page). The right hand side will show the instances. Click on "Drill into... N Samples" in the bottom right, then select the recent operation, to see the End-to-end transaction.

Azure Monitor end-to-end transaction

Application map

Application Insights can also generate an application map (screen shot at the top of this article) showing the tracing relationships between components, by selecting Investigate > Application Map in the left hand menu.

Next steps

Plan to move your existing applications across to OpenTelemetry and OTLP as support increases. Right now you can use the OpenTelemetry libraries and the OpenTelemetry Collector to forward logs, traces, and metrics from many applications into Azure Monitor.

In your local isolated development environment you can use Jaeger for tracing (detailed in the previous post). The full GitHub example, linked above, also has the OpenTelemetry Collector sending logs to Loki + Grafana for local logging. For local development you can easily run the needed services in Docker, so that everyone on your team has the same setup.

You can further enhance your instrumentation by using ActivitySource to create your own spans around parts of your code that do not have automatic instrumentation, and then register the source in the OpenTelemetry configuration, as sketched below.
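
A minimal sketch (the source name Demo.Service and the span name are illustrative, not from the demo code):

using System.Diagnostics;

public class WeatherProcessor
{
    // One ActivitySource per component, typically static; use a name
    // you can also register in the OpenTelemetry configuration.
    private static readonly ActivitySource Source = new("Demo.Service");

    public void Process()
    {
        // StartActivity returns null if nothing is listening, so use
        // the null-conditional operator when adding tags.
        using var activity = Source.StartActivity("ProcessWeatherData");
        activity?.SetTag("demo.item_count", 42);
        // ... the work this span should measure ...
    }
}

The source is then registered alongside the existing ones in the tracing configuration, e.g. .AddSource("Demo.Service").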

OpenTelemetry also supports metrics, not covered in this article, which are also useful for analysing application performance.


Thumbnail picture from: https://pixabay.com/illustrations/pulse-trace-healthcare-medicine-163708/