We tried using a context propagator, but it passes the data in headers scoped to the entire schedule and reuses the same headers for every workflow. Additionally, those headers must be explicitly added to workflows for them to appear in traces. We also tried using the memo field in the schedule definition, but it didn't work as expected.
As a workaround, we manually created traces, set attributes, and assigned tracking identifiers to workflow and activity traces. This way, if four workflows are running under one schedule at a time, there will be four separate identifiers for the schedule, allowing us to track the workflows and activities independently.
- Run a Temporal service. One way is to use the Temporal CLI:

  ```
  temporal server start-dev
  ```

- Run the following command to start the worker:

  ```
  go run opentelemetry/worker/main.go
  ```

- In another terminal, run the following command to run the workflow:

  ```
  go run opentelemetry/starter/main.go
  ```

Both the worker and the starter output their traces to stdout.
If all you need is to see Workflows and Activities, there's no need to set up instrumentation for the Temporal cluster.
In order to send the traces to a real service, you need to replace

```go
exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
if err != nil {
	log.Fatalln("failed to initialize stdouttrace exporter", err)
}
```

with
```go
// Configure a new OTLP exporter using environment variables for sending data to Honeycomb over gRPC
clientOTel := otlptracegrpc.NewClient()
exp, err := otlptrace.New(ctx, clientOTel)
if err != nil {
	log.Fatalf("failed to initialize exporter: %v", err)
}
```

and provide the required additional parameters, such as the OTLP endpoint.
For many services, that just means setting the standard OTel environment variables:

- `OTEL_SERVICE_NAME`
- `OTEL_EXPORTER_OTLP_ENDPOINT`
- `OTEL_EXPORTER_OTLP_HEADERS`
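For example, the variables can be exported in the shell before starting the worker and the starter. The values below are placeholders for a Honeycomb-style backend, not values from this sample; substitute your own endpoint and credentials:

```shell
# Placeholder values -- replace with your backend's endpoint and API key.
export OTEL_SERVICE_NAME="temporal-otel-sample"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io:443"
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"
```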
As an example, this is what is rendered by Honeycomb.io.
