Pipeline export injection

Pipelines in Grafana Fleet Management can use the Alloy export block to share their components with other pipelines. This lets you create dynamic configurations that adapt to each pipeline's exported values, making pipeline configurations flexible and reusable.

How it works

When a pipeline is configured with Fleet Management, it can export values for use by other pipelines. These exports are injected into dependent configuration pipelines using the syntax argument.pipeline_exports.value["PIPELINE_NAME"]["EXPORT_NAME"], where "PIPELINE_NAME" is the user-designated name of the exporting pipeline and "EXPORT_NAME" is the argument name assigned in its export block. A short sketch of the pattern follows this list. This feature allows you to:

  • Share values between different pipelines.
  • Create reusable pipeline configurations.
  • Reduce configuration duplication.
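
For illustration, here is a minimal sketch of the pattern, assuming a pipeline named shared_write that exports the receiver of a prometheus.remote_write component. The pipeline and component names are illustrative.

alloy
// pipeline name: shared_write
// matching attributes: collector.os =~ ".*"

prometheus.remote_write "default" {
    endpoint {
        url = "<URL>"
    }
}

// Make the remote write receiver available to dependent pipelines.
export "receiver" {
    value = prometheus.remote_write.default.receiver
}

A dependent pipeline can then forward metrics to argument.pipeline_exports.value["shared_write"]["receiver"].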

Note

Make sure pipelines that define exports are always assigned pipeline matchers that are a superset of any pipeline that depends on them. For example, if a dependent pipeline is assigned the matcher collector.os = "linux", the exporting pipeline must have an assigned matcher that includes collector.os = "linux", such as collector.os =~ ".*". If the matchers of exporting pipelines aren’t inclusive of dependents, the dependent configuration pipelines could be incomplete and fail to load in Alloy.

Example: Common relabeling rules and remote write

In this example, you create a pipeline that exposes a common relabeling and remote write configuration to other pipelines, letting you manage shared behavior in a single configuration.

Note

All collectors that use these example pipelines need a collector attribute named department configured to take advantage of attribute injection, as shown in this example.

The following pipeline demonstrates how to define and export the common functionality:

alloy
// pipeline name: metrics_receiver
// matching attributes: collector.os =~ ".*"

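// Add a department label to incoming metrics, then forward them to remote write.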
prometheus.relabel "example" {
    forward_to = [prometheus.remote_write.example.receiver]

    rule {
        action        = "replace"
        target_label  = "department"
        replacement   = argument.attributes.value["department"]
    }
}

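// Write the relabeled metrics to the remote endpoint.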
prometheus.remote_write "example" {
    endpoint {
        url = "<URL>"

        basic_auth {
            username = "<USERNAME>"
            password = "<PASSWORD>"
        }
    }
}

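// Expose the relabel receiver so that dependent pipelines can forward metrics to it.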
export "receiver" {
    value = prometheus.relabel.example.receiver
}
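
With this export in place, dependent pipelines can reference the receiver as argument.pipeline_exports.value["metrics_receiver"]["receiver"].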

The following pipeline demonstrates how to leverage the common functionality from another pipeline:

alloy
// pipeline name: alloy_telemetry
// matching attributes: collector.os =~ ".*"

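// Collect telemetry about the running Alloy instance.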
prometheus.exporter.self "example" { }

prometheus.scrape "example" {
    targets    = prometheus.exporter.self.example.targets
    forward_to = [argument.pipeline_exports.value["metrics_receiver"]["receiver"]]
    job_name   = "integrations/alloy"
}

A third pipeline, for node exporter metrics, shares the same functionality:

alloy
// pipeline name: node_exporter
// matching attributes: collector.os = "linux"

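// Collect node exporter metrics from the local Linux host.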
prometheus.exporter.unix "example" { }

prometheus.scrape "example" {
    targets    = prometheus.exporter.unix.example.targets
    forward_to = [argument.pipeline_exports.value["metrics_receiver"]["receiver"]]
    job_name   = "integrations/node"
}

This set of configurations:

  1. Defines a common relabel rule and remote write.
  2. Scrapes Alloy telemetry and forwards it to the common relabel receiver.
  3. Scrapes node exporter telemetry and forwards it to the common relabel receiver.