Push Consumers

Use this section to find example declarations and notes for supported push-based consumers. See Pull Consumers for pull-based consumers.

Important

Each of the following examples shows only the Consumer class of a declaration and must be included with the rest of the base declaration (see Components of the declaration).

Splunk

Required information:
  • Host: The address of the Splunk instance that runs the HTTP event collector (HEC).
  • Protocol: Check whether TLS is enabled within the HEC settings (Settings > Data Inputs > HTTP Event Collector).
  • Port: Default is 8088; this can be configured within the Global Settings section of the Splunk HEC.
  • API Key: An API key must be created and provided in the passphrase object of the declaration. Refer to the Splunk documentation for the correct way to create an HEC token.

If you want to specify proxy settings for Splunk consumers in TS 1.17 and later, see the Splunk Proxy example.

Note

When using the custom endpoints feature, be sure to include /mgmt/tm/sys/global-settings in your endpoints for Telemetry Streaming to be able to find the hostname.

NEW in TS 1.19
Be sure to see Memory usage spikes in the Troubleshooting section for information on the compressionType property introduced in TS 1.19. When set to none, this property stops TS from compressing data before sending it to Splunk, which can help reduce memory usage.

Example Declaration:

Important

This example has been updated with the compressionType property introduced in TS 1.19. If you are using a previous version of Telemetry Streaming, this declaration will fail. When using a previous version, remove the compressionType line and the comma that precedes it.

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Splunk",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 8088,
        "passphrase": {
            "cipherText": "apikey"
        },
        "compressionType": "gzip"
    }
}
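As a rough illustration of how these settings fit together, the sketch below assembles the HEC endpoint URL and Authorization header that the consumer block describes. The /services/collector/event path and the Splunk <token> header scheme come from Splunk's HEC documentation; build_hec_request is a hypothetical helper, not part of Telemetry Streaming.

```python
def build_hec_request(consumer: dict) -> dict:
    """Derive the Splunk HEC URL and auth header from a Telemetry_Consumer block."""
    url = "{protocol}://{host}:{port}/services/collector/event".format(
        protocol=consumer["protocol"],
        host=consumer["host"],
        port=consumer["port"],
    )
    # The HEC token is carried in the passphrase object of the declaration
    token = consumer["passphrase"]["cipherText"]
    return {"url": url, "headers": {"Authorization": "Splunk " + token}}

req = build_hec_request({
    "protocol": "https", "host": "192.0.2.1", "port": 8088,
    "passphrase": {"cipherText": "apikey"},
})
# req["url"] -> "https://192.0.2.1:8088/services/collector/event"
```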

Splunk Legacy format (Deprecated)

Important

The Splunk Legacy format has been deprecated as of Telemetry Streaming 1.17, and has entered maintenance mode. This means there will be no further TS development for the Splunk Legacy format.
We recommend using the Splunk default format, or Splunk multi-metric format (currently experimental).

The format property can be set to legacy for Splunk users who want to convert the stats output to a format similar to the F5 Analytics App for Splunk. For more information, see the F5 Analytics iApp Template documentation. For more information about using the HEC, see the Splunk HTTP Event Collector documentation. See the following example.

To poll for any data involving tmstats, you must have a Splunk consumer with the legacy format as described in this section. This includes GET requests to the SystemPoller API, because that data is only pulled for a legacy Splunk consumer.

Telemetry Streaming 1.7.0 and later gathers additional data from tmstats tables to improve compatibility with Splunk Legacy consumers.

In Telemetry Streaming v1.6.0 and later, you must use the facility parameter with the legacy format to specify a Splunk facility in your declarations. The facility parameter identifies the location or facility in which the BIG-IP is located (such as 'Main Data Center', 'AWS', or 'NYC').

If a Splunk consumer is configured with the legacy format, it ignores events from the Event Listener.

Required information for facility:
  • The facility parameter must be inside of actions and then setTag as shown in the example.
  • The value for facility is arbitrary, but must be a string.
  • The locations property must include "system": true, as that is where facility is expected.
  • The value for facility is required when the format is legacy (it is required by the Splunk F5 Dashboard application; a declaration without it will still succeed).

Example Declaration for Legacy (including facility):

{
    "class": "Telemetry",
      "My_System": {
          "class": "Telemetry_System",
          "systemPoller": {
            "interval": 60,
            "actions": [
              {
                "setTag": {
                  "facility": "facilityValue"
                },
                "locations": {
                  "system": true
                }
              }
            ]
          }
      },
      "My_Consumer": {
          "class": "Telemetry_Consumer",
          "type": "Splunk",
          "host": "192.0.2.1",
          "protocol": "https",
          "port": 8088,
          "passphrase": {
              "cipherText": "apikey"
          },
          "format": "legacy"
      }
  }
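The setTag action in the declaration above can be read as: copy the facility tag into the system branch of the poller output before it reaches the consumer. A minimal sketch of that behavior (assumed semantics, not the actual TS implementation):

```python
def apply_set_tag(data: dict, set_tag: dict, locations: dict) -> dict:
    """Copy the tags into each top-level location flagged true."""
    for location, enabled in locations.items():
        if enabled and isinstance(data.get(location), dict):
            data[location].update(set_tag)
    return data

poller_output = {"system": {"hostname": "bigip.example.com"}}
tagged = apply_set_tag(poller_output,
                       {"facility": "facilityValue"},
                       {"system": True})
# tagged["system"] now carries both hostname and facility
```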

Splunk multi-metric format

Warning

Splunk multi-metric format is currently EXPERIMENTAL. It requires Splunk version 8.0.0 or later.

Telemetry Streaming 1.17 introduces the ability to use Splunk multi-metric format (currently experimental) for Splunk 8.0.0 and later. Splunk multi-metric format allows each JSON object to contain measurements for multiple metrics, which generate multiple-measurement metric data points, taking up less space on disk and improving search performance.

See the Splunk documentation for more information.

Important

Only canonical (default) system poller output is supported. Custom endpoints are NOT supported with the multi-metric format.

To use this feature, the format of the Splunk Telemetry_Consumer must be set to multiMetric as shown in the example.

Example Declaration for Splunk multi-metric:

{
    "class": "Telemetry",
      "My_System": {
          "class": "Telemetry_System",
          "systemPoller": {
            "interval": 60,
            "actions": [
              {
                "setTag": {
                  "facility": "facilityValue"
                },
                "locations": {
                  "system": true
                }
              }
            ]
          }
      },
      "My_Consumer": {
          "class": "Telemetry_Consumer",
          "type": "Splunk",
          "host": "192.0.2.1",
          "protocol": "https",
          "port": 8088,
          "passphrase": {
              "cipherText": "apikey"
          },
          "format": "multiMetric"
      }
  }

Microsoft Azure Log Analytics

Required Information:
  • Workspace ID: Navigate to Log Analytics workspace > Advanced Settings > Connected Sources.
  • Shared Key: Navigate to Log Analytics workspace > Advanced Settings > Connected Sources and use the primary key.

Important

The Azure Log Analytics Consumer only supports sending 500 items. Each configuration item (such as a virtual server, pool, or node) uses part of this limit.

Format property

Telemetry Streaming 1.24 adds the format property for Azure Log Analytics. This property was added to reduce the number of columns in the output, which prevents a potential Azure error stating Data of Type F5Telemetry was dropped because number of field is above the limit of 500.

The values for format are:

  • default - This is the default value, and does not change the behavior from previous versions. In this mode, each unique item gets a set of columns. With some properties such as Client and Server SSL profiles, the number of columns exceeds the maximum allowed by Azure.
    For example, with a CA bundle certificate, there may be fields for expirationDate, expirationString, issuer, name, and subject. TS creates a column named ca-bundle_crt_expirationDate and four additional columns for the other four properties. The name value is a prefix for every column.
  • propertyBased - This value causes Telemetry Streaming to create fewer columns by using the property name for the column. In the example above, the column (property) name is just expirationDate, and all certificates use this column for the expiration dates.
    Note that this happens only if the property name exists and matches the declared object name at the top; otherwise, the naming mode falls back to default.
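The two naming modes can be sketched with the CA bundle example above (an illustrative helper, not TS code):

```python
def column_names(object_name: str, properties: list, fmt: str) -> list:
    """Approximate the Log Analytics column names TS emits per format mode."""
    if fmt == "default":
        # default: each unique item gets its own prefixed set of columns
        return ["{}_{}".format(object_name, p) for p in properties]
    # propertyBased: items share one column per property name
    return list(properties)

props = ["expirationDate", "expirationString", "issuer", "name", "subject"]
default_cols = column_names("ca-bundle_crt", props, "default")
shared_cols = column_names("ca-bundle_crt", props, "propertyBased")
# default_cols[0] -> "ca-bundle_crt_expirationDate"; shared_cols[0] -> "expirationDate"
```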

To see more information about sending data to Log Analytics, see HTTP Data Collector API documentation.
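Behind the scenes, the HTTP Data Collector API authenticates each POST with a SharedKey signature built from the Workspace ID and Shared Key. The sketch below follows the signing scheme from Azure's HTTP Data Collector API documentation; TS performs an equivalent step internally, and the inputs here are illustrative placeholders.

```python
import base64
import hashlib
import hmac

def shared_key_auth(workspace_id: str, shared_key_b64: str,
                    content_length: int, rfc1123_date: str) -> str:
    """Build the Authorization header per the HTTP Data Collector API."""
    string_to_sign = "POST\n{}\napplication/json\nx-ms-date:{}\n/api/logs".format(
        content_length, rfc1123_date)
    digest = hmac.new(base64.b64decode(shared_key_b64),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return "SharedKey {}:{}".format(
        workspace_id, base64.b64encode(digest).decode("utf-8"))

auth = shared_key_auth("workspaceid",
                       base64.b64encode(b"sharedkey").decode("utf-8"),
                       100, "Mon, 04 Apr 2022 08:00:00 GMT")
# auth has the form "SharedKey <workspaceId>:<base64 HMAC-SHA256 signature>"
```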

Region property

The region property for Azure Log Analytics and Application Insights was added in part to support the Azure Government regions.

  • This optional property is used to determine cloud type (public/commercial, govcloud) so that the correct API URLs can be used (example values: westeurope, japanwest, centralus, usgovvirginia, and so on).
  • If you do not provide a region, Telemetry Streaming attempts to look it up from the instance metadata.
  • If it is unable to extract metadata, TS defaults to public/commercial.
  • Check the Azure Products Available by Region for product/region compatibility for Azure Government.
  • See the Azure documentation for a valid list of regions (resource location), and Region list for example values from the Azure CLI.

Note

The following example has been updated with the useManagedIdentity, region, and format properties. You must be using a TS version that supports these properties (TS 1.24 for format).
See Using Managed Identities following the example for information about using Azure Managed Identities and Telemetry Streaming.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "workspaceId": "workspaceid",
        "passphrase": {
            "cipherText": "sharedkey"
        },
        "useManagedIdentity": false,
        "region": "westus",
        "format": "propertyBased"
    }
}

Example Dashboard:

The following is an example of the Azure dashboard with Telemetry Streaming data. To create a similar dashboard, see Azure dashboard. To create custom views using View Designer, see Microsoft documentation.


Using Microsoft Managed Identities for Log Analytics

Telemetry Streaming v1.11 adds support for sending data to Azure Log Analytics with an Azure Managed Identity. For specific information on Managed Identities, see Microsoft documentation.

Important: The managed identity assigned to the VM must have, at a minimum, the following permissions (see the Azure documentation for detailed information):

  • List subscriptions
  • List workspaces for the subscription(s)
  • Log Analytics Contributor for the workspace (either at the Workspace resource level or inherited via resource group)

Telemetry Streaming supports Managed Identities using a new useManagedIdentity property, set to true. You cannot specify a passphrase when this property is set to true, and you must specify passphrase when this property is omitted or set to false. If you do not include this property at all, Telemetry Streaming behaves as though the value is false.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "workspaceId": "workspaceid",
        "useManagedIdentity": true
    }
}

Microsoft Azure Application Insights

Required Information:

  • Instrumentation Key: If provided, Use Managed Identity must be false or omitted (the default). Navigate to Application Insights > {AppinsightsName} > Overview.
  • Use Managed Identity: If true, Instrumentation Key must be omitted. See Managed Identities for App Insight.

Optional Properties:

  • Max Batch Size: The maximum number of telemetry items to include in a payload to the ingestion endpoint (default: 250)
  • Max Batch Interval Ms: The maximum amount of time to wait, in milliseconds, for the payload to reach maxBatchSize (default: 5000)
  • App Insights Resource Name: Name filter used to determine which App Insights resource(s) to send metrics to. If not provided, TS sends metrics to all App Insights resources in the subscription to which the managed identity has permissions. Note: to be used only when useManagedIdentity is true.
  • customOpts: Additional options for use by the consumer client library. These are passthrough options (key/value pairs) to send to the Microsoft node client.

Warning

The customOpts options are not guaranteed to work and may change according to the client library API; use these options with caution. Refer to the corresponding client library documentation for acceptable keys and values.

To see more information about Azure Application Insights, see Microsoft documentation.

Region property

Telemetry Streaming v1.11 adds the region property for Azure Log Analytics and Application Insights. This is in part to support the Azure Government regions.

  • This optional property is used to determine cloud type (public/commercial, govcloud) so that the correct API URLs can be used (example values: westeurope, japanwest, centralus, usgovvirginia, and so on).
  • If you do not provide a region, Telemetry Streaming attempts to look it up from the instance metadata.
  • If it is unable to extract metadata, TS defaults to public/commercial.
  • Check the Azure Products Available by Region for product/region compatibility for Azure Government.
  • See the Azure documentation for a valid list of regions (resource location), and Region list for example values from the Azure CLI.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Application_Insights",
        "instrumentationKey": "app-insights-resource-guid",
        "maxBatchSize": 125,
        "maxBatchIntervalMs": 30000,
        "customOpts": [
            {
                "name": "samplingPercentage",
                "value": 75
            }
        ],
        "useManagedIdentity": false,
        "region": "usgovarizona"
    }
}

Using Microsoft Managed Identities for Application Insights

Telemetry Streaming v1.11 also adds support for sending data to Azure Application Insights with an Azure Managed Identity. For specific information on Managed Identities, see Microsoft documentation.

Important: The managed identity assigned to the VM must have, at a minimum, the following permissions (see the Azure documentation for detailed information):

  • List Microsoft.Insight components for subscription(s), for example the Monitoring Reader role
  • Push metrics to the App Insights resource, for example the Monitoring Metrics Publisher role

Telemetry Streaming supports Managed Identities using a new useManagedIdentity property, set to true. You cannot specify an instrumentationKey when this property is set to true. You must specify instrumentationKey when this property is omitted or when the value is false. If you do not include this property at all, Telemetry Streaming behaves as though the value is false. You can optionally provide an appInsightsResourceName to limit which App Insights resource(s) to send metrics to. Without the filter, metrics will be sent to all App Insights resources to which the managed identity has permissions.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Application_Insights",
        "appInsightsResourceName": "app.*insights.*",
        "maxBatchSize": 125,
        "maxBatchIntervalMs": 30000,
        "customOpts": [
            {
                "name": "samplingPercentage",
                "value": 75
            }
        ],
        "useManagedIdentity": true
    }
}

AWS CloudWatch

AWS CloudWatch has two consumers: CloudWatch Logs and CloudWatch Metrics (new in TS 1.14). If you do not use the new dataType property, the system defaults to CloudWatch Logs.

Important

In TS 1.9.0 and later, the username and passphrase for CloudWatch are optional. This is because a user can send data from a BIG-IP that has an appropriate IAM role in AWS to AWS CloudWatch without a username and passphrase.

In TS 1.18 and later, the root certificates for AWS services are now embedded within Telemetry Streaming and are the only root certificates used in requests made to AWS services per AWS’s move to its own Certificate Authority, noted in https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/.

AWS CloudWatch Logs (default)

Required information:
  • Region: AWS region of the CloudWatch resource.
  • Log Group: Navigate to CloudWatch > Logs
  • Log Stream: Navigate to CloudWatch > Logs > Your_Log_Group_Name
  • Username: Navigate to IAM > Users
  • Passphrase: Navigate to IAM > Users

To see more information about creating and using IAM roles, see the AWS Identity and Access Management (IAM) documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "AWS_CloudWatch",
        "region": "us-west-1",
        "logGroup": "f5telemetry",
        "logStream": "default",
        "username": "accesskey",
        "passphrase": {
            "cipherText": "secretkey"
        }
    }
}

AWS CloudWatch Metrics

Telemetry Streaming 1.14 introduced support for AWS CloudWatch Metrics. To specify CloudWatch Metrics, use the new dataType property with a value of metrics as shown in the example.

Notes for CloudWatch Metrics:
  • It can take up to 15 minutes for metrics to appear if they do not exist and AWS has to create them.
  • You must use the dataType property with a value of metrics; if you do not specify metrics, the system defaults to Logs.
  • Some properties are restricted so that you cannot use them with the wrong dataType (for example, you cannot use logStream when dataType is "metrics").
Required Information:
  • Region: AWS region of the CloudWatch resource.
  • MetricNamespace: Namespace for the metrics. Navigate to CloudWatch > Metrics > All metrics > Custom Namespaces
  • DataType: Value should be metrics
  • Username: Navigate to IAM > Users
  • Passphrase: Navigate to IAM > Users

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "AWS_CloudWatch",
        "dataType": "metrics",
        "metricNamespace": "a-custom_namespace",
        "region": "us-west-1",
        "username": "accesskey",
        "passphrase": {
            "cipherText": "secretkey"
        }
    }
}
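The property restrictions noted above can be sketched as a small validation step. The rule set here is an assumption for illustration, not the actual TS schema:

```python
# Assumed split of log-only vs. metric-only consumer properties
LOG_ONLY = {"logGroup", "logStream"}
METRIC_ONLY = {"metricNamespace"}

def check_cloudwatch(consumer: dict) -> list:
    """Return the property names that conflict with the chosen dataType."""
    data_type = consumer.get("dataType", "logs")  # defaults to CloudWatch Logs
    bad = METRIC_ONLY if data_type == "logs" else LOG_ONLY
    return sorted(k for k in consumer if k in bad)

errors = check_cloudwatch({"dataType": "metrics",
                           "metricNamespace": "a-custom_namespace",
                           "logStream": "default"})
# errors -> ["logStream"], because logStream is a Logs-only property
```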

AWS S3

Required Information:
  • Region: AWS region of the S3 bucket.
  • Bucket: Navigate to S3 to find the name of the bucket.
  • Username: Navigate to IAM > Users
  • Passphrase: Navigate to IAM > Users

To see more information about creating and using IAM roles, see the AWS Identity and Access Management (IAM) documentation.

Important

In TS 1.12.0 and later, the username and passphrase for S3 are optional. This is because a user can send data from a BIG-IP that has an appropriate IAM role in AWS to AWS S3 without a username and passphrase.

In TS 1.18 and later, the root certificates for AWS services are now embedded within Telemetry Streaming and are the only root certificates used in requests made to AWS services per AWS’s move to its own Certificate Authority, noted in https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "AWS_S3",
        "region": "us-west-1",
        "bucket": "bucketname",
        "username": "accesskey",
        "passphrase": {
            "cipherText": "secretkey"
        }
    }
}

Graphite

Required Information:
  • Host: The address of the Graphite system.
  • Protocol: Check Graphite documentation for configuration.
  • Port: Check Graphite documentation for configuration.

Note

To see more information about installing Graphite, see Installing Graphite documentation. To see more information about Graphite events, see Graphite Events documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Graphite",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 443
    }
}

Kafka

Required Information:
  • Host: The address of the Kafka system.
  • Port: The port of the Kafka system.
  • Topic: The topic where data should go within the Kafka system
  • Protocol: The protocol of the Kafka system. Options: binaryTcp or binaryTcpTls. Default is binaryTcpTls.
  • Authentication Protocol: The protocol to use for the authentication process. Options: SASL-PLAIN, None, and (in TS 1.17 and later) TLS. Default is None.
  • Username: The username to use for the authentication process.
  • Password: The password to use for the authentication process.

Note

To see more information about installing Kafka, see Installing Kafka documentation.

New in TS 1.17

Telemetry Streaming 1.17 and later adds the ability to add TLS client authentication to the Kafka consumer using the TLS authentication protocol. This protocol configures Telemetry Streaming to provide the required private key and certificate(s) when the Kafka broker is configured to use SSL/TLS Client authentication.

You can find more information on Kafka’s client authentication on the Confluent pages: https://docs.confluent.io/5.5.0/kafka/authentication_ssl.html.

There are 3 new properties on the Kafka consumer:

  • privateKey: The private key for the SSL certificate. Must be formatted as a 1-line string, with literal new line characters.
  • clientCertificate: The client certificate chain. Must be formatted as a 1-line string, with literal new line characters.
  • rootCertificate: The Certificate Authority root certificate, used to validate the client certificate. Certificate verification can be disabled by setting allowSelfSignedCert to true. Must be formatted as a 1-line string, with literal new line characters.
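Because a JSON string cannot contain raw line breaks, a multi-line PEM key or certificate must be collapsed into a 1-line string with literal \n sequences before it goes into the declaration. One way to produce that string (json.dumps performs the escaping):

```python
import json

# Stand-in for a PEM file read from disk; the "MIID..." body is elided
pem = """-----BEGIN CERTIFICATE-----
MIID...
-----END CERTIFICATE-----"""

# json.dumps escapes each newline as the two-character sequence \n,
# yielding a single-line JSON string value suitable for the declaration
one_line = json.dumps(pem.strip())
```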

Important

The following declaration has been updated to include the new TLS authentication protocol introduced in TS 1.17. If you attempt to use this declaration on a previous version, it will fail. To use this declaration on previous versions, remove the My_Consumer_TLS_client_auth block and the comma that precedes it.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Kafka",
        "host": "192.0.2.1",
        "protocol": "binaryTcpTls",
        "port": 9092,
        "topic": "f5-telemetry"
    },
    "My_Consumer_SASL_PLAIN_auth": {
        "class": "Telemetry_Consumer",
        "type": "Kafka",
        "host": "192.0.2.1",
        "protocol": "binaryTcpTls",
        "port": 9092,
        "topic": "f5-telemetry",
        "authenticationProtocol": "SASL-PLAIN",
        "username": "username",
        "passphrase": {
        	"cipherText": "passphrase"
        }
    },
    "My_Consumer_TLS_client_auth": {
        "class": "Telemetry_Consumer",
        "type": "Kafka",
        "host": "kafka.example.com",
        "protocol": "binaryTcpTls",
        "port": 9092,
        "topic": "f5-telemetry",
        "authenticationProtocol": "TLS",
        "privateKey": {
        	"cipherText": "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----"
        },
        "clientCertificate": {
            "cipherText": "-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----"
        },
        "rootCertificate": {
            "cipherText": "-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----"
        }
    }
}

ElasticSearch

Note

TS 1.24 adds support for sending data to ElasticSearch 7.

Required Information:
  • Host: The address of the ElasticSearch system.
  • Index: The index where data should go within the ElasticSearch system.
Optional Parameters:
  • Port: The port of the ElasticSearch system. Default is 9200.
  • Protocol: The protocol of the ElasticSearch system. Options: http or https. Default is http.
  • Allow Self Signed Cert: Allows TS to skip certificate validation. Options: true or false. Default is false.
  • Path: The path to use when sending data to the ElasticSearch system.
  • Data Type: The type of data posted to the ElasticSearch system.
  • API Version: The API version of the ElasticSearch system. Options: Any version string matching the ElasticSearch node(s) version. The default is 6.0.
  • Username: The username to use when sending data to the ElasticSearch system.
  • Passphrase: The secret/password to use when sending data to the ElasticSearch system.

Important

Telemetry Streaming 1.24 and later use the API Version value to determine the appropriate defaults to use for the Data Type parameter.
When the API Version is 6.X or earlier, f5.telemetry is used as the default Data Type.
When the API Version is 7.0 through the last 7.X version, _doc is used as the default Data Type.
In API Version 8.0 and later, the Data Type value is not supported and is not accepted in the Telemetry Streaming declaration.
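The version-dependent defaults above can be sketched as follows, assuming the major version is taken from the front of the apiVersion string:

```python
def default_data_type(api_version: str):
    """Pick the default ElasticSearch Data Type per the rules above."""
    major = int(api_version.split(".")[0])
    if major <= 6:
        return "f5.telemetry"
    if major == 7:
        return "_doc"
    return None  # 8.0+: dataType is not accepted in the declaration

# default_data_type("6.7.2") -> "f5.telemetry"
# default_data_type("7.10.2") -> "_doc"
```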


To see more information about installing ElasticSearch, see Installing ElasticSearch documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "ElasticSearch",
        "host": "10.145.92.42",
        "index": "testindex",
        "port": 9200,
        "protocol": "https",
        "dataType": "f5.telemetry",
        "apiVersion": "6.7.2"
    }
}

Sumo Logic

Required Information:
  • Host: The address of the Sumo Logic collector.
  • Protocol: The protocol of the Sumo Logic collector.
  • Port: The port of the Sumo Logic collector.
  • Path: The HTTP path of the Sumo Logic collector (without the secret).
  • Secret: The protected portion of the HTTP path (the final portion of the path, sometimes called a system tenant).
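The path and secret combine into the full collector URL. A sketch of that assembly (hypothetical helper; in a real declaration the secret would come from the decrypted passphrase):

```python
def sumo_url(consumer: dict, secret: str) -> str:
    """Join host, port, path, and the protected path segment into one URL."""
    return "{protocol}://{host}:{port}{path}{secret}".format(
        protocol=consumer["protocol"], host=consumer["host"],
        port=consumer["port"], path=consumer["path"], secret=secret)

url = sumo_url({"protocol": "https", "host": "192.0.2.1",
                "port": 443, "path": "/receiver/v1/http/"}, "secret")
# url -> "https://192.0.2.1:443/receiver/v1/http/secret"
```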

Note

To see more information about installing Sumo Logic, see Installing Sumo Logic documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Sumo_Logic",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 443,
        "path": "/receiver/v1/http/",
        "passphrase": {
            "cipherText": "secret"
        }
    }
}

StatsD

Required Information:
  • Host: The address of the StatsD instance.
  • Protocol: The protocol of the StatsD instance. Options: TCP (TS 1.12+) or UDP. The default is UDP.
  • Port: The port of the StatsD instance

Important

In TS v1.15 and later, if Telemetry Streaming is unable to locate the hostname in the systemPoller data, it sends the metric with the hostname value hostname.unknown. This gets transformed to hostname-unknown as required by StatsD. This is because StatsD uses a . as a delimiter, and TS automatically replaces it with a -. For example:
{
      metricName: 'f5telemetry.hostname-unknown.customStats.clientSideTraffic-bitsIn',
      metricValue: 111111030
}
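The substitution described above amounts to replacing dots inside a single metric-path segment (a sketch, not the actual TS implementation):

```python
def statsd_segment(value: str) -> str:
    """StatsD treats '.' as a namespace separator, so dots inside one
    segment are replaced with '-'."""
    return value.replace(".", "-")

metric_name = "f5telemetry.{}.customStats.clientSideTraffic-bitsIn".format(
    statsd_segment("hostname.unknown"))
# metric_name -> "f5telemetry.hostname-unknown.customStats.clientSideTraffic-bitsIn"
```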

Note

When using the custom endpoints feature, be sure to include /mgmt/tm/sys/global-settings in your endpoints for Telemetry Streaming to be able to find the hostname.

To see more information about installing StatsD, see StatsD documentation on GitHub.

addTags (experimental)

Telemetry Streaming 1.21 adds a new property for the StatsD consumer: addTags. This feature causes TS to parse the incoming payload for values to automatically add as tags. Currently only the sibling method is supported.

To see an example and the output from addTags, see addTags example.


Example Declaration (updated with the experimental addTags feature introduced in TS 1.21; if you are using a prior version, remove the My_Consumer_with_AutoTagging block and the comma that precedes it):

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Statsd",
        "host": "192.0.2.1",
        "protocol": "udp",
        "port": 8125
    },
    "My_Consumer_with_AutoTagging": {
        "class": "Telemetry_Consumer",
        "type": "Statsd",
        "host": "192.0.2.1",
        "protocol": "udp",
        "port": 8125,
        "addTags": {
            "method": "sibling"
        }
    }
}

Generic HTTP

Required Information:
  • Host: The address of the system.
Optional Properties:
  • Protocol: The protocol of the system. Options: https or http. Default is https.
  • Port: The port of the system. Default is 443.
  • Path: The path of the system. Default is /.
  • Method: The method of the system. Options: POST, PUT, GET. Default is POST.
  • Headers: The headers of the system.
  • Passphrase: The secret to use when sending data to the system, for example an API key to be used in an HTTP header.
  • fallbackHosts: List FQDNs or IP addresses to be used as fallback hosts
  • proxy: Proxy server configuration

Note

Because this consumer is designed to be generic and flexible, how authentication is performed is left up to the web service. To ensure secrets are encrypted within Telemetry Streaming, note the use of JSON pointers: the secret to protect should be stored inside passphrase and referenced in the desired destination property, such as an API token in a header as shown in this example.
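Conceptually, the pointer is resolved at send time so the secret only ever appears under passphrase in the declaration. A sketch of that resolution (resolve_headers is a hypothetical helper, and the backtick pointer syntax is TS-specific):

```python
# TS-specific pointer token used in header values to reference the passphrase
POINTER = "`>@/passphrase`"

def resolve_headers(consumer: dict) -> dict:
    """Replace the passphrase pointer in header values with the secret."""
    secret = consumer["passphrase"]["cipherText"]
    return {h["name"]: (secret if h["value"] == POINTER else h["value"])
            for h in consumer["headers"]}

headers = resolve_headers({
    "headers": [{"name": "x-api-key", "value": "`>@/passphrase`"}],
    "passphrase": {"cipherText": "apikey"},
})
# headers -> {"x-api-key": "apikey"}
```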

New in TS 1.18

Telemetry Streaming 1.18 and later adds the ability to use TLS client authentication with the Generic HTTP consumer. This configures Telemetry Streaming to provide the required private key and certificate(s) when the specified HTTP endpoint is configured to use SSL/TLS client authentication.

You can find background on SSL/TLS client authentication on the Confluent pages: https://docs.confluent.io/5.5.0/kafka/authentication_ssl.html.

There are 3 new properties:

  • privateKey: The private key for the SSL certificate. Must be formatted as a 1-line string, with literal new line characters.
  • clientCertificate: The client certificate chain. Must be formatted as a 1-line string, with literal new line characters.
  • rootCertificate: The Certificate Authority root certificate, used to validate the client certificate. Certificate verification can be disabled by setting allowSelfSignedCert to true. Must be formatted as a 1-line string, with literal new line characters.


Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 443,
        "path": "/",
        "method": "POST",
        "headers": [
            {
                "name": "content-type",
                "value": "application/json"
            },
            {
                "name": "x-api-key",
                "value": "`>@/passphrase`"
            }
        ],
        "passphrase": {
            "cipherText": "apikey"
        }
    }
}

F5 Beacon

F5 Beacon, a SaaS offering, provides visibility and actionable insights into the health and performance of applications.

F5 Beacon uses the generic HTTP consumer. To see an example of Generic HTTP with proxy settings, see Specifying proxy settings for Generic HTTP consumers.

Required Information:
  • See Beacon documentation for information on how to add Telemetry Streaming as a source to Beacon.
  • Host: The address of the system.
  • Protocol: The protocol of the system. Options: https or http. Default is https.
  • Port: The port of the system. Default is 443.
  • Path: The path of the system. Default is /.
  • Method: The method of the system. Options: POST, PUT, GET. Default is POST.
  • Headers: The headers of the system.
  • Passphrase: The secret to use when sending data to the system, for example an API key to be used in an HTTP header.

Example Declaration:

{
    "class": "Telemetry",
    "Beacon_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "ingestion.ovr.prd.f5aas.com",
        "protocol": "https",
        "port": 50443,
        "path": "/beacon/v1/ingest-telemetry-streaming",
        "method": "POST",
        "enable": true,
        "trace": false,
        "headers": [
            {
                "name": "grpc-metadata-x-f5-ingestion-token",
                "value": "`>@/passphrase`"
            }
        ],
        "passphrase": {
            "cipherText": "F5 Beacon Access Token"
        }
    }
}

Fluentd


Required Information:
  • Host: The address of the system.
  • Protocol: The protocol of the system. Options: https or http. Default is https.
  • Port: The port of the system. Default is 9880.
  • Path: The path of the system. This parameter corresponds to the tag of the event being sent to Fluentd (see the Fluentd documentation for more information).
  • Method: The method of the system. This must be POST.
  • Headers: The headers of the system. Important: The content-type = application/json header shown in the example is required.
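The path-to-tag correspondence above can be sketched in one line: Fluentd's in_http input derives the event tag from the request path with the leading slash stripped.

```python
# Sketch: Fluentd's in_http input tags each event with the request path,
# minus the leading slash, so the declaration's "path" sets the event tag.
def path_to_tag(path):
    return path.lstrip("/")

print(path_to_tag("/fluentd.tag"))
```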

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 9880,
        "path": "/fluentd.tag",
        "method": "POST",
        "headers": [
            {
                "name": "content-type",
                "value": "application/json"
            }
        ]
    }
}

Google Cloud Operations Suite’s Cloud Monitoring


Note

Google recently changed the name of its StackDriver product to Cloud Operations Suite, with the monitoring product named Cloud Monitoring.

Required Information:
  • projectId: The ID of the GCP project.
  • serviceEmail: The email for the Google Service Account. To check if you have an existing Service Account, from the left menu of GCP, select IAM & admin, and then click Service Accounts. If you do not have a Service Account, you must create one.
  • privateKeyId: The ID of the private key that the user created for the Service Account (if you do not have a key, from the account page, click Create Key with a type of JSON. The Private key is in the file that was created when making the account).
  • privateKey: The private key given to the user when a private key was added to the service account.

For complete information on deploying Google Cloud Operations Suite, see Google Operations suite documentation.

New in TS 1.22
Telemetry Streaming 1.22 adds a new property reportInstanceMetadata to this consumer. This allows you to enable or disable metadata reporting. The default is false.

Finding the Data
Once you have configured the Google Cloud Monitoring consumer and sent a Telemetry Streaming declaration, Telemetry Streaming creates custom MetricDescriptors to which it sends metrics. These metrics can be found under a path such as custom/system/cpu. To make it easier to find data that is relevant to a specific device, TS uses the Generic Node resource type and assigns the machine ID to the node_id label to identify which device the data came from.

Important

There is a quota of 500 custom MetricDescriptors for Google Cloud Monitoring. Telemetry Streaming creates these MetricDescriptors, and if this quota is ever reached, you must delete some of these MetricDescriptors.

Example Declaration (updated to include the reportInstanceMetadata property introduced in TS 1.22; remove the reportInstanceMetadata line if using a previous version):

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Google_Cloud_Monitoring",
        "privateKey": {
            "cipherText": "yourPrivateKey"
        },
        "projectId": "yourGoogleCloudMonitoringProjectId",
        "privateKeyId": "yourPrivateKeyId",
        "serviceEmail": "yourServiceEmail",
        "reportInstanceMetadata": false
    }
}

Google Cloud Logging


Required Information:
  • serviceEmail: The email for the Google Service Account. To check if you have an existing Service Account, from the GCP left menu, select IAM & admin, and then click Service Accounts. If you do not have a Service Account, you must create one.
  • privateKeyId: The ID of the private key the user created for the Service Account (if you do not have a key, from the Account page, click Create Key with a type of JSON. The Private key is in the file that was created when making the account).
  • privateKey: The private key given to the user when a private key was added to the service account.
  • logScopeId: The ID of the scope specified in the logScope property. If using a logScope of projects, this is the ID for your project.
  • logId: The Google Cloud logging LOG_ID where log entries will be written.

For complete information, see the Google Cloud Logging documentation.

Finding the Data
Once you have configured the Google Cloud Logging consumer and sent a Telemetry Streaming declaration, Telemetry Streaming sends log entries directly to Google Cloud Logging. Log entries are written to a logName in Google Cloud Logging, where the logName is generated from the properties in the Telemetry Streaming declaration using the following format: [logScope]/[logScopeId]/logs/[logId] (for example: "projects/yourProjectId/logs/yourLogId").
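The logName construction described above can be sketched as:

```python
# Build the logName that Google Cloud Logging entries are written to,
# from the declaration's logScope, logScopeId, and logId properties.
def build_log_name(log_scope, log_scope_id, log_id):
    return f"{log_scope}/{log_scope_id}/logs/{log_id}"

print(build_log_name("projects", "yourProjectId", "yourLogId"))
# → projects/yourProjectId/logs/yourLogId
```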

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Google_Cloud_Logging",
        "logScope": "projects",
        "logScopeId": "yourProjectId",
        "logId": "yourLogId",
        "privateKey": {
            "cipherText": "yourPrivateKey"
        },
        "privateKeyId": "yourPrivateKeyId",
        "serviceEmail": "yourServiceEmail"
    }
}

F5 Cloud Consumer (F5 Internal)

The F5 Cloud Consumer is a part of F5's internal, digital experience operating system, a cloud-based analytics platform that helps organizations monitor, operate, and protect digital workflows and optimize their customers' digital experiences.

Important

This F5 Cloud consumer is for F5 internal use only, and its API is subject to change. We are including it on this page of Push consumers because you may see it in a Telemetry Streaming declaration.

New in TS 1.22
Telemetry Streaming 1.22 adds the eventSchemaVersion property. This allows you to select the appropriate event schema instead of using a hard-coded value.


Example Declaration (if using a version prior to 1.22, remove the eventSchemaVersion line):

{
  "class": "Telemetry",
  "AsmPolicyOWASP_Endpoint": {
    "class": "Telemetry_Endpoints",
    "items": {
      "asmpolicyowasp": {
        "path": "/mgmt/tm/asm/owasp/policy-score?$select=policyId,policyScore,securityRisks"
      },
      "asmpolicy": {
        "path": "/mgmt/tm/asm/policies?$expand=historyRevisionReference&$select=id,name,fullPath,virtualServers,history-revisions/revision,history-revisions/activatedAtDatetime"
      },
      "asmsystem": {
        "path": "/mgmt/tm/sys/global-settings?$select=hostname"
      }
    }
  },
  "AsmPolicyOWASP_Poller": {
    "class": "Telemetry_System_Poller",
    "interval": 65,
    "enable": true,
    "trace": true,
    "endpointList": "AsmPolicyOWASP_Endpoint"
  },
  "SystemVIP_Poller": {
    "class": "Telemetry_System_Poller",
    "interval": 65,
    "enable": false,
    "trace": true,
    "allowSelfSignedCert": false,
    "host": "localhost",
    "port": 8100,
    "protocol": "http",
    "actions": [
      {
        "enable": true,
        "includeData": {},
        "locations": {
          "system": true,
          "virtualServers": true
        }
      }
    ]
  },
  "AsmEvent_Listener": {
    "class": "Telemetry_Listener",
    "port": 6514,
    "enable": true,
    "trace": false
  },
  "AsmIncidents_Endpoint": {
    "class": "Telemetry_Endpoints",
    "items": {
      "asmincidents": {
        "path": "/mgmt/tm/asm/events/incidents?$select=virtualServerName,incidentStatus,clientIp,incidentSeverity,durationInSeconds,requestCount,id,policy/name,policy/fullPath,incidentType,incidentSubtype,lastRequestDatetime&$expand=policyReference,incidentTypeReference&$top=100&$orderby=lastRequestDatetime+desc"
      },
      "asmsystem": {
        "path": "/mgmt/tm/sys/global-settings?$select=hostname"
      }
    }
  },
  "AsmIncidents_Poller": {
    "class": "Telemetry_System_Poller",
    "interval": 60,
    "enable": true,
    "trace": false,
    "endpointList": "AsmIncidents_Endpoint"
  },
  "Cloud_Consumer": {
    "allowSelfSignedCert": true,
    "class": "Telemetry_Consumer",
    "type": "F5_Cloud",
    "enable": true,
    "trace": true,
    "f5csTenantId": "a-blabla-a",
    "f5csSensorId": "12345",
    "payloadSchemaNid": "f5",
    "serviceAccount": {
      "authType": "google-auth",
      "type": "service_account",
      "projectId": "deos-dev",
      "privateKeyId": "11111111111111111111111",
      "privateKey": {
        "cipherText": "-----BEGIN PRIVATE KEY-----\nPRIVATEKEY"
      },
      "clientEmail": "test@deos-dev.iam.gserviceaccount.com",
      "clientId": "1212121212121212121212",
      "authUri": "https://accounts.google.com/o/oauth2/auth",
      "tokenUri": "https://oauth2.googleapis.com/token",
      "authProviderX509CertUrl": "https://www.googleapis.com/oauth2/v1/certs",
      "clientX509CertUrl": "https://www.googleapis.com/robot/v1/metadata/x509/test%40deos-dev.iam.gserviceaccount.com"
    },
    "targetAudience": "deos-ingest",
    "eventSchemaVersion": "5"
  },
  "schemaVersion": "1.15.0"
}

DataDog


Telemetry Streaming 1.23 added the compressionType property to the DataDog consumer. The acceptable values are none (the default) for no compression, or gzip, in which case the payload is compressed using gzip.

Telemetry Streaming 1.24 added the following properties to the DataDog consumer:

  • region: The acceptable values are US1 (the default), US3, EU1, and US1-FED.
  • service: The name of the service generating telemetry data (string). The default is f5-telemetry. This property exposes the DATA_DOG_SERVICE_FIELD value that is sent to the DataDog API.

NOTE: The following declaration includes the region and service properties introduced in TS 1.24. If you attempt to use this declaration on a previous version, it will fail. On previous versions, remove the region and service lines, along with the comma at the end of the compressionType line.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "DataDog",
        "apiKey": "api_key",
        "compressionType": "gzip",
        "region": "US1",
        "service": "f5-telemetry"
    }
}

OpenTelemetry Exporter (EXPERIMENTAL)


The OpenTelemetry Exporter Consumer exports telemetry data to an OpenTelemetry Collector, or OpenTelemetry Protocol compatible API.

Important

Be aware of the following before deploying this consumer:
* The OpenTelemetry Exporter consumer is an experimental feature and will change in future releases based on feedback.
* There is a possibility this consumer might crash Telemetry Streaming, or send incorrectly formatted data or incomplete metrics.
* Some metrics might lack tags or context (such as iRule events) that will be addressed in future updates.

Required Information:
  • Host: The address of the OpenTelemetry Collector, or OpenTelemetry Protocol compatible API
  • Port: The port of the OpenTelemetry Collector, or OpenTelemetry Protocol compatible API
Optional Properties:
  • metricsPath: The URL path to send metrics telemetry to
  • headers: Any HTTP headers required to send metrics telemetry to an OpenTelemetry Protocol compatible API
Note: As of Telemetry Streaming 1.23, this consumer:
  • Only exports OpenTelemetry metrics (logs and traces are not supported)
  • Exports telemetry data using protobufs over HTTP
  • Extracts metrics from Event Listener log messages. Any integer or float values in Event Listener log messages will be converted to an OpenTelemetry metric, and exported as a metric.
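The metric-extraction behavior described in the last bullet can be sketched roughly as follows (hypothetical field names; not TS's actual implementation):

```python
# Sketch: each integer or float value in an Event Listener log message is
# extracted and exported as an OpenTelemetry metric named after its key;
# non-numeric fields are not exported as metrics.
event = {"hostname": "bigip1", "bytes_in": 1024, "cpu": 1.5, "action": "allow"}

metrics = {
    key: value
    for key, value in event.items()
    if isinstance(value, (int, float)) and not isinstance(value, bool)
}
print(metrics)
```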

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "OpenTelemetry_Exporter",
        "host": "192.0.2.1",
        "port": 55681,
        "metricsPath": "/v1/metrics",
        "headers": [
            {
                "name": "x-access-token",
                "value": "YOUR_TOKEN"
            }
        ]
    }
}


Azure Regions

The following table shows example output when listing regions from the Azure CLI using the command az account list-locations -o table. Note that to list Azure Government regions, you must run az cloud set --name AzureUsGovernment before running the list-locations command.

Important

This list is just a static example; we strongly recommend running the commands yourself to retrieve the current list.
The Name column on the right is the value to use in your Telemetry Streaming declaration.

az account list-locations -o table

DisplayName Latitude Longitude Name
East Asia 22.267 114.188 eastasia
Southeast Asia 1.283 103.833 southeastasia
Central US 41.5908 -93.6208 centralus
East US 37.3719 -79.8164 eastus
East US 2 36.6681 -78.3889 eastus2
West US 37.783 -122.417 westus
North Central US 41.8819 -87.6278 northcentralus
South Central US 29.4167 -98.5 southcentralus
North Europe 53.3478 -6.2597 northeurope
West Europe 52.3667 4.9 westeurope
Japan West 34.6939 135.5022 japanwest
Japan East 35.68 139.77 japaneast
Brazil South -23.55 -46.633 brazilsouth
Australia East -33.86 151.2094 australiaeast
Australia Southeast -37.8136 144.9631 australiasoutheast
South India 12.9822 80.1636 southindia
Central India 18.5822 73.9197 centralindia
West India 19.088 72.868 westindia
Canada Central 43.653 -79.383 canadacentral
Canada East 46.817 -71.217 canadaeast
UK South 50.941 -0.799 uksouth
UK West 53.427 -3.084 ukwest
West Central US 40.890 -110.234 westcentralus
West US 2 47.233 -119.852 westus2
Korea Central 37.5665 126.9780 koreacentral
Korea South 35.1796 129.0756 koreasouth
France Central 46.3772 2.3730 francecentral
France South 43.8345 2.1972 francesouth
Australia Central -35.3075 149.1244 australiacentral
Australia Central 2 -35.3075 149.1244 australiacentral2
UAE Central 24.466667 54.366669 uaecentral
UAE North 25.266666 55.316666 uaenorth
South Africa North -25.731340 28.218370 southafricanorth
South Africa West -34.075691 18.843266 southafricawest
Switzerland North 47.451542 8.564572 switzerlandnorth
Switzerland West 46.204391 6.143158 switzerlandwest
Germany North 53.073635 8.806422 germanynorth
Germany West Central 50.110924 8.682127 germanywestcentral
Norway West 58.969975 5.733107 norwaywest
Norway East 59.913868 10.752245 norwayeast

In the following table, we list the Azure Government regions.

az cloud set --name AzureUsGovernment
az account list-locations -o table

DisplayName Latitude Longitude Name
USGov Virginia 37.3719 -79.8164 usgovvirginia
USGov Iowa 41.5908 -93.6208 usgoviowa
USDoD East 36.6676 -78.3875 usdodeast
USDoD Central 41.6005 -93.6091 usdodcentral
USGov Texas 29.4241 -98.4936 usgovtexas
USGov Arizona 33.4484 -112.0740 usgovarizona