Consumer class

Use this section to find example declarations and notes for supported consumers.

Important

Each of the following examples shows only the Consumer class of a declaration and must be included with the rest of the base declaration (see Components of the declaration).
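
For reference, the following is a minimal sketch of how a consumer fits into a complete declaration. It simply combines the basic Splunk consumer from the first example below with the simple system poller used in the Splunk Legacy example; adjust the values for your own environment.

{
    "class": "Telemetry",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60
        }
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Splunk",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 8088,
        "passphrase": {
            "cipherText": "apikey"
        }
    }
}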

Splunk

Required information:
  • Host: The address of the Splunk instance that runs the HTTP event collector (HEC).
  • Protocol: Check whether TLS is enabled within the HEC settings (Settings > Data Inputs > HTTP Event Collector).
  • Port: The default is 8088; this can be configured in the Global Settings section of the Splunk HEC.
  • API Key: An API key must be created and provided in the passphrase object of the declaration; refer to the Splunk documentation for the correct way to create an HEC token.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Splunk",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 8088,
        "passphrase": {
            "cipherText": "apikey"
        }
    }
}

Splunk Legacy format

The format property can be set to legacy for Splunk users who want to convert the stats output to a format similar to that used by the F5 Analytics App for Splunk. For more information, see the F5 Analytics iApp Template documentation. For more information about using the HEC, see the Splunk HTTP Event Collector documentation. See the following example.

Note

To poll for any data involving tmstats, you must have a Splunk consumer with the legacy format as described in this section. This includes GET requests to the SystemPoller API, because tmstats data is not collected unless the declaration contains a legacy-formatted Splunk consumer.

Telemetry Streaming 1.7.0 and later gathers additional data from tmstats tables to improve compatibility with Splunk Legacy consumers.

In Telemetry Streaming v1.6.0 and later, you must use the facility parameter with the legacy format to specify a Splunk facility in your declarations. The facility parameter identifies the location or facility in which the BIG-IP is located (such as ‘Main Data Center’, ‘AWS’, or ‘NYC’).

Required information for facility:
  • The facility parameter must be inside actions, under setTag, as shown in the example.
  • The value for facility is arbitrary, but must be a string.
  • The locations property must include "system": true, as that is where facility is expected.
  • A value for facility is required when the format is legacy; it is required by the Splunk F5 Dashboard application, although a declaration without it will still succeed.

Example Declaration for Legacy (including facility):

{
    "class": "Telemetry",
      "My_System": {
          "class": "Telemetry_System",
          "systemPoller": {
            "interval": 60,
            "actions": [
              {
                "setTag": {
                  "facility": "facilityValue"
                },
                "locations": {
                  "system": true
                }
              }
            ]
          }
      },
      "My_Consumer": {
          "class": "Telemetry_Consumer",
          "type": "Splunk",
          "host": "192.0.2.1",
          "protocol": "https",
          "port": 8088,
          "passphrase": {
              "cipherText": "apikey"
          },
                  "format": "legacy"
      }
  }

Microsoft Azure Log Analytics

Required Information:
  • Workspace ID: Navigate to Log Analytics workspace > Advanced Settings > Connected Sources.
  • Shared Key: Navigate to Log Analytics workspace > Advanced Settings > Connected Sources and use the primary key.

Note

For more information about sending data to Log Analytics, see the HTTP Data Collector API documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "workspaceId": "workspaceid",
        "passphrase": {
            "cipherText": "sharedkey"
        }
    }
}

Example Dashboard:

The following is an example of the Azure dashboard with Telemetry Streaming data. To create a similar dashboard, see Azure dashboard. To create custom views using View Designer, see Microsoft documentation.

[Screenshot: Azure Log Analytics dashboard with Telemetry Streaming data]

AWS CloudWatch

Required information:
  • Region: The AWS region of the CloudWatch resource.
  • Log Group: Navigate to CloudWatch > Logs.
  • Log Stream: Navigate to CloudWatch > Logs > Your_Log_Group_Name.
  • Access Key: Navigate to IAM > Users.
  • Secret Key: Navigate to IAM > Users.

Note

For more information about creating and using IAM roles, see the AWS Identity and Access Management (IAM) documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "AWS_CloudWatch",
        "region": "us-west-1",
        "logGroup": "f5telemetry",
        "logStream": "default",
        "username": "accesskey",
        "passphrase": {
            "cipherText": "secretkey"
        }
    }
}

AWS S3

Required Information:
  • Region: The AWS region of the S3 bucket.
  • Bucket: Navigate to S3 to find the name of the bucket.
  • Access Key: Navigate to IAM > Users.
  • Secret Key: Navigate to IAM > Users.

Note

For more information about creating and using IAM roles, see the AWS Identity and Access Management (IAM) documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "AWS_S3",
        "region": "us-west-1",
        "bucket": "bucketname",
        "username": "accesskey",
        "passphrase": {
            "cipherText": "secretkey"
        }
    }
}

Graphite

Required Information:
  • Host: The address of the Graphite system.
  • Protocol: See the Graphite documentation for configuration details.
  • Port: See the Graphite documentation for configuration details.

Note

For more information about installing Graphite, see the Installing Graphite documentation. For more information about Graphite events, see the Graphite Events documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Graphite",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 443
    }
}

Kafka

Required Information:
  • Host: The address of the Kafka system.
  • Port: The port of the Kafka system.
  • Topic: The topic where data should go within the Kafka system.
  • Protocol: The protocol of the Kafka system. Options: binaryTcp or binaryTcpTls. Default is binaryTcpTls.
  • Authentication Protocol: The protocol to use for the authentication process. Options: SASL-PLAIN or None. Default is None.
  • Username: The username to use for the authentication process.
  • Password: The password to use for the authentication process.

Note

For more information about installing Kafka, see the Installing Kafka documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Kafka",
        "host": "192.0.2.1",
        "protocol": "binaryTcpTls",
        "port": 9092,
        "topic": "f5-telemetry"
    },
    "My_Consumer_SASL_PLAIN_auth": {
        "class": "Telemetry_Consumer",
        "type": "Kafka",
        "host": "192.0.2.1",
        "protocol": "binaryTcpTls",
        "port": 9092,
        "topic": "f5-telemetry",
        "authenticationProtocol": "SASL-PLAIN",
        "username": "username",
        "passphrase": {
        	"cipherText": "passphrase"
        }
    }
}

ElasticSearch

Required Information:
  • Host: The address of the ElasticSearch system.
  • Index: The index where data should go within the ElasticSearch system.
Optional Parameters (the second example below shows several of these):
  • Port: The port of the ElasticSearch system. Default is 9200.
  • Protocol: The protocol of the ElasticSearch system. Options: http or https. Default is http.
  • Allow Self Signed Cert: Allows Telemetry Streaming to skip certificate validation. Options: true or false. Default is false.
  • Path: The path to use when sending data to the ElasticSearch system.
  • Data Type: The type of data posted to the ElasticSearch system. Default is f5.telemetry.
  • API Version: The API version of the ElasticSearch system.
  • Username: The username to use when sending data to the ElasticSearch system.
  • Passphrase: The secret/password to use when sending data to the ElasticSearch system.

Note

For more information about installing ElasticSearch, see the Installing ElasticSearch documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "ElasticSearch",
        "host": "10.145.92.42",
        "index": "testindex",
        "port": 9200,
        "protocol": "https",
        "dataType": "f5.telemetry"
    }
}
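
The example above includes only a few of the optional parameters. The following sketch also supplies authentication and certificate options. The property names for these optional parameters (allowSelfSignedCert, apiVersion, username, and passphrase) are assumptions based on the camelCase convention used elsewhere in this section; verify them against the Telemetry Streaming schema reference before use.

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "ElasticSearch",
        "host": "10.145.92.42",
        "index": "testindex",
        "port": 9200,
        "protocol": "https",
        "allowSelfSignedCert": false,
        "apiVersion": "6.5",
        "dataType": "f5.telemetry",
        "username": "username",
        "passphrase": {
            "cipherText": "secretkey"
        }
    }
}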

Sumo Logic

Required Information:
  • Host: The address of the Sumo Logic collector.
  • Protocol: The protocol of the Sumo Logic collector.
  • Port: The port of the Sumo Logic collector.
  • Path: The HTTP path of the Sumo Logic collector (without the secret).
  • Secret: The protected portion of the HTTP path (the final portion of the path, sometimes called a system tenant).

Note

For more information about installing Sumo Logic, see the Installing Sumo Logic documentation.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Sumo_Logic",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 443,
        "path": "/receiver/v1/http/",
        "passphrase": {
            "cipherText": "secret"
        }
    }
}

StatsD

Required Information:
  • Host: The address of the StatsD instance.
  • Protocol: The protocol of the StatsD instance. The default is UDP.
  • Port: The port of the StatsD instance.

Note

For more information about installing StatsD, see the StatsD documentation on GitHub.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Statsd",
        "host": "192.0.2.1",
        "protocol": "udp",
        "port": 8125
    }
}

Generic HTTP

Required Information:
  • Host: The address of the system.
  • Protocol: The protocol of the system. Options: https or http. Default is https.
  • Port: The port of the system. Default is 443.
  • Path: The path of the system. Default is /.
  • Method: The method of the system. Options: POST, PUT, GET. Default is POST.
  • Headers: The headers of the system.
  • Passphrase: The secret to use when sending data to the system, for example an API key to be used in an HTTP header.

Note

Because this consumer is designed to be generic and flexible, how authentication is performed is left up to the web service. To ensure that secrets are encrypted within Telemetry Streaming, note the use of JSON pointers. The secret to protect should be stored inside passphrase and referenced in the desired destination property, such as an API token in a header as shown in this example.

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 443,
        "path": "/",
        "method": "POST",
        "headers": [
            {
                "name": "content-type",
                "value": "application/json"
            },
            {
                "name": "x-api-key",
                "value": "`>@/passphrase`"
            }
        ],
        "passphrase": {
            "cipherText": "apikey"
        }
    }
}

Note

If multiple secrets are required, you can define an additional secret within Shared and reference it using pointers. For more details about pointers, see the section on Pointer Syntax.

Example with multiple passphrases:

{
    "class": "Telemetry",
    "Shared": {
        "class": "Shared",
        "secretPath": {
            "class": "Secret",
            "cipherText": "/?token=secret"
        }
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 443,
        "path": "`>/Shared/secretPath`",
        "method": "POST",
        "headers": [
            {
                "name": "content-type",
                "value": "application/json"
            },
            {
                "name": "x-api-key",
                "value": "`>@/passphrase`"
            }
        ],
        "passphrase": {
            "cipherText": "apikey"
        }
    }
}

Fluentd

Required Information:
  • Host: The address of the system.
  • Protocol: The protocol of the system. Options: https or http. Default is https.
  • Port: The port of the system. Default is 9880.
  • Path: The path of the system. This parameter corresponds to the tag of the event being sent to Fluentd (see the Fluentd documentation for more information).
  • Method: The method of the system. This must be POST.
  • Headers: The headers of the system. Important: The content-type = application/json header, as shown in the example, is required.

Example Declaration:

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "192.0.2.1",
        "protocol": "https",
        "port": 9880,
        "path": "/fluentd.tag",
        "method": "POST",
        "headers": [
            {
                "name": "content-type",
                "value": "application/json"
            }
        ]
    }
}

Google StackDriver

Required Information:
  • projectId: The ID of the GCP project.
  • serviceEmail: The email for the Google Service Account. To check if you have an existing Service Account, from the left menu of GCP, select IAM & admin, and then click Service Accounts. If you do not have a Service Account, you must create one.
  • privateKeyId: The ID of the private key that the user created for the Service Account. If you do not have a key, from the Service Account page, click Create Key with a key type of JSON; the private key is in the file that is created.
  • privateKey: The private key given to the user when a private key was added to the service account.

For complete information on deploying StackDriver, see the StackDriver documentation.

Finding the Data
Once you have configured the StackDriver consumer and sent a Telemetry Streaming declaration, Telemetry Streaming creates custom MetricDescriptors to which it sends metrics. These metrics can be found under a path such as custom/system/cpu. To make it easier to find data that is relevant to a specific device, Telemetry Streaming uses the Generic Node resource type and assigns the machine ID to the node_id label to identify which device the data came from.

Important

There is a quota of 500 custom MetricDescriptors for StackDriver Monitoring. Telemetry Streaming creates these MetricDescriptors, and if this quota is ever reached, you must delete some of these MetricDescriptors.

{
    "class": "Telemetry",
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Google_StackDriver",
        "privateKey": {
            "cipherText": "yourPrivateKey"
        },
        "projectId": "yourGoogleStackDriverProjectId",
        "privateKeyId": "yourPrivateKeyId",
        "serviceEmail": "yourServiceEmail"
    }
}