LLM Insights

Overview

Large Language Model (LLM) Insights bring natural language intelligence to F5 Insight, enabling you to query your BIG-IP configurations and logs conversationally. Instead of manually searching through configurations or parsing log files, you can ask questions like “Why is pool member X marked down?” or “Show me all virtual IPs (VIPs) with SSL offloading enabled” and receive clear, contextualized answers immediately.

LLM Insights also generates operational narratives and provides actionable recommendations for configuration best practices, security hardening, performance optimization, and troubleshooting next steps based on your environment’s data, all informed by 30 years of F5 application delivery and security expertise. This operational intelligence helps teams streamline and improve their workflows.

Benefits and Key Features

  • Reduced Mean Time to Resolution - Get answers to complex configuration and log questions in seconds rather than manually correlating data across multiple BIG-IP devices.
  • Simplified Management - Enable less experienced administrators to effectively troubleshoot and manage F5 infrastructure using natural language.
  • Actionable Recommendations - Receive context-aware suggestions for improving security posture, optimizing performance, and resolving issues based on your actual configurations and telemetry data.

Prerequisites

  • API Credentials - Obtain an API key or token from your chosen LLM provider. Currently supported providers are OpenAI, Anthropic, and local/self-hosted LLMs.
  • Network Connectivity - Ensure F5 Insight can reach your LLM provider’s API endpoint over outbound HTTPS and that any required firewall rules are in place.
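When drafting firewall rules for the outbound HTTPS requirement above, the host and port to allow can be derived directly from the provider’s API URL. A minimal sketch (the helper name is hypothetical, not part of F5 Insight):

```python
from urllib.parse import urlparse

def endpoint_for_firewall(api_url: str):
    """Return the (host, port) that outbound firewall rules must allow."""
    parsed = urlparse(api_url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    # Default to the standard port when the URL does not specify one.
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    return parsed.hostname, port

print(endpoint_for_firewall("https://api.openai.com/v1"))          # ('api.openai.com', 443)
print(endpoint_for_firewall("http://acme.example.com:11434/v1"))   # ('acme.example.com', 11434)
```

Note that a local LLM such as Ollama typically listens on a non-standard port (11434 by default), which the rule must allow explicitly.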

Data Privacy Consideration

When using external LLM providers (OpenAI, Anthropic, and so on), configuration data and log excerpts are sent to third-party APIs for processing. F5 recommends reviewing your organization’s data handling policies and your LLM provider’s data retention practices before enabling this feature. For environments with strict data residency requirements, consider using a locally hosted LLM.

Configuration Steps

  1. In the left toolbar, navigate to the Manage section
  2. Select LLM Insights
  3. Click Enable LLM Insights
  4. Choose your LLM provider: OpenAI, Anthropic, or Local LLM
  5. Enter the API URL and API Key or Token for your selected provider
  6. Click Test Connection
  7. Save the configuration for your chosen LLM
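A connection test against an OpenAI-compatible provider is typically a lightweight authenticated call to the model-listing endpoint. The sketch below shows roughly what such a probe looks like; it builds the request without sending it, and the `/models` path and placeholder key are assumptions about the provider’s API, not F5 Insight internals:

```python
import urllib.request

def build_test_request(api_url: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated GET to the provider's
    model-listing endpoint, a common connectivity probe."""
    url = api_url.rstrip("/") + "/models"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

req = build_test_request("https://api.openai.com/v1", "YOUR_API_KEY")
print(req.full_url)                     # https://api.openai.com/v1/models
print(req.get_header("Authorization"))  # Bearer YOUR_API_KEY
```

Sending the request with `urllib.request.urlopen(req)` and receiving an HTTP 200 would confirm both network reachability and that the credential is accepted.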

Provider Configuration Examples

  • OpenAI
    API URL: https://api.openai.com/v1
    API Key: Obtain from platform.openai.com/api-keys
  • Anthropic
    API URL: https://api.anthropic.com
    API Key: Obtain from console.anthropic.com
  • Local LLM
    API URL: Varies by deployment (for example, http://acme.example.com:11434/v1 for Ollama)
    API Key: Consult your local LLM documentation

For other LLM providers, consult the provider’s documentation for the correct API URL and key generation process. Any OpenAI-compatible API endpoint should work with the Local LLM option.
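“OpenAI-compatible” means the endpoint accepts the standard chat-completions request shape. The sketch below assembles such a request body for a local Ollama deployment; the base URL and model name ("llama3") are assumptions about one possible setup, so substitute your own values:

```python
import json

# Assumed Ollama endpoint; any OpenAI-compatible base URL works the same way.
base_url = "http://acme.example.com:11434/v1"
url = base_url.rstrip("/") + "/chat/completions"

# Standard OpenAI-style chat-completions body.
payload = {
    "model": "llama3",  # assumed model name; depends on your deployment
    "messages": [
        {"role": "user",
         "content": "Why is pool member 10.1.1.50 marked down?"},
    ],
}
body = json.dumps(payload)
print(url)  # http://acme.example.com:11434/v1/chat/completions
```

If a POST of this body to the endpoint returns a valid completion, the server should work with the Local LLM option.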

Example Queries

Once enabled, you can ask LLM Insights questions in natural language, and it responds in kind. Example queries include:

  • “Why is pool member 10.1.1.50 marked down?”
  • “Show me all virtual servers with SSL offloading enabled.”
  • “What configuration changes occurred in the last 24 hours?”
  • “Which pools have no healthy members?”