Install BIG-IP Next for Kubernetes¶
To install an instance of BIG-IP Next for Kubernetes, apply both an SPKInfrastructure CR and an SPKInstance CR. The SPKInfrastructure CR defines the networking specifications, while the SPKInstance CR installs BIG-IP Next for Kubernetes and allows the F5 Orchestrator to access the Docker images and Helm charts for F5 BIG-IP Next for Kubernetes through FAR. By default, both the SPKInfrastructure CR and the SPKInstance CR are applied in the `default` namespace. The Orchestrator brings up the instance pods in the `default` namespace and the shared components in the `f5-utils` namespace, as described below:
- In the `default` namespace (the namespace in which the Infrastructure and Instance CRs are applied):
  - F5Ingress
  - TMM
  - AFM
  - OpenTelemetry Collector (otel-collector)
- In the `f5-utils` namespace:
  - F5TodaFluentd
  - CWC
  - RabbitMQ
  - CRD conversion
  - CSRC
  - DSSM
If you want to run multiple instances of BIG-IP Next for Kubernetes, create a namespace and apply the SPKInstance and SPKInfrastructure CRs in that namespace, as sketched below. Pods such as F5Ingress, TMM, AFM, and OTEL are created in the newly created namespace, while shared components such as RabbitMQ, DSSM, and so on, are created in the `f5-utils` namespace.
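For example, an additional instance might be brought up in its own namespace as follows. This is only a sketch: the namespace name `spk-2` is hypothetical, and the two YAML files are the ones created in the steps below.

```bash
# Hypothetical additional instance in its own namespace
kubectl create namespace spk-2
kubectl apply -f spkinfrastructure-resource.yaml -n spk-2
kubectl apply -f spkinstance-resource.yaml -n spk-2
```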
To install F5 BIG-IP Next for Kubernetes successfully, complete the following steps (a few optional pre-checks are sketched after this list):
- Install the Orchestrator
- Create a StorageClass
- Install the SRIOV CNI Plugin
- Create Multus Network Attachment Definitions
- Apply SPKInfrastructure CR
- Apply SPKInstance CR
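Before applying the CRs, you can optionally confirm that the prerequisite objects from the steps above exist. This is only a sketch; it assumes the StorageClass and the Multus NetworkAttachmentDefinitions were created as described in the referenced sections.

```bash
# StorageClass intended for CWC/fluentd persistence
kubectl get storageclass

# Multus NetworkAttachmentDefinitions referenced later by the SPKInfrastructure CR
kubectl get network-attachment-definitions -A
```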
SPKInfrastructure CR¶
For information on the SPKInfrastructure `spec` parameters featured in this example, or for a comprehensive list of available parameters, see SPKInfrastructure CR Parameters.
Prerequisites:¶
- Make sure to provide `spec.egress.json.ipPoolCidrInfo`; see Retrieve the CIDR Values from the Cluster. A sketch for looking up the CIDR values follows this list.
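The following commands are one possible way to look up the pod and node CIDR values before filling in the CR. This is only a sketch (the IP pool command assumes Calico is the cluster CNI); the authoritative procedure is in Retrieve the CIDR Values from the Cluster.

```bash
# Pod CIDRs assigned to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'

# Node addresses, useful for the node CIDR entries
kubectl get nodes -o wide

# Calico IP pools (assumes Calico is installed)
kubectl get ippools -o yaml
```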
To apply the SPKInfrastructure CR:

Create a file named `spkinfrastructure-resource.yaml` with the following content:

Note: Make sure to update the `cidr` values in the `egress` section as per the cluster configuration.
```yaml
apiVersion: charts.k8s.f5net.com/v1alpha1
kind: SPKInfrastructure
metadata:
  labels:
    app.kubernetes.io/name: spkinfrastructure
    app.kubernetes.io/instance: spkinfrastructure-sample
    app.kubernetes.io/part-of: orchestrator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: orchestrator
  name: spk-19-infrastructure
spec:
  networkAttachment:
    - name: default/sf-external
    - name: default/sf-internal
  platformType: other
  hugepages: true
  sriovResources:
    nvidia.com/bf3_p0_sf: "1"
    nvidia.com/bf3_p1_sf: "1"
  wholeClusterMode: "enabled"
  calicoRouter: "default"
  egress:
    json:
      ipPoolCidrInfo:
        cidrList:
          - name: vlan_cidr
            value: "15.15.0.0/16"
          - name: vlan_ipv6_cidr
            value: "fa00::10:10:3:0/112"
          - name: node_cidr_ipv4
            value: "10.144.39.0/24"
          - name: node_cidr_ipv6
            value: "2620:128:e008:4013::2:0/112"
          - name: test_cidr
            value: ""
        ipPoolList:
          - name: pod_cidr_ipv4
            value: "10.244.0.0/16"
          - name: pod_cidr_ipv6
            value: "fd00:10:244::/48"
          - name: test_pod_cidr
            value: ""
```
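Optionally, you can validate the manifest against the cluster before applying it. This is a sketch; a server-side dry run requires the SPKInfrastructure CRD to already be registered by the Orchestrator.

```bash
kubectl apply -f spkinfrastructure-resource.yaml -n default --dry-run=server
```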
Apply the `spkinfrastructure-resource.yaml` in the `default` namespace or a namespace of your choice, but you should create a `far-secret` in the same namespace (one way to create it is sketched after this step).

```bash
kubectl apply -f spkinfrastructure-resource.yaml -n default
```
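If the `far-secret` does not yet exist in the target namespace, one way to create it is as a docker-registry pull secret. This is a sketch with hypothetical values; substitute your registry address and credentials.

```bash
kubectl create secret docker-registry far-secret \
  --docker-server=<registry-address> \
  --docker-username=<username> \
  --docker-password=<password> \
  -n default
```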
SPKInstance CR¶
For information on the SPKInstance `spec` parameters featured in this example, or for a comprehensive list of available parameters, see SPKInstance CR Parameters.
Prerequisites:¶
- To enable dynamic routing, set `spec.tmm.dynamicRouting.enabled` to `true`; see ZebOS ConfigMaps.
- To enable the TLS Store, set `spec.tmm.tlsStore.enabled` to `true`; see the Managing certs and keys section in F5SPKIngressHTTP2.
- To configure persistence for CWC, set `spec.cwc.persistence.enabled` to `true` and reference the `storageClass` name in the SPKInstance CR. To create a storageClass, see Create a Storage Class.
- If you choose to use your local registry, make sure to update the `imageRepository` parameter in `spkinstance-resource.yaml` and the `repository` parameter in `orchestrator-values.yaml` (see F5 Orchestrator) with your registry path.
- Make sure to update the `imagePullSecrets` to download artifacts from the registry.
- Make sure to update `spec.global.certmgr.issuerRef.name` with the actual ClusterIssuer name; see Configure Cert Manager. (A quick check for this and the StorageClass is sketched after this list.)
- Make sure to create the CWC QKView ConfigMap; see Create CWC QKView ConfigMap.
- Make sure to update `spec.cwc.cpclConfig.jwt` with the actual JWT.
- To configure persistence for fluentd, set `spec.fluentd.persistence.enabled` to `true` and reference the `storageClass` name in the SPKInstance CR. To create a storageClass, see Create a Storage Class.
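A few of these prerequisites can be checked quickly from the command line. This is only a sketch; it assumes cert-manager and the StorageClass have already been set up as described in the referenced sections.

```bash
# ClusterIssuer referenced by spec.global.certmgr.issuerRef.name
kubectl get clusterissuer

# StorageClass referenced for CWC and fluentd persistence
kubectl get storageclass
```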
Apply SPKInstance CR¶
Create application namespaces, for example `app-ns`.

Note: Make sure to update the `spec.controller.watchNamespace` parameter value to the created application namespace.

```bash
kubectl create namespace app-ns
```
Create a file named `spkinstance-resource.yaml` with the following contents:

```yaml
apiVersion: charts.k8s.f5net.com/v1alpha1
kind: SPKInstance
metadata:
  name: spkinstance
spec:
  global:
    certmgr:
      issuerRef:
        name: arm-ca-cluster-issuer
        kind: ClusterIssuer
        group: cert-manager.io
    imageRepository: repo.f5.com/images
    imagePullSecrets:
      - name: far-secret
    logging:
      logLevel: "Info"
    fluentbitSidecar:
      enabled: true
      logLevel: info
    fluentd:
      host: 'f5-toda-fluentd.f5-utils.svc.cluster.local'
      port: "54321"
    debugging:
      csmQkview:
        enabled: true
      csmOrchestrator:
        enabled: true
    prometheus:
      enabled: true
  tmm:
    replicaCount: 1
    nodeAssign:
      nodeSelector:
        app: f5-tmm
    tolerations: []
    affinity: {}
    egress:
      useSnatpools: false
      dnsNat46Enabled: false
      dnsNat46UpstreamDnsIP: ""
      dnsNat46Ipv4Subnet: ""
      dnsNat46SorryIP: ""
      dnsCacheName: ""
    tlsStore:
      enabled: false
    service:
      create: true
      name: f5-tmm-service
      type: LoadBalancer
      externalTrafficPolicy: Local
      annotations: {}
      externalIPs: []
      customPorts: []
    sessiondb:
      useExternalStorage: "false"
    resources:
      limits:
        cpu: "1"
        hugepages-2Mi: "3Gi"
        memory: "2Gi"
    palCPUSet: "1"
    usePhysMem: true
    tmmMapresHugepages: 1024
    tmmMapresHalt: false
    tmmMTU: 1500
    xnetDPDKAllow:
      - auxiliary:mlx5_core.sf.4,dv_flow_en=2
      - auxiliary:mlx5_core.sf.5,dv_flow_en=2
    dynamicRouting:
      enabled: true
      exportZebosLogs: true
      configMapName: spk-bgp
    blobd:
      enabled: true
      resources:
        limits:
          cpu: "1"
          memory: "4Gi"
        requests:
          cpu: "1"
          memory: "4Gi"
    debug:
      enabled: true
      resources:
        limits:
          cpu: "500m"
          memory: "1Gi"
        requests:
          cpu: "500m"
          memory: "1Gi"
    tmrouted:
      resources:
        limits:
          cpu: "300m"
          memory: "512Mi"
        requests:
          cpu: "300m"
          memory: "512Mi"
    tmmRouting:
      resources:
        limits:
          cpu: "700m"
          memory: "1Gi"
        requests:
          cpu: "700m"
          memory: "1Gi"
  controller:
    watchNamespace: "app-ns"
    egress:
      snatpoolName: "egress_snatpool"
  fluentd:
    persistence:
      enabled: true
      accessMode:
        - ReadWriteOnce
      size: 3Gi
    component:
      cm_logs:
        enabled: true
        stdout: true
      crdconversion_logs:
        enabled: true
        stdout: true
      cwc_logs:
        enabled: true
        stdout: true
      dnsx_logs:
        enabled: true
        stdout: true
      downloader_logs:
        enabled: true
        stdout: true
      dssm_logs:
        enabled: true
        stdout: true
      dssm_sentinel_logs:
        enabled: true
        stdout: true
      f5_csm_qkview:
        enabled: true
      cert_manager:
        enabled: true
      f5_fqdn_resolver_logs:
        enabled: true
        stdout: true
      f5ingress_logs:
        enabled: true
        stdout: true
      spk_csrc_logs:
        enabled: true
        stdout: true
      rabbitmq_logs:
        enabled: true
        stdout: true
      pccd_logs:
        enabled: true
        stdout: true
  cwc:
    persistence:
      enabled: true
      accessMode:
        - "ReadWriteOnce"
      size: "3Gi"
    cpclConfig:
      operationMode: "disconnected"
      jwt: ""
  afm:
    enabled: true
    pccd:
      enabled: true
    blob:
      maxFwBlobSizeMb: "512"
      maxNatBlobSizeMb: "512"
    sidecar:
      resources:
        limits:
          cpu: 700m
          memory: 1Gi
        requests:
          cpu: 700m
          memory: 1Gi
  tmstats:
    tmstatsConfig:
      resources:
        limits:
          cpu: 700m
          memory: 1Gi
        requests:
          cpu: 700m
          memory: 1Gi
  spkManifest: spk-19-manifest
  spkInfrastructure: spk-infrastructure
```
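If you prefer to script the JWT injection rather than editing the file by hand, one option is the mikefarah yq tool. This is only a sketch: yq is an assumption (any YAML editor works), and the `jwt.txt` file name is hypothetical.

```bash
# Write the JWT from a local file into spec.cwc.cpclConfig.jwt
JWT="$(cat jwt.txt)" yq -i '.spec.cwc.cpclConfig.jwt = strenv(JWT)' spkinstance-resource.yaml
```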
Note: If the `spec.afm.enabled` value is set to `true`, then you must set the `spec.afm.pccd.enabled` value to `true`.

Apply the `spkinstance-resource.yaml` in the `default` namespace or a namespace of your choosing, but you should create a `far-secret` in the same namespace.

Note: If `spec.cwc.cpclConfig.operationMode` is set to `disconnected`, then to activate the license, see the License the cluster in Disconnected Mode section in BIG-IP Next for Kubernetes Licensing.

```bash
kubectl apply -f spkinstance-resource.yaml -n default
```
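After applying the CR, you can watch the pods come up. As described earlier on this page, the instance pods (F5Ingress, TMM, AFM, OTEL) run in the namespace where the CRs were applied, and the shared components (CWC, RabbitMQ, DSSM, fluentd, and so on) run in the `f5-utils` namespace.

```bash
kubectl get pods -n default
kubectl get pods -n f5-utils
```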
Verify the PVC:

```bash
kubectl get pvc -n f5-utils
```
Sample Response:

```
NAME                      STATUS   VOLUME                       CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
cluster-wide-controller   Bound    cluster-wide-controller-pv   3Gi        RWO                           <unset>                 12m
```
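You can also check the applied CRs themselves. This is a sketch; it assumes the CRD plural names follow the usual Kubernetes convention (spkinfrastructures and spkinstances).

```bash
kubectl get spkinfrastructures -n default
kubectl get spkinstances -n default
```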
Next Steps: