Workflow user guide

Workflows are automation process algorithms. They describe the flow of automation by determining which tasks to run and when to run them. A task is an operation (implemented by a plugin) or another action, such as running arbitrary code. Workflows are written in Python, using a dedicated framework and APIs.

In this guide, you will learn how to execute workflows, which workflows apply to each deployment type, and how to run workflows for common scenarios.

Execute workflows

For the main solution blueprint you deployed, you must run the Install workflow. Once your blueprint installs, you will see multiple deployments created automatically. The list of workflows changes to display only the workflows that you can run for each deployment type.

  1. Click Deployments, and next to the main Gilan deployment, click the menu icon.
  2. Select the Install workflow, and then click Execute. Once your install workflow completes, you will see a list of auto-created deployments, each with its own set of applicable workflows. The following table describes all the available VNFM workflows:
Workflow Used for
Generate report Creating a resource-throughput usage report to send to F5 for billing purposes.
Heal Heals failing DAG instances, slave instances, and entire layers (but never master instances) by creating a new copy of the reported, dysfunctional instance or layer.
Install Installing the target deployment, and performing lifecycle operations on instances (for example, create, configure, and start).
Uninstall Uninstalling the target deployment, freeing allocated resources, and performing uninstall lifecycle operations (for example, stop and delete). This workflow also removes deployments and blueprints created during the install workflow. Parameters include ignore_failure, which allows the workflow to continue past a failed lifecycle operation.
Purge Layer Uninstalling and removing dysfunctional VNF layer instances. Start this workflow manually, after the heal layer workflow runs and the problem investigation is finished. Parameters include ignore_failure, passed to the lifecycle uninstall process (see the Uninstall workflow).
Purge VE Uninstalling and removing dysfunctional VNF VE instances, and related objects. Start this workflow manually, after the heal layer workflow runs and the problem investigation is finished. Parameters include ignore_failure, passed to the lifecycle uninstall process (see the Uninstall workflow).
Scale In Group

Removing and uninstalling DAG group VE instances and VNF group layer instances. This workflow finds instances to remove (based on parameters), then uninstalls and removes all specified instances and all related instances. Parameters include:

  • instance_ids - JSON-encoded list of strings. Each string is the ID of an instance to uninstall and remove, together with its related instances.
  • deployment_ids - DAG VE and VNF layer deployment IDs to remove.
  • ignore_failure - Boolean value passed to the lifecycle uninstall process (see the Uninstall workflow). If true, a failed task during the uninstall of an instance is ignored, and the execution continues.
  • ignore_not_found - Boolean value passed to the lifecycle uninstall process (see the Uninstall workflow). If true, the execution ignores instance IDs and deployment IDs that it cannot find.

A failed execution may have already partially removed some resources from external systems and removed instances. In this case, execute the workflow again with the ignore_failure flag set to true (see the example below). You must then check the VIM and other external systems for leftover, reserved resources.
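
The following is a minimal, illustrative example of these parameters in JSON format. The instance and deployment IDs shown are hypothetical placeholders, not values from a real deployment; parameters are typically entered in separate text boxes in the Execute workflow popup window:

  {
    "instance_ids": ["gilan_vnf_group_abc123_vnf_ve_def456"],
    "deployment_ids": ["gilan_vnf_group_abc123_vnf_layer_ghi789"],
    "ignore_failure": true,
    "ignore_not_found": false
  }

These values are for illustration only; substitute the IDs from your own deployment.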

Scale Out Group Adding DAG group VE instances and VNF group layer instances. Parameters include add_instances, which is the number of new VE instances to create. If a failure occurs due to scaling limits, the service is not affected. Other failed executions can result from resources already being reserved to create instances. In this case, remove the failed instance by running the Gilan scale in group workflow (gilan_scale_in_group) and providing the ID of the failed instance.
Scale Out Layer Creating and installing new slave nodes in the VNF layer. Parameters include add_instances, which is the number of managed, VNF slave VEs to add to the target deployment. If a failure occurs due to scaling limits, the service is not affected. Other failed executions can result from resources already being reserved to create instances. In this case, remove the failed instance by running the Gilan scale in layer workflow and providing the ID of the failed instance.
Update AS3 Updating the AS3 declaration pushed to the VE as part of the NSD definition. Run this workflow after editing your AS3 declaration and uploading your changed, main-solution blueprint inputs file.
Upgrade DAG Group Upgrade process enabling the rollout of a new version of BIG-IP in the DAG group. Creates a new DAG group of VEs using the new software reference data, then selects the older VEs (those with a lower revision value) and disables them. Parameters include instance count, which is the number of instances (DAG VEs) to upgrade.
Upgrade VNF Group Upgrade process enabling the rollout of a new version of BIG-IP in the VNF group. Creates a new VNF layer using the new software reference data, maintaining the same number of slave VEs as in the selected layer. This workflow also disables older layers (those with lower revision values). Parameters include the layer deployment ID, identifying the layer selected for upgrade.
Upgrade Group Start

Starting the upgrade process and setting new software reference data for both the DAG group and the VNF group. You must provide the revision number and software reference details (image ID, flavor) for the hypervisor. The revision number is used during the VNF layer upgrade process. Parameters include a JSON-encoded dictionary containing the definition of the new software:

  • image - image for the new BIG-IP (for example, BIGIP-13.1.0.7-0.0.1.ALL_1SLOT)
  • flavor - flavor for the new BIG-IP (for example, f5.cloudify_small)
  • revision - revision number that is incremented with every upgrade. Instances with a revision lower than that of the upgrade image provided are considered to be running the old version of the software.

Example: {"data":{"image":"BIGIP-13.1.0.7-0.0.1.ALL_1SLOT","flavor":"f5.cloudify_small"}}

Upgrade Group Finish Finishing the upgrade process for both the DAG group and the VNF group, using the new software reference data when installing scaled and healed VEs, as well as during other normal operations.
Configuration update Part of the install, upgrade, and heal workflows that updates the DAG pool member configuration. Currently, this workflow is not run manually.
Admin state disable Manually run this workflow on a VNF layer you want to eventually deprecate. This workflow stops new traffic connections (bleeds traffic) on a layer and diverts that traffic to other layers in the data center. For example, you run this workflow in conjunction with the Scale Out and Scale In workflows when moving service layers from one data center to another.
Update member

Updates the DAG pool membership of a slave during a heal workflow. Parameters include the following (see the example after this list):

  • adminState - the admin state value of the member being updated.
  • enable - the enable value for the member being updated.
  • servicePort - the service port assigned to the member being updated.
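
The following is a minimal, illustrative example of the Update member parameters in JSON format. The admin state, enable, and service port values shown are assumptions for illustration only, not values taken from this guide:

  {
    "adminState": "enable",
    "enable": true,
    "servicePort": 443
  }

Substitute the values appropriate for the member being updated.
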
Deregister Manually run this workflow on a slave node to remove it from the DAG group and manually fail over the traffic to another DAG group. This workflow is an automated process in the Heal and Upgrade workflows.
  1. You can cancel deployments at any time by clicking the X in the popup workflow notification.
  2. Learn more about which workflows you run for specific scenarios.

Workflow-deployment matrix

Use the following table to help you decide which workflow to run on which deployment for specific use-case scenarios:

Deployments -> Execute Workflow

Use Workflow Select Deployment with Blueprint Use Case
Install Run this workflow on ALL deployments To install Gi-LAN, Firewall, or base solution blueprints on your target virtualization infrastructure management (VIM) resource.
Scale out layer vnf_layer, vnf_ve_slave To reach the throughput capacity that you purchased, scale out your vnf layer and install slave members (VEs) to that VNF layer.
Scale out group vnf_layer, vnf_group To reach the throughput capacity that you purchased, scale out your DAG or VNF group and install group members and instances; add VEs to the DAG group and/or add layers to the VNF group.
Scale in group vnf_layer, vnf_group To meet the scaling parameters you specified, scale in (remove > uninstall > delete) all VEs and related instances from your DAG group, or layers and related instances from your VNF group.
Uninstall Run this workflow on ALL deployments To uninstall Gi-LAN, Firewall, or base solution blueprints on your target VIM, releasing all resources from that VIM.
Generate Report Run on base/main deployment/blueprint (Gi-LAN or Gi-Firewall) only. To generate the VNFM billing (utility usage) report to send to F5 Networks.
Upgrade Start > Upgrade > Upgrade Finish vnf_group To upgrade BIG-IPs for a VNF group: start the upgrade using the new BIG-IP image, upgrade the group (bleeding traffic from the older group being upgraded), and then finish the upgrade, bringing down the older VNF group.
Admin state disable VNF_layer To bleed new traffic from a layer you plan to eventually scale in.
Purge VNF_layer To uninstall and remove dysfunctional VNF VE instances, and related objects. Start this workflow manually, after the heal layer workflow runs and the problem investigation is finished.
Deregister VNF_slave To remove a slave node from a DAG or VNF group, and manually fail over traffic to another group.
Configuration Update VNF member To update the AS3 configuration of a DAG pool member.
Update Member VNFD_VNF To update the DAG pool membership of a slave during a heal workflow.
Update AS3 VNF_NSD To update the AS3 declaration pushed to the VE as part of the NSD definition, after updating the AS3 declaration for your main blueprint.

Workflow scenarios

The following scenarios are examples of manually running specific workflows:

Moving a layer to a new data center

In this example, a telco has two service areas (SA-1 and SA-2). Service demand has dictated that SA-2 requires more capacity than SA-1. The telco has purchased two 10GB layers in each SA (layer 1 and layer 2), for a total of four layers. Therefore, to increase capacity in SA-2, first stand up a new (temporary) layer in SA-2, and then bring down a service layer in SA-1. F5 VNFM enables customers to have more layers than originally purchased for a 48-hour grace period. Therefore, this capacity move to a new SA must occur within 48 hours to avoid being charged for the additional, fifth (temporary) layer in SA-2.

To move a layer from SA-1 to SA-2, manually run the following workflows:

  • Scale Out (SA-2)
  • Admin state disable (SA-1)
  • Scale In (SA-1)
  1. In VNF Manager for the SA-2 where capacity requirements increase, click the Deployments blade, click the VNF Group deployment (for example, vnf_group), at the top of the window click Execute workflow, and then select Scale Out from the list.

  2. On the Execute workflow scale_out popup window, in the add_instances text box, enter the number of new instances you want to add to this group. Default value is 1.

    The scale out workflow stands up a new (fifth) layer in SA-2 where capacity demand has increased.

    _images/scale-out-move.png
  3. You now have five layers. Within the 48-hour grace period, before bringing down a layer in SA-1, click the Deployments blade, click the VNF_layer deployment to bring down (layer 2), at the top of the window click Execute workflow, and then select Admin state disable from the list.

    This workflow bleeds new traffic from the VNF layer 2, preventing all new traffic connections, and diverts all new traffic to layer 1 in SA-1. Keep this layer in Admin state disable for approximately 40 hours, allowing all existing connections on layer 2 to finish before bringing down that layer.

    _images/ASD-move.png
  4. Near the 48-hour threshold, in SA-1, click the Deployments blade, click the VNF_layer deployment to bring down (layer 2), at the top of the window click Execute workflow, and then select Scale In from the list.

  5. On the Execute workflow scale_in popup window, to assign a specific layer or slave, in the deployment_id text box, enter the deployment ID for the VNF layer/slave you are scaling-in, and leave all other default values. You can find this deployment ID at the top of the VNF layer/slave deployment page:

    _images/vnflayer-deploy-id.png

    This workflow destroys layer 2 in SA-1, where the capacity need has decreased. Now you have one layer in SA-1 and three layers in SA-2, returning to your originally purchased, four-layer capacity.

    _images/scale-in-move.png

    Important

    To avoid being charged for more capacity than you originally purchased, coordinate the timing of your layer deprecation so that the additional (fifth) layer in SA-2 (from step 1) does NOT exceed the 48-hour grace period.

Upgrading BIG-IPs in a VNF group

You can upgrade both VNF and DAG groups. In this example, you will upgrade your BIG-IPs in your VNF group from version BIGIP-13.1.0.5 to version BIGIP-13.1.0.7. Manually upgrading involves executing the following workflows:

  • Upgrade Start
  • Upgrade
  • Admin state disable
  • Scale in
  • Upgrade Finish
_images/upgrade-wf.png

Important

PREREQUISITES:

  • In your VIM project (for example, OpenStack), you must upload the new BIG-IP image (for example, BIGIP-13.1.0.7) to which you are upgrading PRIOR to performing this workflow scenario. Point your browser to BIG-IP 13.1.0.X for F5 image downloads.

  • This upgrade process makes a copy of the VNF group being upgraded (including all layers); therefore, you must have sufficient vCPU and storage capacity to temporarily sustain two VNF groups. Otherwise, your upgrade workflow will fail.

    Tip

    If you do not have the required resources in the service area being upgraded, you can utilize the 48-hour grace period for deploying extra layers and avoid failure by temporarily scaling out layers in another service area, and then scaling in layers in the VNF group being upgraded. After the upgrade completes, you can redistribute the VNF layers amongst the service areas, accordingly. See the previous workflow scenario for complete steps.

  1. In VNF Manager, click the Deployments blade, click the VNF_group deployment you want to upgrade, at the top of the window click Execute workflow, and then click Upgrade Start.

  2. On the Execute workflow upgrade_start popup window, in the data text box, enter the dictionary parameters (in JSON format) for the VNF group you want to upgrade using the image ID for the new image to which you want to upgrade (for example, BIGIP-13.1.0.7-0.0.1.ALL_1SLOT). The image value is case-sensitive and must match the image name as entered in your VIM project.

    _images/wf-upgrade-start.png

    You can find this dictionary in YAML format in your inputs file, for example in OpenStack:

    sw_ref_vnf:
      data:
          image: BIGIP-13.1.0.7-0.0.1.ALL_1SLOT
          flavor: m1.xlarge
          availability_zone: nova
    

    Convert the previous YAML dictionary to JSON format, for example:

    { "image": "BIGIP-13.1.0.7-0.0.1.ALL_1SLOT", "flavor":"m1.xlarge", "availability_zone":"nova" }
    
  3. In the revision text box, increment that value by at least 1, leave all other default values, and then click Execute.

    This workflow defines the image to which the selected group will be upgraded. To verify the status of this workflow, click the VNF_group deployment, and scroll down to the Deployment Executions pane; in the Upgrade Start workflow row, the status column should display Complete.

    _images/upgrade-start.png
  4. In VNF Manager, click the Deployments blade, click the VNF_group deployment you want to upgrade, at the top of the window click Execute workflow, and then click Upgrade.

  5. On the Execute workflow upgrade popup window, in the deployment_id text box enter the deployment ID of the layer you want to upgrade. To find the layer ID, on the Deployments blade, click the VNF_layer being upgraded and at the top of the page, copy the ID.

    _images/vnflayer-deploy-id.png

    This workflow creates a copy of the group and all layers within that group, running the new version of the BIG-IP image defined in step 2. To verify the status of this workflow, click the VNF_group deployment, and scroll down to the Deployment Executions pane; in the Upgrade workflow row, the status column should display Complete.

    _images/upgrade.png

    Important

    At this point, you will want to leave both VNF layers running, to verify that the upgraded copy is running successfully. The duration will vary depending upon your organization’s standard operating procedures. If you experience problems with the upgraded layer, you can roll back the upgrade by running the Admin state disable workflow (see the next step) on the new, upgraded copy of the VNF layer.

  6. In VNF Manager, click the Deployments blade, click the old VNF_layer being upgraded (the layer running the previous BIG-IP version), at the top of the window click Execute workflow, and then click Admin state disable.

    Tip

    To find the layer ID of the old layer you are scaling in, click the Deployments blade, click the VNF_group being upgraded, scroll down to the Nodes pane, click the layer_id row, and the old layer will appear first in the list. Using the first layer ID in the list, on the Deployments blade search for the layer ID. Click that layer and at the top of the window, copy that ID.

    This workflow bleeds new traffic off the older VNF group being upgraded and moves new traffic to the newly upgraded copy of the VNF group. Allow enough time for existing transactions to complete before executing the Scale In workflow (step 7). To verify the status of this workflow, click the VNF_layer deployment, and scroll down to the Deployment Executions pane; in the Admin state disable workflow row, the status column should display Complete.

    _images/ASD-upgrade.png
  7. To scale in the old VNF layer, on the Deployments blade, click the old VNF layer (running the old version of BIG-IP), at the top of the window click Execute workflow, enter the deployment_id for the old layer using the following format, and then click Scale In.

    ["gilanTestUG_vnf_group_f00jl8_vnf_layer_90ew6m_layer_id_q5dm8i"]
    

    Tip

    To find the layer ID of the old layer you are scaling in, click the Deployments blade, click the VNF_group being upgraded, scroll down to the Nodes pane, click the layer_id row, and the old layer will appear first in the list. Using the first layer ID in the list, on the Deployments blade search for the layer ID. Click that layer and at the top of the window, copy that ID.

    This workflow destroys the old layer running the old version of BIG-IP. Now you have one upgraded layer, running the new version of BIG-IP. To verify the status of this workflow, click the VNF_layer deployment, and scroll down to the Deployment Executions pane; in the Scale In workflow row, the status column should display Complete.

    _images/upgrade-scale-in.png
  8. In VNF Manager, click the Deployments blade, click the VNF_group deployment you are upgrading, at the top of the window click Execute workflow, and then click Upgrade Finish.

  9. On the Execute workflow upgrade_finish popup window, leave all other default values, and then click Execute.

    This workflow clears the upgrade definition form you completed in step 1, the Upgrade Start workflow. To verify the status of this workflow, click the VNF_group deployment, and scroll down to the Deployment Executions pane; in the Upgrade Finish workflow row, the status column should display Complete.

    _images/upgrade-finish.png

Post auto-heal process

If you deployed the Gi-LAN or Gi-Firewall blueprint, the Heal workflow runs automatically on a VNF layer, DAG layer, or slave deployment when nodes become non-responsive or lose connectivity with the VNF master node.

  1. When a heal workflow runs automatically, the heal started indicator will appear on the Deployments blade, in the VNF_layer/slave_ID deployment type row:

    _images/auto-heal.png

    You can also see when the heal workflow runs automatically on the Dashboard -> Executions pane:

    _images/auto-heal-executions.png
  2. On the Deployments blade, find the slave or the layer deployment that was healed, click the healed deployment to open it, and then scroll down to the Nodes pane.

  3. In the Nodes pane, click the ve row. If the heal ran successfully, you will see two slave/layer instances listed: the old and the new instance.

    _images/auto-heal-ve.png
  4. To purge the old instance, scroll to the top of the Deployment page, click Execute workflow, and then select Purge from the list.

    _images/auto-heal-purge.png
  5. On the popup menu, leave all default values, and click Execute. This workflow bleeds traffic off the old instance (like Admin state disable) and removes it from the layer, leaving you with only the new instance.

    _images/auto-heal-postpurge.png

What’s Next?

High Availability Guide