4.6. Creating Service Channels for Inline Services

4.6.1. What it is

Security devices attached to SSL Orchestrator are opaque to the external environment. They are protected, isolated, and do not generally interact outside of internal connectivity with the F5 BIG-IP. While this design is intended to protect the security devices and the sensitive decrypted traffic flowing through them, it also presents a significant consideration.

Specifically, SSL Orchestrator dynamically steers traffic through security services by means of a signaling mechanism, such that only managed (signaled) traffic can flow through the service chain. Should a security device require its own connectivity to external resources, the device-initiated flows would be unmanaged and thus not allowed. There are scenarios, however, where a security device still needs to access external resources.

For example, an explicit proxy device would need access to DNS. Malware detection devices would need to be able to update signatures. And many security products require phone-home access to validate licensing. In these cases, it is necessary to create “service control channels”, pathways to allow device-initiated traffic to egress to the Internet.

Note that service control channels are generally only required for inline L3 and HTTP-type services (however they could also be needed by inline L2 services).

4.6.2. How to build it

To better understand how service control channels work, it is necessary to first understand how traffic flows through an inline security service within the SSL Orchestrator service chain. Inline devices are so named because traffic passes through the device: normally, traffic enters one interface and exits a second interface. A single interface can also be used if the layer 3/HTTP device supports 802.1Q VLAN tagging (separating a single interface into two logical VLANs and subnets) or, in rare cases, the device can be deployed “one-arm” if it source NATs on egress. When you create an inline service in the SSL Orchestrator UI, that service minimally creates two virtual servers:

  • An “inbound” or “to-service” virtual server that sends traffic to the inline device

  • An “outbound” or “from-service” virtual server that receives traffic back from the inline device


Figure 76: Service Virtual Servers

The above image represents an inline HTTP service from the perspective of LTM virtual servers:

  • When instructed to do so by the active service chain, traffic flow attaches to the ssloS_MWG-t-4 virtual server which then load balances the traffic across the inline services. The -t-4 here represents a TCP IPv4 virtual server. There could also be TCP (-t) and UDP (-u), IPv4 (-4), and IPv6 (-6) combinations depending on configuration.

  • An inline layer 3/HTTP service routes back to the F5 BIG-IP; its gateway resides on the VLAN attached to the ssloS_MWG-D-0-t-4 virtual server. Again, -t-4 represents a TCP IPv4 virtual server, with -D meaning “destination”. This virtual server is responsible for re-attaching the traffic to the current flow context so that it can be passed to other devices in the service chain.

With the above flow in mind, it should be clear that the -D-0- virtual server consumes the service return data specifically because it listens on the VLAN attributed to “from-service” traffic. The -D-0- virtual server is a wildcard listener, meaning it listens for TCP IPv4 traffic on any source IP, any destination IP, and any destination port. So, to create a service control channel, you must create a new virtual server listening on this same “from-service” VLAN, but one that is more specific than the wildcard -D-0- listener. Specificity in this case would be some combination of source IP, destination IP, destination port, and layer 4 protocol (TCP/UDP).
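As a sketch of this specificity relationship, the following tmsh-style listing contrasts the wildcard from-service listener with a more specific control channel on the same VLAN. The virtual server, VLAN, and IP values here are illustrative assumptions; the SSL Orchestrator-created objects in your environment will have their own names.

```
# Wildcard "from-service" listener created by SSL Orchestrator (illustrative):
# any source, any destination, any port, TCP, bound to the from-service VLAN.
ltm virtual ssloS_MWG-D-0-t-4 {
    destination 0.0.0.0:any
    source 0.0.0.0/0
    ip-protocol tcp
    vlans { ssloN_MWG_out }
    vlans-enabled
}

# A manually created control channel on the same VLAN. Because its source,
# destination port, and protocol are more specific, BIG-IP selects it over
# the wildcard listener for matching device-initiated flows.
ltm virtual service_control_dns {
    destination 0.0.0.0:53
    source 198.51.100.10/32
    ip-protocol udp
    vlans { ssloN_MWG_out }
    vlans-enabled
}
```

BIG-IP evaluates virtual servers by order of precedence, preferring more specific source/destination/port matches, which is what allows the control channel to win over the wildcard listener on the shared VLAN.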

For example, to allow an inline proxy device to talk to an external DNS, you might create a service control channel virtual server with the following characteristics:

  • Protocol: UDP

  • Source IP: the inline device’s IP address on the from-service subnet

  • Destination IP: the external DNS server’s address

  • Destination port: 53

It is almost always most useful, in this case, to a) define the device’s IP in the source field, and b) disable source NAT on the device so that client-server traffic flowing through it maintains the client’s true address. With source NAT disabled, any traffic sourced from the device’s own IP must be device-initiated. Security products refer to this setting variously as “SNAT”, “source NAT”, “secure NAT”, or “IP spoofing”. When possible, disable source NAT (or, if the product uses the term “IP spoofing”, enable that). Disabling source NAT also makes creating service control channels easier, as you only need to define one that listens on the device’s source IP, any destination IP, any port, and any protocol.
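With source NAT disabled on the device, the single broad control channel described above might be sketched in tmsh as follows. The virtual server name, device IP, VLAN, and pool names are hypothetical placeholders, not objects SSL Orchestrator creates for you.

```
# Catch-all control channel for device-initiated traffic: matches anything
# sourced from the inline device's from-service IP, to any destination,
# any port, any protocol, and routes it to an egress gateway pool.
create ltm virtual device_control_channel \
    source 198.51.100.10/32 \
    destination 0.0.0.0:0 \
    ip-protocol any \
    profiles add { fastL4 } \
    vlans-enabled vlans add { from_service_vlan } \
    translate-address disabled translate-port disabled \
    pool egress_gateway_pool
```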

To create the service control channel from the above DNS example, navigate to Local Traffic -> Virtual Servers in the F5 BIG-IP UI. Any setting not specified below can be left as is.

Control Channel Virtual Server settings (User Input):

  • Name: Provide a unique name.

  • Type: Select Performance (Layer 4). No specific processing is needed, so FastL4 is the most optimal path to egress.

  • Source Address: Enter the source IP of the inline service’s “from-service” interface. If there are multiple devices, you can enter a subnet that covers them.

  • Destination Address/Mask: Enter the external DNS server’s IP address, assuming the inline service is using it for external DNS. Again, with SNAT disabled on the inline service, it may simply be possible to define a wildcard here to allow traffic to any destination.

  • Service Port: Enter 53, assuming this is the service port the inline service will send to. As with the above SNAT recommendation, this could also be a wildcard (*) to allow traffic to any destination port.

  • Protocol: Select UDP, assuming DNS requests will be sent over UDP.

  • VLAN and Tunnel Traffic: Select the service’s from-service VLAN. This is the most critical setting. Traffic leaves the service on this VLAN and is normally consumed by the -D-0- virtual server bound to that VLAN. This service control channel must listen on the same VLAN and represent a more specific path for outbound traffic.

  • Source Address Translation: If the SSL Orchestrator topology requires source NAT to egress, the service control channels will too. Enable this setting as required for outbound traffic flow.

  • Address Translation: Select Disabled.

  • Port Translation: Select Disabled.

  • Default Pool: Select an existing gateway pool, such as the one created by the SSL Orchestrator topology (or create a new one). Note that selecting a pool may also enable address and port translation; check these settings again after selecting the pool.

Click the Finished button to complete the service control channel virtual server configuration. Depending on your environment, you may need to create multiple service control channels. However, as previously stated, if source NAT can be disabled on the inline device, it will usually only be necessary to create one listening on the device’s from-service interface IP address.
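For reference, the GUI steps above could be approximated in a single tmsh command. This is a hedged sketch: the virtual server name, device IP, VLAN, and pool names are assumptions for illustration, and the source-address-translation line applies only if your topology requires SNAT to egress.

```
# DNS service control channel: device-sourced UDP/53 traffic on the
# from-service VLAN is forwarded to the egress gateway pool without
# address or port translation.
create ltm virtual service_control_dns \
    source 198.51.100.10/32 \
    destination 0.0.0.0:53 \
    ip-protocol udp \
    profiles add { fastL4 } \
    vlans-enabled vlans add { from_service_vlan } \
    translate-address disabled translate-port disabled \
    source-address-translation { type automap } \
    pool egress_gateway_pool
```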

Also note that the service control channel binds to a VLAN, and potentially a gateway pool, used by an SSL Orchestrator-managed configuration. If the VLAN or pool was created by SSL Orchestrator, this control channel virtual server must be unbound from that VLAN or pool before the service can be deleted or its networking settings changed; otherwise dependency errors will occur. If the VLANs are created manually in the BIG-IP and then consumed by the SSL Orchestrator configuration, there will be no dependency issues.

4.6.3. Video