6.4. Deployment Recommended Practices¶
6.4.1. What it is¶
In this guide we’ve explored all (or at least most) of the ways you can deploy SSL Orchestrator topologies. In the following, we expand on these with a set of “recommended practices”. None of these is required; they are simply included here as optimizations.
6.4.2. Service network isolation¶
Consider how security devices are connected to an SSL Orchestrator appliance. For inline layer 2 devices, this is physical connectivity from one BIG-IP interface, through the layer 2 device, and back to a separate BIG-IP interface. There may be additional switching in the path as well, but essentially this is a layer 3 hop across a layer 2 path.
But for layer 3 security services, you define an IP address on the security service to send the traffic to, and it routes the traffic back. In the layer 3 and HTTP service configurations, an “Auto Manage Addresses” feature exists that, when enabled, auto-defines a set of internal (RFC 2544) addresses to use for the security service. You may have wondered why this exists, and whether it is necessary. Simply put, the auto manage function is not explicitly required, and indeed there are situations where you would not or could not use it.
But consider the scenario where SSL Orchestrator is deployed into an existing network in which a set of layer 3/HTTP security devices already exists. It is tempting to keep these devices where they are in the network, and while this is technically possible, it exposes a significant risk: SSL Orchestrator would be sending decrypted traffic to these devices across the existing network, and any other entity on that network would then potentially have access to that sensitive information. The auto manage function, again while not expressly required, provides a secure solution to this challenge by creating an isolated network enclave for these devices to reside in. It does require you to move your security devices from their existing places in the network, but with the benefit of isolating and protecting the decrypted data, plus the ability to independently scale these devices. Each security service inhabits its own enclave, so it is effectively isolated from everything else except the defined BIG-IP interfaces.

Figure 95: Networking to an Inline Layer 3 Device
The one specific exception to this is in cloud deployments, where all addressing in an AWS VPC, for example, must fall within the VPC’s defined CIDR block. The auto manage function uses RFC 2544 addresses by default, which sit outside the VPC’s address space and therefore would not work in this environment. In an AWS VPC, you would need to disable auto manage addressing and carve out a small subnet for each layer 3/HTTP security service.
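If you do need to lay out these service networks by hand, for example in a VPC where auto manage addressing is disabled, the work is simple subnetting. The short Python sketch below is purely illustrative: the 198.19.0.0/16 parent block, the /25 enclave size, and the service names are assumptions for the example rather than values SSL Orchestrator mandates. It carves one small, non-overlapping subnet per inline layer 3/HTTP service, which is the same isolation pattern the auto manage function provides.

```python
# Illustrative only: carve one isolated subnet per inline L3/HTTP service.
# The parent block and prefix length are assumptions for this example; in a
# cloud VPC you would substitute a block carved from the VPC's own CIDR.
import ipaddress

def carve_service_subnets(parent_cidr: str, service_names: list[str], prefix: int = 25):
    """Return a non-overlapping subnet for each named security service."""
    parent = ipaddress.ip_network(parent_cidr)
    subnets = parent.subnets(new_prefix=prefix)
    return {name: next(subnets) for name in service_names}

if __name__ == "__main__":
    # 198.19.0.0/16 sits inside the RFC 2544 benchmarking range; a VPC
    # deployment would use a block from the VPC CIDR instead.
    enclaves = carve_service_subnets("198.19.0.0/16", ["dlp_service", "ips_service", "proxy_service"])
    for service, net in enclaves.items():
        hosts = list(net.hosts())
        # First usable address for the BIG-IP side, second for the service device.
        print(f"{service}: subnet {net}, BIG-IP side {hosts[0]}, service side {hosts[1]}")
```

However the addresses are assigned, the point is the same: each service gets its own small network that is reachable only from the BIG-IP interfaces that feed it.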
The recommended practice in this case, except where noted above, is to continue using the auto manage address function for inline layer 3/HTTP services, or otherwise take the necessary precautions to protect the sensitive decrypted traffic flowing to these services.
6.4.3. Deploying via traffic segmentation¶
The SSL Orchestrator dynamic service chain architecture presents a unique strategy for deploying services. In this model, services are independently addressable and scalable. It then becomes easy to add, remove, and scale security devices at will, simply by managing their respective resource pools, with virtually no downtime.

Figure 96: Security Service Resilience in a Dynamic Service Chain Architecture
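As a concrete illustration of what “managing the resource pool” can look like, the hedged sketch below adds a new device to a service’s LTM pool through the BIG-IP iControl REST API, which is one way to scale a service out without touching the service chain itself. The host, credentials, pool name, and member address are placeholders (SSL Orchestrator generates its own pool names for each deployed service), so treat this as a sketch of the approach rather than a drop-in script.

```python
# Hedged sketch: add a new inline service device to its LTM pool via iControl
# REST so the service scales out while the service chain stays unchanged.
# Host, credentials, pool name, and member address below are placeholder assumptions.
import requests

BIGIP = "https://bigip.example.com"
AUTH = ("admin", "admin-password")        # prefer token-based auth in production
POOL = "example_L3_service_pool"          # placeholder; SSLO creates real pool names
NEW_MEMBER = {"name": "198.19.64.30:0"}   # port 0 = any, typical for inline L3 services

session = requests.Session()
session.auth = AUTH
session.verify = False                    # lab only; verify certificates in production

# POST the new member to the pool's members collection.
resp = session.post(
    f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members",
    json=NEW_MEMBER,
    timeout=10,
)
resp.raise_for_status()
print(f"Added {NEW_MEMBER['name']} to {POOL}")
```

Removing or draining a member follows the same pattern, which is why scaling a security service in or out does not require redeploying the topology.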
So, while that certainly covers the security services, you may be asking how the same sort of resiliency can be applied to the SSL Orchestrator itself. Understandably, a high availability architecture is essential. As with all critical network devices, you need two to make sure that traffic continues to flow in the event of a device failure.

Figure 97: SSL Orchestrator and Security Service Resilience
But then we can take this resiliency a step further. The SSL Orchestrator sits inline to your traffic flow, and rules govern how traffic is handled: allowed/blocked, TLS intercepted/bypassed, and service chained. When you deploy a new firewall into a network environment, it’s usually a good practice to not be too restrictive at first, to ensure that you have basic connectivity before ratcheting down security rules.
The same concept can and should be applied when deploying SSL Orchestrator. Establish basic connectivity before applying specific traffic handling functions. In short, before doing anything else, establish a basic security policy rule that allows all traffic, bypasses TLS, and does not send to a service chain. Make sure that traffic flows as intended. Once you have good connectivity, introduce a small portion of traffic to a more defined security policy rule. This is a “segmented” deployment. Here are a few segmentation options to consider:
- A specific subnet in your environment. Some companies divide IP subnets by floors in their building, for example.
- A specific set of source IP addresses, maybe just a small group of tester IPs.
- A specific URL category.
Now create a rule that allows, intercepts TLS, and sends to a service chain. Whether segmenting by source IP, subnet, or URL category, attach that traffic condition to the rule so that ONLY matching traffic is TLS intercepted and service chained, while everything else is still bypassed. You can also restrict the Interception Rule itself to a single source IP subnet.
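To make the segmentation idea concrete, the following sketch models a security policy as an ordered, first-match rule list: one narrow rule that intercepts and service-chains traffic from a pilot subnet, followed by a catch-all that allows and TLS-bypasses everything else. It is purely conceptual; the real policy is built in the SSL Orchestrator guided configuration, and the subnet, rule names, and chain name here are invented for the example.

```python
# Conceptual model of a segmented security policy: first-match rule evaluation.
# The subnet, rule names, and service chain name are invented for illustration.
import ipaddress

RULES = [
    {   # Narrow rule: only the pilot subnet is intercepted and service chained.
        "name": "pilot_intercept",
        "match": lambda src: ipaddress.ip_address(src) in ipaddress.ip_network("10.10.5.0/24"),
        "action": {"allow": True, "tls": "intercept", "service_chain": "all_services_chain"},
    },
    {   # Catch-all rule: everything else is allowed, TLS bypassed, no chain.
        "name": "catch_all_bypass",
        "match": lambda src: True,
        "action": {"allow": True, "tls": "bypass", "service_chain": None},
    },
]

def evaluate(src_ip: str) -> dict:
    """Return the action of the first rule that matches the source IP."""
    for rule in RULES:
        if rule["match"](src_ip):
            return {"rule": rule["name"], **rule["action"]}
    return {"rule": None, "allow": False}

print(evaluate("10.10.5.25"))    # pilot subnet -> intercepted and service chained
print(evaluate("192.168.1.40"))  # everything else -> TLS bypass, no service chain
```

Growing the deployment is then simply a matter of widening the match condition on the pilot rule, or adding more narrow rules above the catch-all.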
Not all traffic is equal. Sure, it’s usually TCP, UDP, ICMP, ARP, etc., but there are things that simply don’t behave well in some network situations. Some protocols, however rare and typically unique to an organization, simply cannot be decrypted. Deploying an SSL decryption solution is generally more about understanding the traffic flowing through your network than about the solution itself, and it is far better to catch and mitigate these protocols or applications before they become a major issue. Segmented deployments are a great way to handle this, and further allow you to test on production traffic. As you work through any issues, slowly introduce additional traffic segments. If something breaks, simply back off the last change. As you get closer to full traffic processing, most of the issues will have been addressed, leaving a clear runway for a successful deployment.