F5 – Load Balancing

When designing an F5 Load Balancing solution, there are several options to consider. Here are the primary ones:

  1. Load Balancing Method: F5 supports several load balancing methods, including round robin, least connections, predictive, fastest, observed, ratio, dynamic ratio, etc. Your choice here depends on your application’s specific requirements.
  2. Persistence: Persistence is another important consideration in F5 load balancing. It ensures that once a client has been directed to a particular server, subsequent requests from that client will be sent to the same server. There are several types of persistence to consider, such as source address affinity persistence, cookie persistence, SSL session ID persistence, etc.
  3. Health Checks/Monitoring: F5 offers a wide variety of health checks and monitoring options. You can monitor your servers using simple options like ICMP or more complex application-level checks like HTTP, HTTPS, FTP, etc. F5 can also send specific queries to databases or run a script to ensure the server is operating correctly.
  4. Virtual IPs and Pools: You have to create Virtual IPs (VIPs) to receive client traffic and direct it to server pools. Server pools are groups of servers that host the same application content.
  5. High Availability: Designing for high availability is key in any load balancing solution. F5 offers active-standby or active-active configurations for redundancy.
  6. Security Options: F5 load balancers can also act as a full proxy and perform functions like SSL offloading/bridging, HTTP security (including CSRF, XSS, RFI, and LFI protection), DDoS protection, IP intelligence, etc.
  7. SSL Offloading: This is a method of removing SSL-based encryption from incoming traffic to relieve a web server of the processing burden of decrypting/encrypting traffic sent via SSL.
  8. iRules: iRules is F5’s flexible, event-driven scripting language (based on Tcl) for programmatically handling network traffic. It allows you to customize how you intercept, inspect, transform, and direct inbound or outbound application traffic.
  9. Scalability: Consider how your F5 load balancing solution can scale to handle increased traffic in the future. This might involve setting up additional F5 appliances in a cluster, or it might involve cloud-based solutions, like F5’s BIG-IP Cloud Edition.
  10. Integration with other F5 Solutions: Consider if your load balancing solution will need to integrate with other F5 solutions, such as BIG-IP DNS for global load balancing, BIG-IP Application Security Manager for WAF capabilities, etc.
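
To make item 1 concrete, here is a minimal Python sketch contrasting round robin with least connections; the pool addresses and connection counts are hypothetical, and a real BIG-IP tracks connection state itself:

```python
from itertools import cycle

# Hypothetical pool of back-end servers.
pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round robin: rotate through the pool regardless of load.
_rr = cycle(pool)
def round_robin():
    return next(_rr)

# Least connections: pick the member with the fewest active connections.
active = {"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}
def least_connections():
    return min(active, key=active.get)

assert [round_robin() for _ in range(4)] == ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]
assert least_connections() == "10.0.0.2"  # member with only 3 active connections
```

Round robin distributes evenly but ignores server load; least connections adapts to uneven request durations, which is why it is often preferred for long-lived connections.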

Remember, the correct design is subjective and largely depends on the needs of your specific environment and applications.
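
As an illustration of persistence (item 2), source address affinity can be modeled as a stable mapping from client IP to pool member. A real BIG-IP keeps a persistence table with timeouts rather than hashing, so this is only a sketch, with made-up addresses:

```python
import hashlib

# Hypothetical pool of back-end servers.
pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def persist(client_ip: str) -> str:
    """Map a given client IP to the same pool member on every request."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return pool[digest[0] % len(pool)]

# The same client always lands on the same server.
assert persist("203.0.113.10") == persist("203.0.113.10")
```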



F5 BIG-IP can be deployed in four main modes. These modes determine how traffic is handled in the network, and each has its advantages and disadvantages depending on the specific use case.

  1. Routed Mode: In this mode, F5 BIG-IP behaves as a router. It makes forwarding decisions based on layer 3 information (IP addresses). Both the client-side and server-side subnets are different, and F5 has a route to both. It uses its routing table to determine where to send packets. This mode supports all load balancing methods but requires additional IP addresses for Virtual Servers.
  2. Bridged Mode (Transparent): This is also known as Layer 2 forwarding. Here, BIG-IP acts as a transparent bridge between clients and servers. It makes its decisions based on MAC addresses and doesn’t alter the IP addresses. This mode can be useful when you can’t change the network IP schema, or in firewall scenarios.
  3. SNAT (Source Network Address Translation): In this mode, the source IP addresses are changed as traffic passes through the F5. The F5 BIG-IP system substitutes its own IP address for the source address in each packet. This is helpful where the pool members (back-end servers) need a consistent IP to respond to, or where the servers have no route back to the clients except via the F5.
  4. nPath Routing (Direct Server Return): This method allows the server to respond directly to the client, bypassing the load balancer on the return path. The idea is to free up resources on the load balancer by offloading the server response traffic. This is beneficial in situations where there is a significant amount of return traffic, such as video streaming or large data transfers. The downside of nPath is that it can be more complicated to implement, and it doesn’t support some features, like persistence or response-based load balancing decisions.

Choosing the right mode depends on your network topology, application requirements, and how you want to handle client connections and server responses. It’s best to consult with a network or F5 specialist to make the best decision for your specific use case.
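
The nPath/Direct Server Return path described in item 4 can be sketched as follows; the addresses are hypothetical and the Layer 2 details are elided:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str  # source IP
    dst: str  # destination IP

# Hypothetical addresses.
VIP, CLIENT = "192.0.2.10", "198.51.100.7"

# nPath: the F5 forwards the packet toward the chosen server's MAC address
# WITHOUT rewriting the destination IP -- the VIP stays in place.
inbound = Packet(src=CLIENT, dst=VIP)
to_server = inbound  # only the L2 frame's destination MAC changes

# The server owns the VIP on a loopback interface, so it answers the client
# directly with the VIP as source -- the reply never transits the F5.
reply = Packet(src=VIP, dst=CLIENT)

assert to_server.dst == VIP and reply.src == VIP
```

Because the F5 never sees the reply, features that depend on inspecting responses (cookie persistence, response-based decisions) cannot work in this mode, as noted above.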

SNAT “Same Subnet” vs “Different Subnet”

SNAT, or Source Network Address Translation, is a type of NAT (Network Address Translation) where the source IP addresses are modified. It’s commonly used in load balancing to replace the client’s source IP address with an IP address from the load balancer so that a server always responds back to the load balancer.
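
Conceptually, SNAT is just a rewrite of a packet's source address; here is a minimal sketch with made-up addresses:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src: str  # source IP
    dst: str  # destination IP

SNAT_IP = "10.1.1.100"  # hypothetical address owned by the load balancer

def apply_snat(pkt: Packet) -> Packet:
    # Replace the client's source IP so the server replies to the F5,
    # not to the client directly.
    return replace(pkt, src=SNAT_IP)

pkt = Packet(src="198.51.100.7", dst="10.1.1.20")
assert apply_snat(pkt) == Packet(src="10.1.1.100", dst="10.1.1.20")
```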

When designing your SNAT setup, the decision to place the SNAT addresses on the same subnet as your servers or on a different one depends on your network design and business needs. Here are some pros and cons of both setups:

SNAT on the Same Subnet as Servers:

Pros:

  • Simplified Routing: Since the SNAT addresses and servers are on the same subnet, routing configuration is less complex.
  • Easier Management: Being on the same subnet can mean easier management, with fewer network configurations to worry about.

Cons:

  • Broadcast Traffic: With SNAT addresses and servers on the same subnet, there may be more broadcast traffic, which can impact network performance.
  • Security: If a security breach occurs, it could potentially affect all devices on the subnet.

SNAT on Different Subnets than Servers:

Pros:

  • Security: Placing SNAT addresses on a different subnet can add a layer of security, as it provides segmentation and isolation of network components.
  • Traffic Management: It can help manage traffic better by preventing unnecessary broadcast traffic from reaching the servers.

Cons:

  • Complex Configuration: More complex routing and network configurations are required.
  • Management: More effort is required to manage the network infrastructure due to the increased complexity.

Best Practices:

There isn’t a one-size-fits-all answer for this, as it really depends on the specifics of your network and business requirements. However, a common best practice is to segregate different types of network devices onto different subnets for better security and traffic management. This usually means having SNATs on different subnets from the servers.

Finally, it’s also important to take into consideration the networking capabilities of your infrastructure (virtual or physical), the load balancing techniques being used, and other specific details of your use case when deciding where to place SNATs in relation to servers.



The most commonly used modes in F5 BIG-IP load balancing solutions are Routed Mode and SNAT (Source Network Address Translation).

Routed Mode: This is commonly used because it fits well with standard networking concepts and practices. F5 BIG-IP in this mode acts as a router, making decisions based on IP addresses. It integrates well with most network designs.

SNAT: This mode is often used because it gives the load balancer more control over the traffic flow and is especially useful when the servers do not have a route back to the client network other than through the F5 device. It helps manage situations where you need to ensure responses from your servers get back to the client correctly, even if your servers are on different subnets or networks.

That said, the “most used” can vary significantly based on the specific needs and constraints of individual network environments and application requirements. Different use cases and requirements may lead to different choices. Always analyze your specific needs, requirements, and constraints before making a decision.



Sure, let’s dive into the details of these two modes: Routed Mode and SNAT.

Routed Mode:

In Routed Mode, the F5 BIG-IP system operates as a router and makes decisions based on Layer 3 (IP addresses).

Here is a typical traffic flow in Routed Mode:

  1. A client sends a request to access a service. The destination IP is the Virtual Server IP (VIP) hosted on the F5 BIG-IP device.
  2. The F5 device receives the request and uses the load balancing method defined to select a server from the backend pool.
  3. F5 changes the destination IP from the VIP to the selected server’s IP and then forwards the packet to the server.
  4. The server processes the request and sends a response. The destination IP for this response is the original client’s IP.
  5. The response goes back through the F5 device, which translates the source IP from the server’s IP back to the VIP (the reverse of the inbound destination translation) and forwards the packet to the client.
  6. The client receives the server’s response.

Routed Mode requires that both the client network and server network have a route to the F5 device, and that the F5 device has routes to the client and server networks.
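
The steps above can be sketched as address rewrites on a packet. The addresses are hypothetical, and only the IP header is modeled:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str  # source IP
    dst: str  # destination IP

# Hypothetical addresses.
VIP, CLIENT, SERVER = "192.0.2.10", "198.51.100.7", "10.0.0.2"

# Steps 2-3: F5 picks a pool member and rewrites the destination (VIP -> server).
inbound = Packet(src=CLIENT, dst=VIP)
to_server = Packet(src=inbound.src, dst=SERVER)  # client source IP is preserved

# Step 4: the server replies to the client's IP...
reply = Packet(src=SERVER, dst=CLIENT)

# Step 5: ...but routing carries the reply through the F5, which restores
# the VIP as the source so the client sees a reply from the address it contacted.
to_client = Packet(src=VIP, dst=reply.dst)

assert to_server.src == CLIENT and to_client.dst == CLIENT
```

Note that because the client's source IP is preserved, the servers must route client-bound traffic via the F5, exactly as the paragraph above describes.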


SNAT (Source Network Address Translation):

In SNAT mode, F5 BIG-IP changes the source IP address when traffic passes through it.

Here is a typical traffic flow in SNAT mode:

  1. A client sends a request to access a service. The destination IP is the VIP on the F5 BIG-IP.
  2. The F5 device receives the request and selects a server from the backend pool based on the load balancing method defined.
  3. F5 then performs Source Network Address Translation (SNAT). It changes the source IP from the client’s IP to a SNAT IP configured on the F5. The SNAT IP should be an IP that the server can route back to.
  4. F5 also changes the destination IP from the VIP to the selected server’s IP and forwards the packet to the server.
  5. The server processes the request and sends a response. However, the destination IP for this response is the SNAT IP (not the client’s IP), as this was the source IP in the packet it received.
  6. The response goes back through the F5 device, which changes the source IP from the server’s IP back to the VIP and the destination IP from the SNAT IP back to the original client’s IP.
  7. The client receives the response from the server.

In SNAT mode, the server’s responses are forced to go back through the F5 device because their destination IP is the SNAT IP. This mode is beneficial when servers cannot directly access client networks or when you want the load balancer to handle all traffic to perform additional functions.
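
The SNAT flow above can be sketched the same way, again with hypothetical addresses and only the IP header modeled:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str  # source IP
    dst: str  # destination IP

# Hypothetical F5 addresses and endpoints.
VIP, SNAT_IP = "192.0.2.10", "10.0.0.100"
CLIENT, SERVER = "198.51.100.7", "10.0.0.2"

# Steps 3-4: rewrite both the source (client -> SNAT IP) and the
# destination (VIP -> chosen pool member).
to_server = Packet(src=SNAT_IP, dst=SERVER)

# Step 5: the server answers the SNAT IP, so the reply must transit the F5.
reply = Packet(src=SERVER, dst=SNAT_IP)

# Step 6: the F5 reverses both translations before the client sees the packet.
to_client = Packet(src=VIP, dst=CLIENT)

assert reply.dst == SNAT_IP  # the server cannot bypass the load balancer
assert to_client == Packet(src=VIP, dst=CLIENT)
```

The trade-off, compared with Routed Mode, is that the servers no longer see the real client IP; techniques such as inserting an X-Forwarded-For header are commonly used to preserve it.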