Microsoft 70-412: Objective 1.1 Configure Network Load Balancing

Configure Network Load Balancing (NLB)

Network Load Balancing (NLB) is a high-availability (HA) feature that allows a group of servers to appear as a single server to external clients. The server group bound through NLB is usually referred to as an NLB cluster or server farm, and each individual server in the cluster is called a host or node. Network Load Balancing improves both the availability and scalability of a service that runs on all the individual nodes.

NLB improves availability by absorbing individual server failures: NLB detects unresponsive, disconnected, or dead servers and sends new client requests to the remaining functional hosts. NLB supports scalability because a group of servers in aggregate can handle more traffic than any one server can. As the demand for a service such as IIS grows, more nodes can be added to accommodate the increased workload.

Important Note: Each client is sent to an individual node in the cluster upon connection. This means that NLB clusters don’t aggregate resources together; they just facilitate the initial client connection to a server. A different clustering technology should be used for stateful applications such as database servers, because updates made on one node would not be seen if the client’s next connection lands on a different node.

I’m changing the order of items in the blueprint for a more logical learning flow.


TABLE OF CONTENTS

1.1 Configure NLB prerequisites
1.2 Install NLB nodes
1.3 Create new NLB Cluster
1.4 Configure cluster operation mode
1.5 Configure Affinity
1.6 Upgrade an NLB Cluster

 

1) Configure NLB prerequisites

    • At least one network adapter for load balancing; preferably two, so NLB traffic and normal network traffic can be separated.
    • Static IP addresses.
    • Only TCP/IP used on the adapter for which NLB is enabled. Do not add any other protocols (for example, IPX) to the NLB adapter.
    • All hosts in the NLB cluster must reside on the same subnet.
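
A quick way to sanity-check the prerequisites above is PowerShell. A minimal sketch, assuming the NLB-facing adapter is named “Ethernet” (a placeholder; substitute your own adapter name):

    # Hedged sketch: 'Ethernet' stands in for the adapter you plan to use for NLB
    $adapter = 'Ethernet'

    # Static IP check: PrefixOrigin should read Manual, not Dhcp
    Get-NetIPAddress -InterfaceAlias $adapter -AddressFamily IPv4 |
        Select-Object IPAddress, PrefixLength, PrefixOrigin

    # Confirm the adapter is connected before binding NLB to it
    Get-NetAdapter -Name $adapter | Select-Object Name, Status, LinkSpeed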

Back to Table of Contents

 

2) Install NLB Nodes

    • Open Server Manager and go to “Add Roles and Features”

    • In the Add Roles and Features Wizard select “Role-based or feature-based installation”

    • On the Server Selection screen, select the local server.

    • Skip over Server Roles since NLB is a Windows Feature. On the Feature Selection page, check the box for Network Load Balancing.

    • Click “Add Features” on the screen that follows

    • Repeat for all of the nodes that will participate in the NLB cluster
    • You can also install via Elevated PowerShell: PS> Install-WindowsFeature NLB -IncludeManagementTools
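
If you’re standing up several nodes at once, the same feature install can be pushed out remotely. A minimal sketch, assuming PowerShell remoting is enabled and using NLB1/NLB2 as placeholder node names:

    # Hedged sketch: NLB1 and NLB2 are placeholders for your own server names
    $nodes = 'NLB1', 'NLB2'

    Invoke-Command -ComputerName $nodes -ScriptBlock {
        # Same install as the one-liner above, run on each remote node
        Install-WindowsFeature -Name NLB -IncludeManagementTools
    }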

Back to Table of Contents

 

3) Create new NLB Cluster

  • Now that the NLB feature is installed, launch the Network Load Balancing Manager.

  • Right-click the item ‘Network Load Balancing Clusters’ and select ‘New Cluster’.

  • The next screen asks for the name or IP of the first node in your new NLB cluster. In this example, I’ve entered localhost. Select the correct interface for NLB traffic and select ‘Next’.

    PRODUCTION TIP: I doubt this will be on the test, but best practice is to have at least two interfaces on each cluster node: one for NLB traffic, and the other for standard network traffic. Also be sure that you’ve met all the NLB Prerequisites.

  • The next screen shows host parameters:

    There are three important configuration options on this page that I’ll walk you through:

    Priority (unique host identifier) drop-down: A value from 1 to 32 that must be unique per host. The priority setting essentially determines which host handles non-load-balanced network traffic: if the host with priority 1 is unavailable, the host with the next numeric priority value handles that traffic.

    Dedicated IP Address: You can modify the host’s IP address from this screen. In practice, I’ve only done this if the interface used for NLB traffic has more than one IP address assigned to it. Keep in mind that this is the IP address for the HOST, not the cluster IP address.

    Initial Host State: Started, Suspended, or Stopped. This setting determines the NLB status for this node. The default is Started.

  • The next screen is the beginning of the cluster configuration options. On this page, click ‘Add’ to configure a cluster IP address. The address you choose will be the virtual IP address (VIP) used to connect to the whole NLB cluster. It must be on the same logical subnet as the host IP address(es) chosen on the previous wizard page.

  • The next screen configures the cluster IP/DNS settings as well as the cluster operation mode. It’s also the next sub-objective.
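
Before moving on to that, the wizard steps above can also be scripted with the NetworkLoadBalancingClusters module. A minimal sketch, where the interface name, cluster FQDN, VIP, and second node name are all placeholders to swap for your own values:

    # Hedged sketch: interface name, cluster name, IPs, and node name are placeholders
    Import-Module NetworkLoadBalancingClusters

    # Create the cluster on the local host (unicast is the default operation mode)
    New-NlbCluster -InterfaceName 'Ethernet' -ClusterName 'web.contoso.local' `
        -ClusterPrimaryIP '192.168.1.100' -SubnetMask '255.255.255.0' -OperationMode unicast

    # Join a second host to the new cluster
    Get-NlbCluster | Add-NlbClusterNode -NewNodeName 'NLB2' -NewNodeInterface 'Ethernet'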

Back to Table of Contents

 

4) Configure cluster operation mode

  • Back in the wizard, configure the cluster IP/DNS settings as well as the cluster operation mode.

Cluster IP Configuration: A relatively straightforward section: verify that the listed IP address is the virtual IP that you want used for the NLB cluster, and add an FQDN (fully qualified domain name) for the cluster. Register the FQDN with the DNS server of your choosing, but that’s outside the scope of this walkthrough.

  • Cluster operation mode: Here, you set the operation of the NLB with radio buttons.
    • Unicast: This is the default. The NLB cluster’s virtual MAC address replaces the MAC address on each individual host’s NLB NIC. Some fanciness happens to all of the outgoing network packets to prevent upstream switches from discovering that all of the cluster nodes functionally have the same MAC address. Unicast mode requires a second NIC for communication between cluster nodes.
      • In practice, Unicast mode has a few disadvantages:
        • Requires 2 NICs- one for NLB traffic and one for peer communications.
        • Incoming NLB packets are sent to all the ports on the switch, possibly causing switch flooding.
        • Because of the switch flooding, VMware and other hypervisor vendors recommend Multicast mode, and single VM migration is not supported.
    • Multicast: Each host keeps its individual hardware MAC address; the cluster MAC address is a multicast address assigned to all of the NLB adapters, and each host translates it to its local NIC’s MAC. Local communication is not affected because each host retains a unique hardware address.
      • Multicast has a few disadvantages as well:
        • Upstream routers will require a static ARP entry, because this cluster mode resolves a unicast IP address to a multicast MAC address.
        • Without IGMP, switches may need additional configuration to send the multicast traffic only to the appropriate switch ports.
        • Some older switches and routers do not support mapping unicast IP to multicast MAC. In these situations the hardware will need to be replaced to use Multicast NLB.
    • IGMP Multicast: Similar to standard Multicast, but it allows compatible switches to examine the contents of the multicast packets and limit flooding to the relevant ports, a technique called IGMP snooping.
      • Everything has trade-offs, and IGMP Multicast is no different:
        • It requires more complicated upstream switch configuration and the enabling of multicast routing.
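
The operation mode can also be changed after the cluster exists. A small sketch using the same PowerShell module; note that changing the mode briefly takes the cluster out of service, and the exact mode value names are worth confirming with Get-Help Set-NlbCluster on your build:

    # Hedged sketch: run from a cluster host; the mode change interrupts the cluster briefly
    Import-Module NetworkLoadBalancingClusters

    # Switch the local cluster from unicast to multicast
    Get-NlbCluster | Set-NlbCluster -OperationMode multicast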

Back to Table of Contents

5) Configure affinity

  • Configure port rules and affinity

Port rules define what traffic will be handled by the NLB cluster and how it will be load balanced. Port rule definitions match incoming traffic by a range of destination TCP or UDP ports and, optionally, a destination IP address. Incoming traffic can match only one rule, and the manager won’t let you create overlapping rules, so rule conflicts aren’t possible.

The default rule shown above is usually okay for production; it simply load balances all traffic. But I imagine the exam will require more granular controls, so let’s create a new port rule:

  • Configuring port rules and cluster affinity settings

Let’s go over the options:

  • Cluster IP address: By default new port rules match all of the NLB Cluster’s IP addresses, but if your cluster has multiple IP addresses assigned, you can limit a rule to a specific IP address here.
  • Port Range and Protocols: Pretty self-explanatory. To have a rule for a specific port number, set the From and To to the same number (For example, From: 443 To: 443). Select TCP, UDP, or Both to handle the protocol used for communication. The ranges you define cannot overlap existing rules.
  • Filtering Mode: This portion defines how incoming traffic is divided up for cluster nodes. Why is it called Filtering instead of something more explanatory? Good question I say.
    The Multiple Host filtering mode is the default and has additional Affinity requirements and optional Timeout settings.

    • Affinity settings: Affinity affects how client interaction with the NLB cluster is handled, specifically around session state
      • None: Multiple requests from the same client can access any of the NLB nodes
        • With no affinity, load is spread fairly evenly across the nodes, which gives the best performance. The services being load balanced must be stateless, or subsequent connections may land on other nodes and give unpredictable results because the session data isn’t present.
      • Single: Multiple requests from the same client must be handled by the same NLB node – “Sticky Sessions”.
        • With Single Affinity, once a client establishes a connection to a cluster node, subsequent connections will go to the same node. Because of this, client state is maintained across multiple TCP connections.
      • Network: Multiple requests from the same TCP/IP address range must access the same node. Usually used for Internet-facing clusters.
        • Network affinity is the same as Single affinity, but applied to a network range (the Class C portion of the client’s address) instead of an individual client.
  • The Single Host filtering mode directs all traffic for the rule to the host with the highest handling priority. If that host fails, the traffic is directed to the host with the next-highest priority.
  • The Disable this Port Range setting will force the traffic in the range to be dropped.
  • The Timeout setting (applicable to Multiple Host filtering) protects clients from changes to the NLB settings during a session. If a client connects to the NLB while it is configured for Multiple Host filtering with Single affinity and a timeout of 15 minutes, a change to Multiple Host filtering with No affinity will not affect that client until the timeout is reached.
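
As a PowerShell alternative to the port rule dialog, the same settings can be scripted. A minimal sketch, with the port range, protocol, and affinity values as assumptions to adapt (and worth verifying with Get-Help Add-NlbClusterPortRule on your build):

    # Hedged sketch: adjust the port range and affinity for your own service
    Import-Module NetworkLoadBalancingClusters

    # Remove the catch-all default rule (ports 0-65535) so the new range doesn't overlap it
    Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force

    # HTTPS-only rule: TCP 443, Multiple Host filtering, Single affinity ("sticky sessions")
    Get-NlbCluster | Add-NlbClusterPortRule -StartPort 443 -EndPort 443 -Protocol TCP -Mode Multiple -Affinity Single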

    When editing an existing rule on an individual node, you get a slightly different screen.

    This screen introduces Load Weight and Handling Priority.

  • Load Weight: The default setting is Equal, but modifying this changes how the port rule’s traffic is distributed: assign a weight greater or less than 50 and the node takes a correspondingly larger or smaller share of the traffic.
  • Handling Priority: Only available with Single Host filtering. This is the order in which the port range’s traffic is sent to cluster nodes. If there is no value here, check the cluster-level rule settings for this host.
  • IMPORTANT: I’ve seen practice questions that play on the similarities of Handling Priority and Host Priority. Remember that Handling priority only applies to Single Host filtering.

 

6) Upgrade an NLB cluster

I’m going to make an assumption here, since there isn’t an “NLB version” or something similar. The assumption is that we’re upgrading an NLB cluster that was configured on a previous Windows Server version to cluster nodes running Windows Server 2012 R2.

I think you have two options to accomplish that: a disruptive upgrade (taking the cluster down, upgrading the hosts, and then building a new cluster) and a less disruptive rolling upgrade.

  • Disruptive:
    • Take the cluster offline and upgrade each host one-by-one to Server 2012 R2. Once complete, connect the upgraded hosts to the cluster. Naturally, the cluster cannot service connections during this operation.
  • Rolling Upgrade:
    • Leave the cluster online and drain each node of existing connections. A Drainstop (right-click a node in the cluster and select Drainstop in the Control Hosts menu) will also refuse new connections, so use it wisely.

    • Upgrade the host to Server 2012 R2.
    • Click ‘Start’ on the node after the upgrade is complete.
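
The Drainstop/Start cycle can also be scripted, which helps when rolling through several nodes. A short sketch, with NLB1 standing in for whichever node is being upgraded:

    # Hedged sketch: NLB1 is a placeholder for the node being upgraded
    Import-Module NetworkLoadBalancingClusters

    # Drainstop: refuse new connections and let existing ones finish, up to the timeout
    Stop-NlbClusterNode -HostName 'NLB1' -Drain -Timeout 30

    # ...upgrade the host to Server 2012 R2, then bring it back into rotation
    Start-NlbClusterNode -HostName 'NLB1'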

Back to Table of Contents
