Windows Server

Migrating VMware View vCenter to a new host

Hey everyone!

The end of support for Windows Server 2003 is coming, and a lot of organizations are scrambling to migrate their production systems before the July 14, 2015 deadline. Many groups are still running the vCenter Server (5.0 or 5.1) that VMware View relies on atop Windows Server 2003, and I was recently asked about the migration path. For a vCenter/Windows OS compatibility matrix, click here.

There are two scenarios: one where the vCenter server keeps the same hostname and IP address, and one where the name and IP change. Today’s post deals with the first scenario, and tomorrow’s will address the second.


Migrating vCenter to a new host without VMware View downtime
IMPORTANT NOTE: Proceed at your own risk. This operation is not supported by VMware. Click HERE for the KB.

  1. Export RSA Keys from old server
    1. Open an administrative command prompt and navigate to the %windir%\Microsoft.NET\Framework\v2.0xxxxx directory.
    2. The ASP.NET IIS registration tool exports the RSA public-private key pair from the SviKeyContainer container to the keys.xml file and saves it locally. Type: aspnet_regiis -px "SviKeyContainer" "c:\keys.xml" -pri
    3. Copy the .XML file to the new server or network storage.
  2. Document the database user names and passwords.

  3. Shut down the vCenter services (and Composer, if co-resident) on the vCenter server being replaced.

  4. Log into the View Administrator portal and disable virtual machine provisioning.

    1. Expand View Configuration
    2. Go to Servers\vCenter Servers
    3. Select the vCenter that will be migrated, and select ‘Disable Provisioning’
  5. Perform end-to-end backups of your environment (vCenter, Composer, ADAM). KB for that HERE.
  6. Shut down the old vCenter Server.
  7. In Active Directory, delete the old vCenter computer object.
  8. On the new vCenter Server, rename the machine to match the old vCenter Server, assign it the same static IP as the old vCenter, and join it to the domain.
  9. Migrate the RSA keys to the new vCenter Server
    1. On the destination computer, open an administrative command prompt and navigate to the %windir%\Microsoft.NET\Framework\v2.0xxxxx directory.
    2. Type: aspnet_regiis -pi "SviKeyContainer" "path\keys.xml" -exp
  10. Install SQL Native Client (sqlncli.msi)
  11. Configure ODBC System DSN connections for vCenter (Native 64-bit) and View Composer (Native 64-bit).
  12. Perform a simple installation of vCenter Server and components (the same version that was running on the old vCenter Server).
  13. If View Composer is not standalone, install View Composer. This may be a good time to split View Composer off of the vCenter server if that’s your ultimate goal.
  14. Ensure that all services started and are running.
  15. Connect to vCenter using either the vSphere client or Web Client (Depending on version). Ensure that hosts have reconnected and everything looks as you’d expect.
  16. In View Administrator, you may need to go to the Dashboard and verify the SSL certificates for the new vCenter.
  17. Enable provisioning in View Administrator (it should just work).
  18. Double-check any customization specs in the new vCenter Server.
  19. Test recomposing and provisioning of new linked clones.
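
Step 14 can be eyeballed in services.msc, but a quick PowerShell check is faster. A sketch; service display names vary by vCenter version, so adjust the filter to match your install:

```powershell
# List all VMware-related services on the new vCenter Server and their states.
Get-Service -DisplayName "VMware*" | Format-Table DisplayName, Status -AutoSize

# Start anything that didn't come up automatically.
Get-Service -DisplayName "VMware*" |
    Where-Object { $_.Status -ne 'Running' } |
    Start-Service
```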

User experience and expected behavior

It’s no exaggeration to say that this is an intense change-the-tires-while-doing-60-on-the-highway kind of operation, but in my testing of a 25-linked-clone environment there was no impact. Any existing desktop connections or new connections to existing desktops should see little or no disruption of service.

MICROSOFT 70-412: OBJECTIVE 2.2.3 – Perform access-denied remediation

If a user doesn’t have access to a network resource, a file server historically hasn’t given the most user-friendly response: an Access Denied message and an OK button. OK? No, this is not okay for the user, and we can do better.

One of the improvements in Server 2012 is Access-Denied Assistance. When users try to access a resource they don’t have permission to, they can receive a custom message that explains WHY they don’t have access as well as who to contact for further help… or even a Request Assistance button to save them from typing out an email.

This can be configured individually using File Server Resource Manager or centrally using Group Policy.

Setting Access-Denied Assistance with File Server Resource Manager

  1. Open up File Server Resource Manager, right-click on local (or connect to another server first) and select Configure Options.
  2. On the dialog that opens, select the Access-Denied Assistance tab on top:
  3. Check the box next to Enable access-denied assistance
  4. If desired, you can configure email requests by selecting the button toward the top:
  5. Notice the item Generate an event log entry for each email sent. This is checked by default, and we can use it to look for (and remediate) access issues.
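
If you’d rather script it, the FileServerResourceManager module (Server 2012+) exposes the same settings through the ADR cmdlets. A sketch; the message text is just an example:

```powershell
# Enable access-denied assistance with a custom message and email requests.
# [Original File Path] and [Admin Email] are the built-in message macros.
Set-FsrmAdrSetting -Event AccessDenied -Enabled:$true -AllowRequests:$true `
    -DisplayMessage "You don't have access to [Original File Path]. Contact [Admin Email] for help."

# Review the resulting configuration.
Get-FsrmAdrSetting -Event AccessDenied
```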

Setting up Access-Denied Assistance using Group Policy

  1. Open Group Policy Management. In Server Manager, click Tools, and then click Group Policy Management.
  2. Right-click the appropriate Group Policy, and then click Edit.
  3. Click Computer Configuration, click Policies, click Administrative Templates, click System, and then click Access-Denied Assistance.
  4. Right-click Customize message for Access Denied errors, and then click Edit.
  5. Select the Enabled option.
  6. Configure the following options:
    1. In the Display the following message to users who are denied access box, type a message that users will see when they are denied access to a file or folder.

      You can add variables to customize the text:

      • [Original File Path] The original file path that was accessed by the user.
      • [Original File Path Folder] The parent folder of the original file path that was accessed by the user.
      • [Admin Email] The administrator email recipient list.
      • [Data Owner Email] The data owner email recipient list.
    2. Select the Enable users to request assistance check box.

MICROSOFT 70-412: OBJECTIVE 2.2.2 – Implement Policy Changes and Staging

This section is a bit confusing, mostly because I don’t see the exact phrasing used anywhere in relation to Dynamic Access Control. So:

Not too sure what is being asked here. The only relevant thing I could find on TechNet was the below:
You must enable staged central access policy auditing to audit the effective access of central access policy by using proposed permissions. You configure this setting for the computer under Advanced Audit Policy Configuration in the Security Settings of a Group Policy Object (GPO). After you configure the security setting in the GPO, you can deploy the GPO to computers in your network.
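
For what it’s worth, the auditing it describes appears to map to the Central Policy Staging subcategory under Object Access. If my reading is right, you could enable it locally like this (a GPO under Advanced Audit Policy Configuration is the supported route):

```powershell
# Audit proposed (staged) central access policy permissions.
# Mismatches between current and proposed access are logged as event ID 4818.
auditpol /set /subcategory:"Central Policy Staging" /success:enable /failure:enable
```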

If you have any idea what’s being asked here, please let us all know in the comments!

MICROSOFT 70-412: OBJECTIVE 2.2.1 – Configure user and device claim types

A claim is a unique piece of information about a user, device, or resource that has been published by a domain controller. These are very often attributes that you find if you open the properties of an object in Active Directory: things like a user’s title, department, or location are claims that you can define, as are the department classification of a file or the health state of a computer. An entity can involve more than one claim, and any combination of claims can be used to authorize access to resources. The following types of claims are available in the supported versions of Windows:

  • User claims: Active Directory attributes that are associated with a specific user.
  • Device claims: Active Directory attributes that are associated with a specific computer object.
  • Resource attributes: Global resource properties that are marked for use in authorization decisions and published in Active Directory.

Claims make it possible for administrators to make precise organization- or enterprise-wide statements about users, devices, and resources that can be incorporated in expressions, rules, and policies.

Creating a Claim:

  1. Open up the Active Directory Administrative Center. Select Dynamic Access Control from list on left:


  2. Right-Click on Claim Types and select New:
  3. Select the attribute you want to use for the claim. If we keep the example used when I introduced Dynamic Access Controls, we should create a claim based on the department the user works in… Finance.
  4. To keep with the scenario, I’m going to add a claim for office location (Office) and the AD VDI container:
  5. ???
  6. Profit!

You would create claims to meet the business objectives for securing data- the actual attributes that you use to achieve that goal will likely be very different than what I’m using in this scenario, but I hope I’m showing you the power and flexibility afforded with setting up claims.

PowerShell:
The relevant PowerShell cmdlets for creating, reading, modifying, and deleting claim types are the ADClaimType family:
New-ADClaimType, Get-ADClaimType, Set-ADClaimType, Remove-ADClaimType
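
A minimal sketch of creating and listing claim types, using the Finance-department example from this post (the attribute names here are from my lab; adjust to your schema):

```powershell
# Create a user claim type sourced from the AD 'department' attribute.
New-ADClaimType -DisplayName "Department" -SourceAttribute "department"

# Create a claim for office location as well, to match the scenario.
New-ADClaimType -DisplayName "Office" -SourceAttribute "physicalDeliveryOfficeName"

# List the claim types now published in the forest.
Get-ADClaimType -Filter * | Select-Object DisplayName, Enabled
```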

MICROSOFT 70-412: OBJECTIVE 2.2 – Dynamic Access Controls

Dynamic Access Control is the story of file access rules (called… access rules, believe it or not) based on user and device criteria (called claims).

These rules function as logical if-then statements built on the attributes of files, users, and devices. An example:
“IF a user is an employee in the finance department AND has an office at the main campus AND is connecting from a device that is located on the main campus, THEN s/he can access the Payroll directory.”

In order to lock down access with DAC in the above scenario, the administrator will need to set up claims for each of the objects, and a corresponding access rule on the Payroll folder.
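
To sketch what that looks like in PowerShell (the claim ID and the SDDL-style conditional ACE below are illustrative assumptions, not copy-paste production values):

```powershell
# Create a central access rule whose permission entry carries a conditional ACE:
# full access for authenticated users, but only when the user's Department claim
# is Finance. The claim ID (Department_MS) must match your deployed claim types.
New-ADCentralAccessRule -Name "Payroll Access" `
    -CurrentAcl 'O:SYG:SYD:AR(XA;;FA;;;AU;(@USER.Department_MS == "Finance"))'

# Wrap the rule in a central access policy that Group Policy can deploy.
New-ADCentralAccessPolicy -Name "Payroll Policy"
Add-ADCentralAccessPolicyMember -Identity "Payroll Policy" -Members "Payroll Access"
```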
Sub-Objectives:

1) Configure user and device claim types
2) Implement policy changes and staging
3) Perform access-denied remediation
4) Configure file classification
5) Create and configure Central Access rules and policies
6) Create and configure resource properties and lists

Microsoft 70-412: Objective 2.1 – Configure Advanced File Services

Hey everyone! I’m just getting over a few days of being pretty sick, so I apologize for the delay in getting the next post of the series out to you. The content in this post was pretty deep, so it was a good post to get back in the swing of things!

Table Of Contents

1) Configure Network File System (NFS) data store
2) Configure BranchCache
3) Configure File Classification Infrastructure (FCI) using File Server Resource Manager (FSRM)
4) Configure file access auditing

(more…)

Microsoft 70-412: Objective 1.4 – Manage Virtual Machine Movement

Hooray, the last post in section 1! I hope this series is helping you study as much as it’s helping me!
This post deals with Objective 1.4, which handles some common virtual machine operations:

Table of Contents:
Perform live migration
Perform quick migration
Perform storage migration
Import, Export, and Copy VMs
Configure VM network health protection
Configure drain on shutdown

(more…)

MICROSOFT 70-412: OBJECTIVE 1.3 Manage Failover Clustering Roles


It’s Friday, and there isn’t a terribly large amount of content for this portion of Failover Clusters. This post may be the smallest of the series, but time will tell!

 

Table of Contents:
1) Configure role-specific settings, including continuously available shares
2) Configure virtual machine (VM) monitoring
3) Configure failover and preference settings
4) Configure guest clustering

(more…)

Microsoft 70-412: Objective 1.2 Configure Failover Clustering


A failover cluster is a group of independent servers that run a highly available service or application (called a clustered role). If one or more nodes fail, the other nodes begin to provide the services in their place. There is service reliability as well: if a clustered role becomes unresponsive for any reason, it can be restarted or brought up on another node.

Unlike the Network Load Balancing feature, a Windows failover cluster is designed to provide true high availability to mission-critical applications. There are important differences between NLB clusters and failover clusters: where the nodes in an NLB cluster all run the same application with the load balanced between them, a Windows failover cluster has only one server running the role, with the remaining cluster members waiting to take over if needed.

Additionally, failover clusters introduce shared storage amongst the cluster nodes, which is ideal for application and data consistency. Although not limited to these roles, you will traditionally find Windows failover clusters protecting database servers, mail servers, and file servers.

Looking over the Exam objectives, I’m somewhat surprised that the exam (allegedly) doesn’t include the initial set up of a Failover Cluster. I’m including a full walkthrough as an addendum.

Table of Contents

1. Configure quorum
2. Configure cluster networking
3. Configure cluster storage
4. Configure storage spaces
5. Configure and optimize clustered shared volumes
6. Implement Cluster-Aware Updating
7. Configure clusters without network names
8. Upgrade a cluster
9. Restore single node or cluster configuration

Addendum: Full installation
Addendum: Powershell cmdlets for Failover Clusters

(more…)

Microsoft 70-412: Objective 1.1 Configure Network Load Balancing

Configure Network Load Balancing (NLB)

Network Load Balancing (NLB) is an HA feature that allows a group of servers to appear as one server to external clients. The server group bound through NLB is usually referred to as an NLB cluster or server farm, and each individual server in the cluster is called a host or node. Network Load Balancing improves both the availability and scalability of a service that runs on all the individual nodes.

NLB improves availability by absorbing individual server failures: it detects unresponsive, disconnected, or dead servers and sends new client requests to the remaining functional hosts. NLB supports scalability because a group of servers in aggregate can handle more traffic than any one server. As the demand for a service such as IIS grows, more nodes can be added to accommodate the increased workload.

Important Note: Each client is sent to an individual node in the cluster upon connection. This means that NLB clusters don’t aggregate resources together, they just facilitate the initial client connection to a server. A different clustering technology should be used for stateful applications such as database servers because data updates and changes would result in a different experience if the client next connects to a different node.

I’m changing the order of items in the blueprint for a more logical learning flow.


TABLE OF CONTENTS

1.1 Configure NLB prerequisites
1.2 Install NLB nodes
1.3 Create new NLB Cluster
1.4 Configure cluster operation mode
1.5 Configure Affinity
1.6 Upgrade an NLB Cluster

 

1) Configure NLB prerequisites

    • At least one network adapter for load balancing. Preferably two, to separate NLB and normal network traffic.
    • Static IP addresses.
    • Only TCP/IP used on the adapter for which NLB is enabled. Do not add any other protocols (for example, IPX) to the NLB adapter.
    • All hosts in the NLB cluster must reside on the same subnet.

Back to Table of Contents

 

2) Install NLB Nodes

    • Open Server Manager and go to “Add Roles and Features”

      Open Server Manager and select Add Roles and Features

    • In the Add Roles and Features Wizard select “Role-based or feature-based installation”

      In the Add Roles and Features Wizard select “Role-based or feature-based installation”

    • On the Server Selection screen, select the local server:

      On the Server Selection screen, select the local server

    • Skip over Server Roles since NLB is a Windows Feature. On the Feature Selection page, check the box for Network Load Balancing:

      On the Feature Selection page, check the box for Network Load Balancing

    • Click “Add Features” on the screen that follows

      Click “Add Features” on this screen

    • Repeat for all of the nodes that will participate in the NLB cluster
    • You can also install via Elevated PowerShell: PS> Install-WindowsFeature NLB -IncludeManagementTools

Back to Table of Contents

 

3) Create new NLB Cluster

  • Now that the NLB feature is installed, launch the Network Load Balancing Manager.

    Open NLB Manager

  • Right-click the item ‘Network Load Balancing Clusters’ and select ‘New Cluster’:

    Create new NLB Cluster

  • The next screen asks for the name or IP of the first node in your new NLB cluster. In this example, I’ve entered localhost. Select the correct interface for NLB traffic and select ‘Next’:
    New Cluster Screen 1

    PRODUCTION TIP: I doubt this will be on the test, but best practices would have at least 2 interfaces for each cluster node: One for NLB traffic, and the other for standard network traffic. Also be sure that you’ve met all the NLB Prerequisites.

  • The next screen shows host parameters:
    Manage node parameters

    There are three important configuration options on this page that I’ll walk you through:

    Priority (unique host identifier) drop-down: A value from 1-32 that must be unique per host. The priority setting essentially determines the order in which hosts handle non-load-balanced network traffic: if the host with priority 1 is unavailable, the host with the next numeric value handles that kind of traffic.

    Dedicated IP Address: You can modify the host’s IP address from this screen; in practice, I’ve only done this if the interface used for NLB traffic has more than one IP address assigned to it. Keep in mind that this is the IP address for the HOST, not the cluster IP address.

    Initial Host State: Started, Suspended, or Stopped. This setting determines the NLB status for this node. Default is started.

  • The next screen is the beginning of the cluster configuration options. On this page, click ‘Add’ to configure a cluster IP address. The address you choose will be the virtual IP address used to connect to the whole NLB cluster. The IP address must be on the same logical subnet as the host IP address(es) chosen on the previous wizard page.

    Click the Add button and configure the NLB VIP.

  • The next screen configures the cluster IP/DNS settings as well as the cluster mode. It’s also the next sub-objective:
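
The wizard steps above can also be done with the NetworkLoadBalancingClusters module. A minimal sketch; the interface name, addresses, and node name are placeholders:

```powershell
# Create a new NLB cluster on the local node's "Ethernet" interface.
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "web.contoso.com" `
    -ClusterPrimaryIP 192.168.1.100 -SubnetMask 255.255.255.0 -OperationMode Unicast

# Join a second node to the cluster.
Get-NlbCluster | Add-NlbClusterNode -NewNodeName "WEB02" -NewNodeInterface "Ethernet"
```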

Back to Table of Contents

 

4) Configure cluster operation mode

  • Configure the cluster IP/DNS settings as well as the cluster mode.

Cluster IP Configuration: A relatively straightforward section: verify that the listed IP address is the virtual IP that you want used for the NLB, and add an FQDN (fully qualified domain name) for the cluster. Register the FQDN with the DNS server of your choosing; that part is outside the scope of this walkthrough.

  • Cluster operation mode: Here, you set the operation of the NLB with radio buttons.
    • Unicast: This is the default. The NLB cluster’s virtual MAC address replaces the MAC address on each individual host’s NLB NIC. Some fanciness happens to all of the outgoing network packets to prevent upstream switches from discovering that all of the cluster nodes functionally have the same MAC address. Unicast mode requires a second NIC for communication between cluster nodes.
      • In practice, Unicast mode has a few disadvantages:
        • Requires 2 NICs: one for NLB traffic and one for peer communications.
        • Incoming NLB packets are sent to all the ports on the switch, possibly causing switch flooding.
        • Due to the switch flooding, VMware and other hypervisor vendors recommend Multicast mode. Single VM migration is not supported.
    • Multicast: Each host keeps its individual hardware MAC address; the cluster MAC address is assigned to all adapters and used as a multicast address, with each host translating it to the local NIC MAC. Local communication is not affected because each host retains a unique hardware address.
      • Multicast has a few disadvantages as well:
        • Upstream routers will require a static ARP entry. This is because this cluster mode resolves a unicast IP with a multicast MAC address.
        • Without IGMP, switches may need additional configuration for sending multicast traffic to the appropriate switch ports
        • Some older switches and routers do not support mapping unicast IP to multicast MAC. In these situations the hardware will need to be replaced to use Multicast NLB.
    • IGMP Multicast: Similar to standard multicast, but it allows compatible switches to examine the contents of the multicast packets to control switch flooding, a method called IGMP snooping.
      • Everything has trade-offs, and IGMP Multicast is no different:
        • Requires more complicated upstream switch configuration and the enabling of multicast routing.

Back to Table of Contents

5) Configure affinity

  • Configure port rules and affinity

Port rules define what traffic will be handled by the NLB, and how it will be load balanced. Port rule definitions match incoming traffic by a range of destination TCP or UDP ports and possibly a destination IP address. Only one rule can be applied to incoming traffic, so creating a rule conflict isn’t possible.

The default rule shown above is usually okay for production; basically, it load balances all traffic… but I imagine the exam will require more granular controls. Let’s create a new port rule:

  • Configuring port rules and cluster affinity settings

Let’s go over the options:

  • Cluster IP address: By default new port rules match all of the NLB Cluster’s IP addresses, but if your cluster has multiple IP addresses assigned, you can limit a rule to a specific IP address here.
  • Port Range and Protocols: Pretty self-explanatory. To have a rule for a specific port number, set the From and To to the same number (For example, From: 443 To: 443). Select TCP, UDP, or Both to handle the protocol used for communication. The ranges you define cannot overlap existing rules.
  • Filtering Mode: This portion defines how incoming traffic is divided up among the cluster nodes. Why is it called Filtering instead of something more explanatory? Good question, I say.
    The Multiple Host filtering mode is the default and has additional Affinity requirements and optional Timeout settings.

    • Affinity settings: Affinity affects how client interaction with the NLB cluster is handled, specifically around session state
      • None: Multiple requests from the same client can access any of the NLB nodes
        • With no affinity, load should be balanced fairly evenly across the nodes, which provides the best performance. The services being load balanced must be stateless, or subsequent connections will land on other nodes and give unpredictable results because the session data isn’t present.
      • Single: Multiple requests from the same client must be handled by the same NLB node – “Sticky Sessions”.
        • With Single Affinity, once a client establishes a connection to a cluster node, subsequent connections will go to the same node. Because of this, client state is maintained across multiple TCP connections.
      • Network: Multiple requests from the same TCP/IP address range must access the same node- Usually used for internet facing clusters.
        • Network affinity is the same as Single affinity, but applied to a network range instead of an individual client.
  • The Single Host filtering mode directs all traffic to the host with the highest priority. If that host fails, the traffic is directed to the next lowest priority host.
  • The Disable this Port Range setting forces the traffic in the range to be dropped. The Timeout setting (applicable to Multiple Host filtering) is used to protect clients from changes to the NLB settings during a session. If a client connects to the NLB while it’s configured for Multiple Host filtering with Single affinity and a timeout of 15 minutes, a change to Multiple Host filtering with No affinity will not affect them until the timeout is reached.

    When editing an existing rule on an individual node, you get a slightly different screen:

    Editing an existing rule on an individual node

    This screen introduces Load Weight and Handling Priority.

  • Load Weight: The default setting is Equal, but modifying this changes the distribution of the port rule traffic: assign a weight greater or less than 50 to take more or less than an equal share, respectively.
  • Handling Priority: Only available in Single Host filtering. This is the order that port range traffic is sent to cluster nodes. If there is no value here, check the cluster settings for this host.
  • IMPORTANT: I’ve seen practice questions that play on the similarities of Handling Priority and Host Priority. Remember that Handling priority only applies to Single Host filtering.
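
Port rules can be scripted as well; for example, replacing the catch-all default with an HTTPS-only rule using Single affinity (sticky sessions). A sketch with illustrative values:

```powershell
# Drop the default rule that balances all ports.
Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force

# Balance only TCP 443, pinning each client to one node.
Add-NlbClusterPortRule -StartPort 443 -EndPort 443 -Protocol TCP -Affinity Single
```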

 

6) Upgrade an NLB cluster

I’m going to make an assumption here, since there isn’t an “NLB version” or anything similar. The assumption is that this means upgrading an NLB cluster that was configured on a previous Windows Server version to cluster nodes running Windows Server 2012 R2.

I think you have two options to accomplish that: a disruptive upgrade (taking the cluster down, upgrading the hosts, and then building a new cluster) and a less disruptive rolling upgrade.

  • Disruptive:
    • Take the cluster offline and upgrade each host one by one to Server 2012 R2. Once complete, connect the upgraded hosts to the cluster. Naturally, the cluster cannot service connections during this operation.
  • Rolling Upgrade:
    • Leave the cluster online and drain each node of existing connections. A Drainstop (right-click a node in the cluster and select Drainstop in the Control Hosts menu) also refuses new connections, so use it wisely.

      Upgrade hosts – using Drainstop

    • Upgrade the host to Server 2012 R2.
    • Click ‘Start’ on the node after the upgrade is complete.
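
The drain-and-restart cycle can be driven from PowerShell, which helps when scripting the rolling upgrade across several nodes (the host name and timeout are placeholders):

```powershell
# Drain existing connections from a node; new connections are refused.
# -Timeout is in minutes; after it expires the node is stopped regardless.
Stop-NlbClusterNode -HostName "WEB01" -Drain -Timeout 10

# After the OS upgrade completes, bring the node back into rotation.
Start-NlbClusterNode -HostName "WEB01"
```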

Back to Table of Contents