Month: September 2014

Microsoft 70-412: Objective 1.4 – Manage Virtual Machine Movement

Hooray, the last post in section 1! I hope this series is helping you study as much as it’s helping me!
This post deals with Objective 1.4, which handles some common Virtual machine operations:

Table of Contents:
Perform live migration
Perform quick migration
Perform storage migration
Import, Export, and Copy VMs
Configure VM network health protection
Configure drain on shutdown


Microsoft 70-412: Objective 1.3 Manage Failover Clustering Roles

MCSA 70-412: 1.3 Manage Failover Clustering Roles

It’s Friday, and there isn’t a terribly large amount of content for this portion of Failover Clusters. This post may be the smallest of the series, but time will tell!

 

Table of Contents:
1) Configure role-specific settings, including continuously available shares
2) Configure virtual machine (VM) monitoring
3) Configure failover and preference settings
4) Configure guest clustering


Microsoft 70-412: Objective 1.2 Configure Failover Clustering

MCSA 70-412: 1.2 Configure Failover Clustering

A failover cluster is a group of independent servers that run a highly available service or application (called a clustered role). If one or more nodes fail, the remaining nodes begin to provide the services in their place. There is service reliability as well: if a clustered role becomes unresponsive for any reason, it can be restarted or brought up on another node.

Unlike the Network Load Balancing feature, a Windows failover cluster is designed to provide true high availability for mission-critical applications. There are important differences between NLB clusters and failover clusters: the nodes in an NLB cluster all run the same application with load balanced between them, while a Windows failover cluster has only one server running the role, with the remaining cluster members waiting to take over if needed.

Additionally, failover clusters introduce shared storage amongst the cluster nodes- this is ideal for application and data consistency. Although not limited to these roles, you will traditionally find Windows failover clusters protecting database servers, mail servers, and file servers.

Looking over the exam objectives, I’m somewhat surprised that the exam (allegedly) doesn’t include the initial setup of a failover cluster. I’m including a full walkthrough as an addendum.

Table of Contents

1. Configure quorum
2. Configure cluster networking
3. Configure cluster storage
4. Configure storage spaces
5. Configure and optimize clustered shared volumes
6. Implement Cluster-Aware Updating
7. Configure clusters without network names
8. Upgrade a cluster
9. Restore single node or cluster configuration

Addendum: Full installation
Addendum: PowerShell cmdlets for Failover Clusters


Microsoft 70-412: Objective 1.1 Configure Network Load Balancing

Configure Network Load Balancing (NLB)

Network Load Balancing (NLB) is an HA feature that allows a group of servers to appear as one server to external clients. The server group bound through NLB is usually referred to as an NLB cluster or server farm, and each individual server in the cluster is called a host or node. Network Load Balancing improves both the availability and scalability of a service that runs on all the individual nodes.

NLB improves availability by absorbing individual server failures- NLB detects unresponsive, disconnected, or dead servers and sends new client requests to the remaining functional hosts. NLB supports scalability because a group of servers in aggregate can handle more traffic than any one server alone. As the demand for a service such as IIS grows, more nodes can be added to accommodate the increased workload.

Important Note: Each client is sent to an individual node in the cluster upon connection. This means that NLB clusters don’t aggregate resources together; they just direct each initial client connection to a single server. A different clustering technology should be used for stateful applications such as database servers, because a client that next connects to a different node would get a different experience- its data updates and changes wouldn’t be present.

I’m changing the order of items in the blueprint for a more logical learning flow.


TABLE OF CONTENTS

1) Configure NLB prerequisites
2) Install NLB nodes
3) Create new NLB Cluster
4) Configure cluster operation mode
5) Configure affinity
6) Upgrade an NLB cluster

 

1) Configure NLB prerequisites

    • At least one network adapter for load balancing; preferably two, to separate NLB and normal network traffic.
    • Static IP addresses.
    • Only TCP/IP used on the adapter for which NLB is enabled. Do not add any other protocols (for example, IPX) to the NLB adapter.
    • All hosts in the NLB cluster must reside on the same subnet.

Back to Table of Contents

 

2) Install NLB Nodes

    • Open Server Manager and go to “Add Roles and Features”


    • In the Add Roles and Features Wizard select “Role-based or Feature-based Installation”

    • On the Server Selection screen, select the local server:


    • Skip over Server Roles since NLB is a Windows Feature. On the Feature Selection page, check the box for Network Load Balancing:


    • Click “Add Features” on the screen that follows


    • Repeat for all of the nodes that will participate in the NLB cluster
    • You can also install via Elevated PowerShell: PS> Install-WindowsFeature NLB -IncludeManagementTools
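If you have several nodes to build, the same feature install can be pushed out remotely. This is just a sketch- it assumes PowerShell remoting is enabled, and the node names are placeholders for your own servers:

```powershell
# Hypothetical node names- replace with your own servers.
$nodes = 'NLB-NODE1', 'NLB-NODE2'

# Install the NLB feature and management tools on each node at once.
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature NLB -IncludeManagementTools
}
```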

Back to Table of Contents

 

3) Create new NLB Cluster

  • Now that the NLB feature is installed, launch the Network Load Balancing Manager.


  • Right-click the item ‘Network Load Balancing Clusters’ and select ‘New Cluster’:


  • The next screen asks for the name or IP of the first node in your new NLB cluster. In this example, I’ve entered localhost. Select the correct interface for NLB traffic and select ‘Next’:

    PRODUCTION TIP: I doubt this will be on the test, but best practices would have at least 2 interfaces for each cluster node: One for NLB traffic, and the other for standard network traffic. Also be sure that you’ve met all the NLB Prerequisites.

  • The next screen shows host parameters:

    There are three important configuration options on this page that I’ll walk you through:

    Priority (unique host identifier) drop-down: A value from 1-32 that must be unique per host. The priority setting essentially determines the order in which hosts handle non-load-balanced network traffic- if the host with priority 1 is unavailable, then the host with the next numeric value handles that kind of traffic.

    Dedicated IP Address: You can modify the host’s IP address from this screen- in practice, I’ve only done this if the interface used for NLB traffic has more than one IP address assigned to it. Keep in mind that this is the IP address for the HOST, not the cluster IP address.

    Initial Host State: Started, Suspended, or Stopped. This setting determines the NLB status for this node. Default is started.

  • The next screen is the beginning of the cluster configuration options. On this page, click ‘Add’ to configure a cluster IP address. The address you choose will be the Virtual IP address used to connect to the whole NLB cluster. The IP address must be on the same logical subnet as the host IP address(es) chosen on the previous wizard page.

    Click the Add button and configure the NLB VIP.


  • The next screen configures the cluster IP/DNS settings as well as the cluster mode. It’s also the next sub-objective:

Back to Table of Contents

 

4) Configure cluster operation mode

  • Configure the cluster IP/DNS settings as well as the cluster mode.

Cluster IP Configuration: A relatively straightforward section: verify that the listed IP address is the virtual IP that you want used for the NLB, and add an FQDN (fully-qualified domain name) for the cluster. Register the FQDN with the DNS server of your choosing- that part is outside the scope of this walkthrough.

  • Cluster operation mode: Here, you set the operation of the NLB with radio buttons.
    • Unicast: This is the default. The NLB cluster’s virtual MAC address replaces the MAC address on each individual host’s NLB NIC. Outgoing network packets are also manipulated to prevent upstream switches from discovering that all of the cluster nodes functionally have the same MAC address. Unicast mode requires a second NIC for communication between cluster nodes.
      • In practice, Unicast mode has a few disadvantages:
        • Requires 2 NICs- one for NLB traffic and one for peer communications.
        • Incoming NLB packets are sent to all the ports on the switch, possibly causing switch flooding.
        • Due to switch flooding, VMware and other hypervisor vendors recommend Multicast mode; migration of a single NLB VM is also not supported.
    • Multicast: Each host keeps its individual hardware MAC address- the cluster MAC address is assigned to all adapters and used as a multicast address, with each host translating it to the local NIC MAC. Local communication is not affected because each host retains a unique hardware address.
      • Multicast has a few disadvantages as well:
        • Upstream routers will require a static ARP entry, because this cluster mode resolves a unicast IP address to a multicast MAC address.
        • Without IGMP, switches may need additional configuration to send multicast traffic to the appropriate switch ports.
        • Some older switches and routers do not support mapping a unicast IP to a multicast MAC. In these situations the hardware will need to be replaced to use Multicast NLB.
    • IGMP Multicast: Similar to standard multicast, but it allows compatible switches to examine the contents of the multicast packets to control switch flooding- a method called IGMP snooping.
      • Everything has trade-offs, and IGMP Multicast is no different:
        • Requires more complicated upstream switch configuration and enabling of multicast routing.

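For reference, cluster creation (section 3) and the operation mode can also be handled with the NetworkLoadBalancingClusters PowerShell module. A minimal sketch- the interface name, cluster name, and IP addresses below are placeholders, not values from this walkthrough:

```powershell
Import-Module NetworkLoadBalancingClusters

# Create the cluster on this node's NLB interface (placeholder values throughout).
New-NlbCluster -InterfaceName 'Ethernet' -ClusterName 'nlb.contoso.local' `
    -ClusterPrimaryIP 192.168.1.100 -SubnetMask 255.255.255.0 `
    -OperationMode Multicast

# Join a second node to the new cluster.
Get-NlbCluster | Add-NlbClusterNode -NewNodeName 'NLB-NODE2' -NewNodeInterface 'Ethernet'
```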
Back to Table of Contents

5) Configure affinity

  • Configure port rules and affinity

Port rules define what traffic will be handled by the NLB, and how it will be load balanced. Port rule definitions match incoming traffic by a range of destination TCP or UDP ports and possibly a destination IP address. Only one rule can be applied to incoming traffic, so creating a rule conflict isn’t possible.

The default rule shown above is usually okay for production- it simply load balances all traffic… but I imagine the exam will require more granular controls. Let’s create a new port rule:

  • Configuring port rules and cluster affinity settings

Let’s go over the options:

  • Cluster IP address: By default new port rules match all of the NLB Cluster’s IP addresses, but if your cluster has multiple IP addresses assigned, you can limit a rule to a specific IP address here.
  • Port Range and Protocols: Pretty self-explanatory. To have a rule for a specific port number, set the From and To to the same number (For example, From: 443 To: 443). Select TCP, UDP, or Both to handle the protocol used for communication. The ranges you define cannot overlap existing rules.
  • Filtering Mode: This portion defines how incoming traffic is divided up among cluster nodes. Why is it called Filtering instead of something more explanatory? Good question, I say.
    The Multiple Host filtering mode is the default and has additional Affinity requirements and optional Timeout settings.

    • Affinity settings: Affinity affects how client interaction with the NLB cluster is handled, specifically around session state
      • None: Multiple requests from the same client can access any of the NLB nodes
        • With no affinity, requests are balanced fairly evenly across the nodes, which provides the best performance. The services being load balanced must be stateless, or subsequent connections made to other nodes will give unpredictable results because the session data isn’t present.
      • Single: Multiple requests from the same client must be handled by the same NLB node – “Sticky Sessions”.
        • With Single Affinity, once a client establishes a connection to a cluster node, subsequent connections will go to the same node. Because of this, client state is maintained across multiple TCP connections.
      • Network: Multiple requests from the same TCP/IP address range must access the same node- usually used for internet-facing clusters.
        • Network affinity is the same as Single affinity, but applied to a network range instead of an individual client.
  • The Single Host filtering mode directs all traffic to the host with the highest priority. If that host fails, the traffic is directed to the next lowest priority host.
  • The Disable this Port Range setting will force the traffic in the range to be dropped.
  • The Timeout setting (applicable to Multiple Host filtering) is used to protect clients from changes to the NLB settings during a session. If a client connects to the NLB while it is configured for Multiple Host filtering with Single affinity and a timeout of 15 minutes, a change to Multiple Host filtering with No affinity will not affect that client until the timeout is reached.

    When editing an existing rule on an individual node, you get a slightly different screen:

    Editing existing rule on an individual node


    This screen introduces Load Weight and Handling Priority.

  • Load Weight: The default setting is Equal, but modifying this changes the distribution of the port rule traffic- assign a weight greater or less than 50 and that node takes more or less than an equal share, respectively.
  • Handling Priority: Only available in Single Host filtering. This is the order that port range traffic is sent to cluster nodes. If there is no value here, check the cluster settings for this host.
  • IMPORTANT: I’ve seen practice questions that play on the similarities of Handling Priority and Host Priority. Remember that Handling priority only applies to Single Host filtering.
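Port rules can also be created from PowerShell. A hedged sketch of a rule for HTTPS with Multiple Host filtering and Single affinity- the interface name is a placeholder:

```powershell
# Create a rule for port 443 only, load balanced across all nodes with sticky sessions.
Add-NlbClusterPortRule -InterfaceName 'Ethernet' -StartPort 443 -EndPort 443 `
    -Protocol TCP -Mode Multiple -Affinity Single
```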

 

6) Upgrade an NLB cluster

I’m going to make an assumption here, since there isn’t an “NLB version” or anything similar. The assumption is upgrading an NLB cluster that was configured on a previous Windows Server version to cluster nodes running Windows Server 2012 R2.

I think you have two options to accomplish that: a disruptive upgrade (taking the cluster down, upgrading the hosts, and then building a new cluster) and a less disruptive rolling upgrade.

  • Disruptive:
    • Take the cluster offline, and upgrade each host one-by-one to Server 2012 R2. Once complete, connect the upgraded hosts to the cluster. Naturally the cluster cannot service connections during this operation.
  • Rolling Upgrade:
    • Leave the cluster online and drain each node of existing connections. A Drainstop (right-click a node in the cluster and select Drainstop in the Control Hosts menu) will also refuse new connections, so use it wisely.

      Upgrade hosts - using Drainstop


    • Upgrade the host to Server 2012 R2.
    • Click ‘Start’ on the node after the upgrade is complete.
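The Drainstop/Start cycle can be scripted as well- a sketch assuming the NLB cmdlets are available and the node name is a placeholder:

```powershell
# Drain existing connections on the node (refuses new connections while draining).
# -Timeout is in minutes; the node stops once drained or when the timeout expires.
Stop-NlbClusterNode -HostName 'NLB-NODE1' -Drain -Timeout 20

# ...upgrade the host to Server 2012 R2, then bring it back into rotation...
Start-NlbClusterNode -HostName 'NLB-NODE1'
```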

Back to Table of Contents

Microsoft 70-412

Hey All!

This is the initial post for my Certification Guide series on the Microsoft 70-412 exam.
I’ve already completed the 70-410 and 70-411, but I may need to go back and post an exam guide on those exams to complete the series.

The 70-410, 70-411, and 70-412 exams make up the MCSA: Server Infrastructure certification.

I have no current plans to complete the MCSE (Adding the 70-413 and 70-414) at this time due to other certifications I have scheduled.

The blueprint identifies the below categories and weights:

Configure and manage high availability (15–20%)
Configure file and storage solutions (15–20%)
Implement business continuity and disaster recovery (15–20%)
Configure Network Services (15–20%)
Configure the Active Directory infrastructure (15–20%)
Configure Identity and Access Solutions (15–20%)

Those categories will turn into blue links as soon as I write the category guide.

Edit VM Hardware V10 VMs

Hey all!

There’s an updated version of vSphere out: vSphere 5.5 Update 2.

  • Hosts can have 6TB of RAM installed
  • Microsoft SQL 2012 sp1 and 2014 support
  • Drops web client support for Windows XP and Vista
  • Drops IBM DB2 as a supported database for vCenter
  • Other improvements and bug fixes

Also improved, though not necessarily trumpeted in the announcement: a new and improved C# Client, complete with 100+ bug fixes and the ability to edit Virtual Hardware Version 10 machines!

I just tried it out on a VM in my lab, and I get this dialog:

This is a huge step forward for the C# Client! Even if you can’t use it to configure advanced HW10 features, it can be used to increase the number of vCPUs, memory, disk space… all of the important stuff.

The rumor mill can also be heard saying that the C# client will gain full functionality in the coming year.

Important note: I didn’t need to upgrade my lab to the latest update to edit HW version 10- I was able to just use the new C# client available on the installation media.

A Guide to Migrating VMware 5.1 Databases from SQL Express to SQL

Hey All!

I had a somewhat messier database migration at my most recent site, and it made me do a bunch of research that would make sense to share here. Most of this information came from KBs or scattered across the Internet… so welcome to your one-stop-shop for how to migrate VMware databases, and what I had to do when things went wrong.

I had to migrate a pair of environments today. One of them was a View install that has more moving parts, so I’ll illustrate that here.

 

Getting Started / SQL Pre-Reqs

I’m not going to fill up this blog post with how to set up SQL or SQL best practices- I assume that you know that already; I included links just in case. What I WILL include are things that are needed after SQL is set up:

  1. Open port TCP 1433 on any Firewall program running on the machine.
  2. Set ‘Maximum Server Memory‘ (SQL Memory Max) to something sane for your environment.
  3. Open SQL Configuration Manager and expand SQL Network Configuration. Make sure that TCP/IP is enabled. Disable Dynamic Ports.
  4. Good. Now create a new SQL user account– I used VMwareUser.
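Steps 1 and 4 can be scripted if you prefer. A rough sketch- it assumes the SQL PowerShell module (for Invoke-Sqlcmd) is available, and the password is obviously a placeholder:

```powershell
# 1) Open TCP 1433 in Windows Firewall.
netsh advfirewall firewall add rule name="SQL Server 1433" dir=in action=allow protocol=TCP localport=1433

# 4) Create the SQL login used by the VMware components (placeholder password).
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
CREATE LOGIN VMwareUser WITH PASSWORD = 'PlaceholderP@ssw0rd!';
"@
```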

 

View Composer

I started with the View Composer database because it didn’t have a strong dependency on the other components- The other good reason is that VDI administrators tend to be different than your vCenter administrators and may have different availability windows.

  1. Go through each pool and disable any refit operations that occur on logoff.
    1. This is mostly a safety thing. I want to be sure that there are no desktop operations running until I say so.
  2. Disable the View Composer service
  3. Create a backup of the View Composer database using a File Backup in SQL Express.
  4. Copy the database backup to network storage or a local drive on the new SQL server.
  5. Create a new shell database on the shiny SQL server. Call it something nice- this name will only be seen by you or the DBA team.
  6. Right click the new database, go to ‘Tasks’ and select ‘Restore Database‘. Select the backup file, and on the Options tab select ‘OVERWRITE’.
  7. Make VMware User dbo of the restored Composer database.
  8. Back on the server running View Composer, edit the Composer DSN. This is a 64-Bit DSN, so Administrative Tools > Data Sources (ODBC).
  9. Modify the SVIWebConfig. Sorry :-/
  10. Start up View Composer. If it starts without error, re-enable refit operations on the pool.
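The backup/restore pattern in steps 3-6 repeats for every database in this post, so here’s a hedged T-SQL sketch driven from PowerShell- all server, database, and file names are placeholders, and a restore to a different file layout may also need WITH MOVE:

```powershell
# On the SQL Express instance: back up the Composer database to a file (placeholder names).
Invoke-Sqlcmd -ServerInstance 'OLDHOST\SQLEXPRESS' -Query @"
BACKUP DATABASE [SVI] TO DISK = N'C:\Temp\SVI.bak';
"@

# After copying the .bak to the new server: restore OVER the shell database.
Invoke-Sqlcmd -ServerInstance 'NEWSQL' -Query @"
RESTORE DATABASE [ViewComposer] FROM DISK = N'C:\Temp\SVI.bak' WITH REPLACE;
"@
```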

 

Update Manager

There’s a strong argument to start Update Manager fresh instead of migrating old information- in my case, the customer wanted to maintain some custom baselines… so migrate away!

  1. Disable the VMware Update Manager service.
  2. Create a backup of the VIM_UMDB database using a File Backup in SQL Express.
  3. Copy the database backup to network storage or a local drive on the new SQL server.
  4. Create a new shell database on the shiny SQL server. Call it something nice- this name will only be seen by you or the DBA team.
  5. Right click the new database, go to ‘Tasks’ and select ‘Restore Database‘. Select the backup file, and on the Options tab select ‘OVERWRITE’.
  6. Make VMware User dbo of the restored VUM database.
  7. Back on the server running Update Manager, edit the Update Manager DSN. This is a 32-Bit DSN, so c:\Windows\SysWOW64\odbcad32.exe
  8. Edit the vci-integrity.xml file to reflect the new database information. I’m really sorry about this.
  9. Reconfigure VUM using the VMware Update Manager Configuration Utility 
    1. Modify the Database settings
    2. Re-Register with vCenter.

 

vCenter Database

Here’s the big one. Make sure you have a window of time for this one to be down that extends for both vCenter and SSO. SSO shouldn’t cause an outage, but if there are any configuration issues vCenter won’t be able to start… better to be safe and schedule a longer outage window than required. While vCenter is offline, no administrators will be able to get in and run the environment, no power operations will occur for VDI desktops, and DRS won’t work (among other things).

  1. Disable the VMware vCenter service.
  2. Create a backup of the VIM_VCDB database using a File Backup in SQL Express.
  3. Copy the database backup to network storage or a local drive on the new SQL server.
  4. Create a new shell database on the shiny SQL server. Call it something nice- this name will only be seen by you or the DBA team.
  5. Right click the new database, go to ‘Tasks’ and select ‘Restore Database‘. Select the backup file, and on the Options tab select ‘OVERWRITE’.
  6. Make VMware User dbo of the restored vCenter database.
  7. Open the Registry Editor. I’m really sorry about this too.
    1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VirtualCenter.
      1. Ensure that HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VirtualCenter\DB\1 contains the correct DSN.
      2. Edit HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VirtualCenter\DB\2 to the SQL username ‘VMwareUser’.
      3. Ensure that HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VirtualCenter\DB\4 has the right SQL driver.
      4. Edit HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VirtualCenter\DbInstanceName and clear it (don’t delete the value, though!)
      5. Edit HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VirtualCenter\DbServerType and change the value to Custom.
      6. Open an Administrative Command Prompt. CD to “C:\Program Files\VMware\Infrastructure\VirtualCenter Server” and run the command vpxd.exe -p
        1. Enter password information when requested.
  8. Recreate the SQL Rollup Jobs.
  9. Open another configuration file in notepad: C:\ProgramData\VMware\VMware VirtualCenter\vcdb.properties
    1. Put a hash mark (#) in front of every line in this file EXCEPT usevcdb=true to comment it out.
      1. NOTE: The file could be modified to contain correct information, but the above method seems to work fine as well. To each their own.
  10. In the same directory, open database_name.properties (named after your vCenter database) in Notepad. Verify that the Tomcat information is correct.
  11. Attempt to start the vCenter Service.
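If you’d rather not click through regedit, the edits in step 7 can be sketched in PowerShell- the DSN name and username are placeholders, and you should verify each value against the KB for your build:

```powershell
$vc = 'HKLM:\SOFTWARE\VMware, Inc.\VMware VirtualCenter'

# Point the DSN and SQL username values at the new database (placeholder values).
Set-ItemProperty -Path "$vc\DB" -Name '1' -Value 'vCenter DSN'
Set-ItemProperty -Path "$vc\DB" -Name '2' -Value 'VMwareUser'

# Clear the instance name (but keep the value) and mark the server type as Custom.
Set-ItemProperty -Path $vc -Name 'DbInstanceName' -Value ''
Set-ItemProperty -Path $vc -Name 'DbServerType' -Value 'Custom'
```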

 

Single Sign On

Reinstall Single Sign-On. Just kidding- although migrating this component has made many an engineer pull their hair out. I had my own issues during this migration, and the vast number of suggestions I received ran along the lines of “It’s better to reinstall vSphere if SSO is having any issues”. I powered through, and now you can too!

  1. Back up the SSO configuration using the “Generate vCenter Single Sign-On backup bundle” link in the Start -> Programs menu on the SSO server.
  2. Disable the vCenter Single Sign-On service.
  3. Create a backup of the RSA database using a File Backup in SQL Express.
  4. Copy the database backup to network storage or a local drive on the new SQL server.
  5. Create a new shell database on the shiny SQL server. Call it something nice- this name will only be seen by you or the DBA team.
  6. Right click the new database, go to ‘Tasks’ and select ‘Restore Database‘. Select the backup file, and on the Options tab select ‘OVERWRITE’.
  7. Create new users (or verify that the users migrated during the restore process): RSA_USER and RSA_DBA.
  8. Check whether the migrated RSA_USER has lost its login mapping by running this query against the restored database: sp_change_users_login ‘report’
  9. Create a new SQL login named RSA_USER at the SQL Server level. Give it the same password as RSA_USER had on the original SQL Express installation. Set the default database to the newly restored SSO database.
  10. Run this query against the SSO database to re-map the RSA_USER account: sp_change_users_login ‘update_one’, ‘RSA_USER’, ‘RSA_USER’
  11. Recreate the RSA_DBA SQL user account and give it DBO over the SSO database.
  12. On the SSO Server:
    1. Navigate to the ssocli command- in my case, it was C:\Program Files\VMware\Infrastructure\SSOServer\Utils. Run the following command: ssocli configure-riat -a configure-db --database-host new_host_name
      1. Enter the SSO Master password that was used when SSO was initially set up.
    2. Go up a directory and open the ..\SSOServer\webapps\ims\WEB-INF\classes\jndi.properties file in Notepad.
      1. Modify com.rsa.db.hostname to the hostname of the new SQL server.
      2. Change com.rsa.instanceName to the SQL database name (instanceName seems like a misnomer here).
    3. Navigate to C:\Program Files\VMware\Infrastructure\SSOServer\webapps\lookupservice\WEB-INF\classes\config.properties
      1. Change the dburl= line to the information for the new server.
  13. Start SSO and hope for the best.
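The orphaned-user check and remap from steps 8 and 10 can also be run from PowerShell- a sketch with the server and database names as placeholders:

```powershell
# List database users orphaned by the restore (no matching server-level login).
Invoke-Sqlcmd -ServerInstance 'NEWSQL' -Database 'RSA' `
    -Query "EXEC sp_change_users_login 'report';"

# Re-map the restored RSA_USER database user to the new RSA_USER login.
Invoke-Sqlcmd -ServerInstance 'NEWSQL' -Database 'RSA' `
    -Query "EXEC sp_change_users_login 'update_one', 'RSA_USER', 'RSA_USER';"
```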

 

Cleanup

Some cleanup items at this point:

  1. Go into the registry and break the dependencies on SQL Express for vpxd.
  2. Restart all VMware services twice to ensure proper operation
  3. Make sure that the Web Client works correctly and that performance graphs load as expected
  4. Restart the server to make sure all comes back up.
  5. ???
  6. Profit!

Differences between VM Hardware Versions

Hey all!
I was asked by a client for a comparison chart, so I thought I’d put it up here as well:

Virtual Hardware Version 10
Products: ESXi 5.5, Fusion 6.x, Workstation 10.x, Player 6.x
Changes:
  • vSATA disk controller increases devices per channel
  • vGPU support for non-Intel VMs
  • Larger vDisk (62TB) maximum size

Virtual Hardware Version 9
Products: ESXi 5.1, Fusion 5.x, Workstation 9.x, Player 5.x
Changes:
  • Support for Intel VT-x/EPT and AMD-V/RVI
  • Ability to reclaim space from thinly provisioned disks
  • vGPU graphics for Intel GPUs
  • vCPU performance counters

Virtual Hardware Version 8
Products: ESXi 5.0, Fusion 4.x, Workstation 8.x, Player 4.x
Changes:
  • 32-way SMP
  • 1TB RAM in a single VM
  • Basic software 3D support
  • USB 3.0
  • UEFI BIOS

Virtual Hardware Version 7
Products: ESXi/ESX 4.x, Fusion 3.x, Fusion 2.x, Workstation 7.x, Workstation 6.5.x, Player 3.x, Server 2.x
Changes:
  • VMXNET3
  • 8-way SMP
  • 256GB RAM in a single VM
  • Enhanced vMotion Compatibility (EVC)
  • Hot plug support for devices

Note that VM Hardware version 10 VMs are controlled predominantly through the vSphere Web Client.

Can’t start View Composer service after migrating Composer database

Hey all!

This is another story fresh from the field- luckily one with a happy ending.

I was engaged on a fairly quick project to migrate the internal VMware and Horizon View databases from a default SQL Express instance to a new SQL server that the client built and configured. This is something that I’ve done many times in the past, and it has routinely gone to plan.

It’s important to remember that every environment is unique. This particular environment required an additional hour of time to get View Composer back up and churning out refit operations! As in most things, but particularly advanced IT work… prior results don’t guarantee a repeatable checklist!

The process we took for the database migration was as follows:

  1. Create a backup file on SQL Express.
  2. Back up the View Composer database in SQL Express.
  3. Rename the backup file and transfer to network storage/C$ drive of the new SQL server.
  4. Create a shell database in SQL on the destination machine.
  5. Restore the Composer backup OVER the new SQL database.
  6. Create a new SQL account and make it owner of Composer database.
  7. Repoint the Composer DSN on the machine running View Composer using SQL account credentials.

Seems pretty simple, and it has worked many times in the past. This time, as I said, things didn’t go to plan.

After the migration, going into Services and trying to restart the View Composer service gave an error. In the Composer logs, I saw that it was trying to connect with the old DSN name and blank credentials.

I checked the DSN and retested the connection- correct credentials, and the test was successful as expected. So where was Composer getting this info?

Turns out, I needed to change another thing in this environment.

EDIT SVIWEBCONFIG:
I needed to follow this KB to edit the SVIWebConfig, substituting sane values for our environment:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1022526

After performing the above, Composer started and all was well with the world.

Then I had to migrate the vCenter databases and ran into another weird DSN problem… which will be the subject of another Blog post!