SharePoint Experts, Information Architects, Expert Witness

SICG provides a broad array of business and technology consulting, from architecture to design to deployment of global systems, with a focus on surfacing data in the enterprise. We focus on the "How", not just the possible. Contact me directly: or call 704-873-8846 x704.

Wednesday, August 26, 2015

Creating a Failover Cluster in VMWare Workstation 11

How To Create A Failover Cluster using VMWare Workstation

So....I'd seen a number of posts on this, and while I had tried it in the past, I always seemed to have issues when trying to create a cluster using Workstation (ESX, etc. is a different story). The steps were always a bit vague on one item or another, and I'd hit issues like the second machine not starting, inability to validate the cluster, etc.

Here's the How To that works:

1) Power down both servers

Stupid suggestion, right? Of course you should start with the servers off, and anyway, most of the settings here would require a restart, so....

2) Backup the VMX Files

Navigate to the folder where the first server is and find the <servername>.vmx file. Right click on the VMX file and send it to a zip file in case you need to restore it! Repeat for the second server.

Skip this step at your own peril.

3) Add a second NIC to each server

Likely when you created the server(s), only a single NIC adapter was added, and the cluster needs two (one for the network, one for the cluster servers to communicate with each other). The first NIC should be set to Bridged.

From the VMWare Console, click VM in the menu then select Settings... to open up the Virtual Machine Settings. Click the Add... button to open the Add Hardware Wizard and select Network Adapter. Click Next >, select Host-only, leave other settings as is and click Finish.

When done, it should look something like this:

Note: If the second adapter isn't set to Host-only, it won't work.
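For reference, after adding the second adapter, the networking section of the VMX file should contain lines similar to these (a sketch only - generated MAC address lines are omitted here, and your device numbers may differ):

ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"
ethernet1.present = "TRUE"
ethernet1.connectionType = "hostonly"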

4) Create the Shared Drives

A cluster needs at minimum a Quorum disk (used to sync between the servers) and a data disk (you can add any number of disks as needed).

Create a new folder wherever you keep your VM's stored and call it ClusterDrives (or whatever name you want). On the 'first' server (whichever you pick as the starting point), add the Quorum disk (you'll repeat this process for each disk you add):

a) Open the VM Settings as above (when you added NIC's). 

b) Click the Add... button to open the Add Hardware Wizard

c) Click Hard Disk to select it and click Next > to display the Select a Disk Type page

d) Leave the type selected as SCSI, then select Independent and Persistent as shown (we don't want them to be temperamental now, do we?):

e) Click Next > to display the Select a Disk page:

f) You can choose the type of disk here - but in most cases, leave this as Create a new virtual disk (you'll do this differently on server 2), then click Next >

g) On the Specify Disk Capacity page, select the disk size - for the Quorum disk (ONLY!), 512 MB is fine - click to select Allocate all disk space now and click to select Store virtual disk as a single file as shown:

When adding additional data disks for things like SQL, you will obviously use a more realistic size - like 100GB. However, be aware that I have had trouble trying to use the Multiple Files option at any size (to be expected - it would usually be dedicated SAN storage if you are seriously setting up a production environment).

h) Click Next > to open the Specify Disk File page:

This is where you control where the disk will be created - click the Browse... button and browse to the folder you created. 

Type in the name of the disk, for example QuorumDisk.vmdk:


* Someone apparently thought that the name extension to the right was a 'suggestion'?

i) Click the Open button - when you return to the Specify Disk File page, the path should be shown instead of just the file name. 

j) Click the Finish button to create the disk.

Now repeat the above to create a Data disk. When you do, the disk should be at least 1GB, though you can make it as large as you have capacity for. Be sure to name the disk file correctly (i.e. DataDisk.vmdk). You can add additional disks as well if you're setting this up for SQL Server (i.e. Data Disk, Log Disk, etc.).

When you are done, the drives created will appear in the Virtual Machine Settings page:

Tip: Make SURE they show (Persistent) - if not, delete and create again.

Next, it is necessary to set the SCSI Controller for the disk(s) - fortunately, this is a LOT easier in Workstation 11 - click on one of the new hard disks created then in the properties panel on the right, click the Advanced... button:

On the Hard Disk Advanced Settings page, set the SCSI of the disk to use Controller 1 instead of 0 and pick which Disk number to use - in this case, SCSI 1:0 (Controller 1, disk 0):

Note: it's cool they made this available - in the past, it all had to be done by editing the vmx file (see below) - this lets you designate the disks to a different controller, no fuss, no muss.

Repeat this for all the disks added - be sure to keep track of which disk is on which disk channel - they MUST match on the second server when you add the disks!

5) Add the Disks to the Second Server

Using the VM Settings for the second server, repeat the process to add disks - this time however, you will NOT create disks, you will simply select an existing disk:

After you have added the disks, select each and use the Advanced settings to change the SCSI Controller and disk numbers. Be SURE to match the first server.

6) Modifying the VMX Files

Next, it is necessary to update the VM server configuration file for each server. Navigate to the folder where the first server is and find the <servername>.vmx file. Create a second backup of the vmx file before you edit (you are on your own if you don't).

Right click and open this file with notepad.

Search for the disk settings in this file by searching for SCSI1 - this should bring you to the section where the cluster drives are defined (you can search for the file path or disk file name too). This should look similar to this:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "H:\ClusterDrives\QuorumDisk.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "H:\ClusterDrives\DataDisk.vmdk"
scsi1:1.mode = "independent-persistent"

The lines above are the only ones you need to check (you'll find a few others) - mainly to make sure that the 'present = "TRUE"' is, uh, present.

Yours MAY look a little different, for example:

scsi1.virtualDev = "lsisas1068"

Might be:

scsi1.virtualDev = "lsilogic"

Just below these lines, add the following:

disk.locking = "false"
diskLib.dataCacheMaxSize = "0"

Without these last two lines, starting the VM will cause it to lock the drives and block the other server from accessing them. See the troubleshooting at the end.

Save the changes and exit Notepad. 

Now, rinse & repeat: make the exact same changes to the second server's vmx file. Copy and paste if you can (I do not know why, but this made a difference - twice!).

7) Format the Shared Disks on the First Server

Power on the first server where the disks were added. Log in using an Administrator account, then using Administrative Tools > Computer Management > Disk Management, bring the disks "Online" and format them - assign the drive letters accordingly (i.e. Quorum disk = Q:).
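If you prefer the command line, the same steps can be sketched with the Storage module PowerShell cmdlets (the disk number, drive letter and label here are assumptions - check the output of Get-Disk on your own server first):

# List disks that are still offline
Get-Disk | Where-Object IsOffline -Eq $true

# Bring disk 1 online, then initialize and format it as the Quorum drive
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle MBR
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter Q
Format-Volume -DriveLetter Q -FileSystem NTFS -NewFileSystemLabel "Quorum"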

* I won't explain the Disk Management console here, so I am assuming you know it.

8) Add the Shared Disks on the Second Server

Power on the second server. Log in using an Administrator account, then using Administrative Tools > Computer Management > Disk Management, bring the disks "Online". When you do this, they should pop right up with the proper names you assigned on the first server; HOWEVER, the drive letters will be different. Right click on each drive and change the drive letter to match the first server.

And at this point you are done - you now have a shared network with shared disks ready to install a Failover Cluster!
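From here, validating and creating the cluster itself is only a few PowerShell lines (the node names, cluster name and IP address below are hypothetical - substitute your own):

# Install the feature on each node, then validate and create the cluster
Install-WindowsFeature Failover-Clustering -IncludeManagementTools
Test-Cluster -Node "NODE1","NODE2"
New-Cluster -Name "CLUSTER1" -Node "NODE1","NODE2" -StaticAddress "192.168.1.50"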


Second VM will not start?

1) Check the vmx file and make sure the lines:

disk.locking = "false"
diskLib.dataCacheMaxSize = "0"

were added.

2) If they are set, try changing the line:

scsi1.virtualDev = "lsilogic"

to:

scsi1.virtualDev = "lsisas1068"

Second VM can't see the drives?
Shutdown, restore the backup vmx file for the second server and repeat the process.

How to back out?

Shutdown both servers (force power off if necessary), restore the backup vmx file for each server and power on. This will NOT delete the virtual drives.

Hey folks - we do this for a living - leave comments, subscribe and never hesitate to send a link around!

Tuesday, August 25, 2015

SharePoint 2013 Workflow Visio Visualization Error

After setting up SharePoint 2013, you may come across an error when trying to access a Workflow Status page that renders its visualization in Visio. The failure will look something like this:

Love that kind of error - "The server failed to process the request" really tells it all, eh?

So, there's a few things that can cause this:

  • First verify that the Visio Service (Services on Server) is started
  • Next verify the Visio Service Application and Proxy was created (Manage Service Applications)
  • Next verify that the Visio proxy is connected to the Application (Manage Web Applications)
  • Last (and most common), check the ULS logs - very often this problem is simply a logon issue:

To correct the latter, simply grant the account access to the Content Database in question.
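For example, from the SharePoint 2013 Management Shell (the URL and account name here are placeholders - use the web application and service account from your own ULS entry):

$wa = Get-SPWebApplication "http://yourwebapp"
$wa.GrantAccessToProcessIdentity("DOMAIN\svc_visio")

This grants the account access to the content databases for that web application, which clears the logon errors in the ULS log.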

SharePoint 2013 Workflow Fails after applying post SP1 & CU's

Like many have found, there is a bug in the patches applied in SP1 and, I believe, one of the subsequent Cumulative Updates - the issue appears when a workflow tries to start. Right out of the gate you get this nasty error message:

For search purposes:

Method 'StartWorkflowOnListItem' in type 'Microsoft.SharePoint.WorkflowServices.FabricWorkflowInstanceProvider' from assembly 'Microsoft.SharePoint.WorkflowServices, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c' does not have an implementation.

You will also see Event ID 1000/1025 errors complaining about the Service Bus in the Event Application Log.

Fortunately the fix is KB2880963 - you can download it from here:

You can apply this patch without a reboot or a run of the Configuration Wizard (i.e. PSConfig).