So What's This All About

The purpose of this site

I started off my career in IT 25 years ago as a COBOL Programmer in South Africa and have progressed (or some may say regressed) to consulting on virtualization technologies. I created this site to share my experiences with virtualization and cloud computing, as well as the latest virtualization news, tips, tricks and tools from other experts in the field.



Online Training

XenDesktop 7 Course

Citrix Education is offering a self-paced introductory course on XenDesktop 7. If you’re new to XenDesktop 7 and want to learn more about it, this is a great opportunity to get some training for free.

Click here for the course details



Articles

Receiver for Web FAQ

An article by Feng Huang from Citrix Blogs

A number of questions are frequently asked about Receiver for Web. This article attempts to answer them. The order of the questions is arbitrary; the answers apply to StoreFront 2.5.

How can I show the Apps tab by default for all users?

This is configurable via web.config under the Receiver for Web site, normally C:\inetpub\wwwroot\Citrix\StoreWeb. Open this file in your favourite text editor and locate the following segment:

    <uiViews showDesktopsView="true" showAppsView="true"
             defaultView="desktops" />

Change the value of defaultView to be apps:

    <uiViews showDesktopsView="true" showAppsView="true"
             defaultView="apps" />

Is there a way to add a domain drop down list to the login page as per Web Interface?

First you configure Trusted Domains via the StoreFront Administration Console.

  1. Select the Authentication node in the left pane.
  2. Then select “Configure Trusted Domains” in the right pane.
  3. Click OK.
  4. Open web.config under the Authentication Service, normally C:\inetpub\wwwroot\Citrix\Authentication, in your favourite text editor and locate the following element:
        <explicitBL ...      hideDomainField="true" ...>
          ...
        </explicitBL>

    Change the value of hideDomainField to be false (the resulting element is shown after this list).

  5. Load the Receiver for Web in a browser. The domain drop down list appears in the Login page.
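
For reference, after step 4 the element should look like this (the attributes elided in the original stay as they were):

        <explicitBL ...      hideDomainField="false" ...>
          ...
        </explicitBL>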

How can I subscribe all the applications for my users?

If you want to pre-subscribe applications for your users but allow them to selectively remove some applications, you can add the keyword “Auto” to your published applications.
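The keyword goes into the published application's description as KEYWORDS:Auto. As a minimal sketch using the XenDesktop PowerShell SDK (the application name is hypothetical):

Add-PSSnapin Citrix.Broker.Admin.V2
Set-BrokerApplication -Name "Notepad" -Description "KEYWORDS:Auto"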

If you don’t want your users to be able to remove the presubscribed applications, please see Provisioning All Applications to All Users in StoreFront for how to do it.

Is there a way to have a different background image for the Home screen than the one on the Logon screen?

Yes, please see Example 1 of Customizing Receiver for Web 2.5 for details.

How do I hide the Activate menu under the user name section?

This is configurable via web.config under the Receiver for Web site, normally C:\inetpub\wwwroot\Citrix\StoreWeb. Open this file in your favourite text editor and locate the following segment:

Read More



 

WMI Objects Used By Citrix Director for troubleshooting Sessions

An article by Vindhya Gajanan from Citrix Blogs

Overview

Among the important features of Citrix Director are the Help Desk and User Details pages. These pages provide the information pertaining to a particular user session and help troubleshoot issues with the session.

Director employs Windows Management Instrumentation (WMI) on Desktop OS and Server OS VDI machines to get some of the session- and user-specific details. Director fetches this information from preinstalled Windows WMI objects and from Citrix-specific WMI providers installed during VDA installation.

It is important to mention here that there is a substantial difference between Director 7.x and earlier versions of Director.

  • Director 2.1 and earlier versions use WinRM calls to fetch data from WMI objects, whereas Director 7 and later versions do not use WinRM.
  • Director 7 has its own plugin deployed on Desktop OS and Server OS machines to get some of the performance metrics and session-specific details, thereby removing the need to configure WinRM.

 

This blog mainly lists the WMI objects used by Director and gives basic examples of how to query them from PowerShell.

What This Blog Covers:

  1. List of all WMI objects used by Director 7 and later, with examples of how to query WMI from PowerShell.
  2. List of all WMI objects used by Director 2.1 (and earlier) through WinRM, with examples of how to query WMI from PowerShell.
  3. Configuring WinRM.
  4. Issues that affect rendering WMI data in Director.

 

WMI Objects Used In Citrix Director 7.0 And Later:

Here is the list of all WMI objects used by Director 7 and later versions.

Objects are categorized according to the panels they populate on the Director User Details page.

The Director Fields column below is the field shown on the Director User Details page in the corresponding panel.

Example: the RAM of the desktop running the session is shown on the Director User Details page as Memory.

How to get the same data in PowerShell:

To get WMI data, run the command below in PowerShell on the Desktop OS or Server OS machine:

Get-WmiObject -Class "classname" -Namespace "namespace"

 

Machine Details Panel that provides machine-specific details:

Director Fields    Namespace     Class
Memory             Root\cimv2    Win32_ComputerSystem
vCpu               Root\cimv2    Win32_ComputerSystem
Hard Disk          Root\cimv2    Win32_LogicalDisk
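
For example, the data behind the Memory and vCpu fields can be pulled from Win32_ComputerSystem like this (the exact property names Director reads are my assumption):

Get-WmiObject -Class Win32_ComputerSystem -Namespace Root\cimv2 |
    Select-Object TotalPhysicalMemory, NumberOfLogicalProcessors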

Panel that provides the user- and computer-specific policies applied:

Read More



 

NetScaler authentication vserver mod

An article by Morten Kallesoe from Citrix Blogs

I recently got a request from a partner who wanted to use the NetScaler's AAA functionality, with a little twist.

Overall objective: modify the access control cookies to create a persistent session across browser close/open.

 

Use case description
The user enters www.website.com.
The NetScaler takes care of the authentication against an AAA vserver.
The user is now logged in.
The user should be able to close the browser, wait a few minutes, open the browser to www.website.com and still be logged in.

This is not the default behavior of the NetScaler.

Default NetScaler behavior

When the user enters www.website.com, the user is redirected to the authentication site, aaa.website.com. Here the user enters a username and password, and the NetScaler performs authentication based on its configuration. When the user is authenticated, two cookies are set and the user is redirected back to www.website.com.

These two cookies are important, since they verify that the user is authenticated.

Set-Cookie: NSC_TMAA=02a65617419c18edd4b6dbb6c58aeb3e;HttpOnly;Path=/;Domain=website.com
Set-Cookie: NSC_TMAS=f7fa0d2e10e3a9c4aeb5950f4837a823;Secure;HttpOnly;Path=/;Domain=website.com

The web developer within us (don't be shy!) will notice that there is no "Expires=" (or "Max-Age=") attribute in the Set-Cookie string. That means the cookie is a session cookie, and session cookies are deleted when the browser closes, which is exactly what we don't want.

So the task is: make the NSC_TMAA and NSC_TMAS cookies persistent.

The MOD

Intercept the first redirect from aaa.website.com to www.website.com, re-set the cookies, and then redirect the user back to www.website.com. This is possible because the client is requesting www.website.com with the NSC_TMAA and NSC_TMAS cookies already attached.

Sounds pretty simple! And it was, since I had my lovely netscaler to help me out.
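
The full configuration is behind the Read More link, but the general shape of such a mod is a response-side rewrite that re-issues the cookie with an expiry attribute. Here is a minimal sketch in NetScaler CLI; the action/policy names, the vserver name, the fixed Expires date and the binding are all illustrative assumptions, not the author's actual config:

enable ns feature REWRITE
add rewrite action rw_act_persist_tmaa insert_http_header Set-Cookie '"NSC_TMAA=" + HTTP.REQ.COOKIE.VALUE("NSC_TMAA") + "; Expires=Fri, 01-Jan-2100 00:00:00 GMT; Path=/; Domain=website.com"'
add rewrite policy rw_pol_persist_tmaa 'HTTP.REQ.COOKIE.VALUE("NSC_TMAA").LENGTH.GT(0)' rw_act_persist_tmaa
bind lb vserver vs_website -policyName rw_pol_persist_tmaa -priority 100 -type RESPONSE

The same treatment would be needed for NSC_TMAS, and a real deployment would choose the expiry deliberately rather than hard-coding a far-future date.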

Read More



 

Excluding virtual disks in Hyper-V Replica

An article by Aashish Ramdas [MSFT] from Windows Virtualization Team Blog

Since its introduction in Windows Server 2012, Hyper-V Replica has provided a way for users to exclude specific virtual disks from being replicated. This option is rarely exercised but can have significant benefits when used correctly. This blog post covers the disk exclusion scenarios and the impact this has on the various operations done during the lifecycle of VM replication. This blog post has been co-authored by Priyank Gaharwar of the Hyper-V Replica test team.

Why exclude disks?

Excluding disks from replication is done because:

  1. The data churned on the excluded disk is not important or doesn’t need to be replicated    (and)
  2. Storage and network resources can be saved by not replicating this churn

Point #1 is worth elaborating on a little. What data isn't “important”? The lens used to judge the importance of replicated data is its usefulness at the time of Failover. Data that is not replicated should also not be needed at the time of failover. Lack of this data would then also not impact the Recovery Point Objective (RPO) in any material way.

There are some specific examples of data churn that can be easily identified and are great candidates for exclusion – for example, page file writes. Depending on the workload and the storage subsystem, the page file can register a significant amount of churn. However, replicating this data from the primary site to the replica site would be resource intensive and yet completely worthless. Thus the replication of a VM with a single virtual disk holding both the OS and the page file can be optimized by:

  1. Splitting the single virtual disk into two virtual disks – one with the OS, and one with the page file (a PowerShell sketch follows this list)
  2. Excluding the page file disk from replication
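
Step 1 is an ordinary Hyper-V disk operation. As a minimal sketch (the VM name, path, size and controller type are illustrative assumptions), a dedicated page file disk could be created and attached like this:

# Create a new dynamic VHDX for the page file and attach it to the VM
New-VHD -Path 'D:\VMs\SQL-PageFile.vhdx' -SizeBytes 20GB -Dynamic
Add-VMHardDiskDrive -VMName 'SQLSERVER' -ControllerType SCSI -Path 'D:\VMs\SQL-PageFile.vhdx'

Inside the guest, the page file location is then moved to the new disk, as shown in Figure 1 below.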

How to exclude disks

Application impact - isolating the churn to a separate disk

The first step in using this feature is to isolate the superfluous churn onto a separate virtual disk, similar to what is described above for page files. This is a change to the virtual machine and to the guest. Depending on how your VM is configured and what kind of disk you are adding (IDE, SCSI), you may have to power off your VM before any changes can be made.

At the end, an additional disk should surface in the guest. Appropriate configuration changes should then be made in the application to point the location of the temporary files to the newly added disk.

Figure 1: Changing the location of the System Page File to another disk/volume

Excluding disks in the Hyper-V Replica UI

Right-click on a VM and select “Enable Replication…”. This will bring up the wizard that walks you through the various inputs required to enable replication on the VM. The screen titled “Choose Replication VHDs” is where you deselect the virtual disks that you do not want to replicate. By default, all virtual disks will be selected for replication.

Figure 2: Excluding the page file virtual disk from a virtual machine

Excluding disks using PowerShell

The Enable-VMReplication cmdlet provides two optional parameters: -ExcludedVhd and -ExcludedVhdPath. These parameters should be used to exclude the virtual disks at the time of enabling replication.

PS C:\Windows\system32> Enable-VMReplication -VMName SQLSERVER -ReplicaServerName repserv01.contoso.com -AuthenticationType Kerberos -ReplicaServerPort 80 -ExcludedVhdPath 'D:\Primary-Site\Hyper-V\Virtual Hard Disks\SQL-PageFile.vhdx'

After running this command, you will be able to see the excluded disks under VM Settings > Replication > Replication VHDs.
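
The exclusions are also visible from PowerShell through the ExcludedDisks property of Get-VMReplication (the same property the script later in this post relies on):

Get-VMReplication -VMName SQLSERVER | Select-Object -ExpandProperty ExcludedDisks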

Figure 3: List of disks included for and excluded from replication

Impact of disk exclusion

Enable replication: A placeholder disk (for use during initial replication) is not created on the Replica VM. The excluded disk doesn't exist on the replica in any form.

Initial replication: The data from the excluded disks is not transferred to the replica site.

Delta replication: The churn on any of the excluded disks is not transferred to the replica site.

Failover: The failover is initiated without the disk that has been excluded. Applications that refer to the disk/volume in the guest will have incorrect configurations. For page files specifically, if the page file disk is not attached to the VM before the VM boots, the page file location is automatically shifted to the OS disk.

Resynchronization: The excluded disk is not part of the resynchronization process.

Ensuring a successful failover

Most applications have configurable settings that make use of file system paths. In order to run correctly, the application expects these paths to be present. The key to a successful failover and an error-free application startup is to ensure that the configured paths are present where they should be. In the case of file system paths associated with the excluded disk, this means updating the Replica VM by adding a disk - along with any subfolders that need to be present for the application to work correctly.

The prerequisites for doing this correctly are:

  • The disk should be added to the Replica VM before the VM is started. This can be done at any time after initial replication completes, but is preferably done immediately after the VM has failed over.
  • The disk should be added to the Replica VM with the exact controller type, controller number, and controller location as the disk has on the primary.

There are two ways of making a virtual disk available for use at the time of failover:

  1. Copy the excluded disk manually (once) from the primary site to the replica site
  2. Create a new disk, and format it appropriately (with any folders if required)

When possible, option #2 is preferred over option #1 because of the resources saved by not having to copy the disk. The following PowerShell script can be used to implement option #2, focusing on meeting the prerequisites so that the Replica VM is exactly the same as the primary VM from a virtual disk perspective:

param (
    [string]$VMNAME,
    [string]$PRIMARYSERVER
)
 
## Get VHD details from primary, replica
$excludedDisks = Get-VMReplication -VMName $VMNAME -ComputerName $PRIMARYSERVER | select ExcludedDisks
$includedDisks = Get-VMReplication -VMName $VMNAME | select ReplicatedDisks
if( $excludedDisks -eq $null ) {
    exit
}
 
#Get location of first replica VM disk
$replicaPath = $includedDisks.ReplicatedDisks[0].Path | Split-Path -Parent
 
## Create and attach each excluded disk
foreach( $exDisk in $excludedDisks.ExcludedDisks )
{
    #Get the actual disk object
    $pDisk = Get-VHD -Path $exDisk.Path -ComputerName $PRIMARYSERVER
    $pDisk
    
    #Create a new VHD on the Replica
    $diskpath = $replicaPath + "\" + ($pDisk.Path | Split-Path -Leaf)
    $newvhd = New-VHD -Path $diskpath `
                      -SizeBytes $pDisk.Size `
                      -Dynamic `
                      -LogicalSectorSizeBytes $pDisk.LogicalSectorSize `
                      -PhysicalSectorSizeBytes $pDisk.PhysicalSectorSize `
                      -BlockSizeBytes $pDisk.BlockSize `
                      -Verbose
    if($newvhd -eq $null) 
    {
        Write-Host "It is assumed that the VHD [" ($pDisk.Path | Split-Path -Leaf) "] already exists and has been added to the Replica VM [" $VMNAME "]"
        continue;
    }
 
    #Mount, initialize, and format the new VHD
    $newvhd | Mount-VHD -PassThru -Verbose `
            | Initialize-Disk -PassThru -Verbose `
            | New-Partition -AssignDriveLetter -UseMaximumSize -Verbose `
            | Format-Volume -FileSystem NTFS -Confirm:$false -Force -Verbose

    #Unmount the disk
    $newvhd | Dismount-VHD -PassThru -Verbose
 
    #Attach disk to Replica VM
    Add-VMHardDiskDrive -VMName $VMNAME `
                        -ControllerType $exDisk.ControllerType `
                        -ControllerNumber $exDisk.ControllerNumber `
                        -ControllerLocation $exDisk.ControllerLocation `
                        -Path $newvhd.Path `
                        -Verbose
}

The script can also be customized for use with Azure Hyper-V Recovery Manager, but we’ll save that for another post!

Capacity Planner and disk exclusion

The Capacity Planner for Hyper-V Replica allows you to forecast your resource needs. It allows you to be more precise about the replication inputs that impact the resource consumption – such as the disks that will be replicated and the disks that will not be replicated.

Figure 4: Disks excluded for capacity planning

Key Takeaways

  1. Excluding virtual disks from replication can save on storage, IOPS, and network resources used during replication
  2. At the time of failover, ensure that the excluded virtual disk is attached to the Replica VM
  3. In most cases, the excluded virtual disk can be recreated on the Replica side using the PowerShell script provided
 

Optimizing HDX 3D Pro

An article by Adarsh Kesari from Citrix Blogs

Since the announcement of HDX 3D Pro and NVIDIA GRID vGPU at Synergy 2013, I have done many presentations, demonstrations and builds of this technology for customers and partners. The interest is massive, as it clearly addresses a use case that desktop virtualisation previously couldn't manage efficiently. In this blog, I want to share five lessons learned from deploying HDX 3D Pro in customer environments, for both XenApp Hosted Shared Desktops and XenDesktop VDI.

 

Configure XenDesktop policies – minor changes can be the difference between perfect and pathetic

When you configure a machine with HDX 3D Pro and look through the policies available in Citrix Studio, you’ll find that there isn’t a huge number of specific HDX 3D Pro policies. The most useful policies are more generic, and can be applied to any workload in Studio. The five most important XenDesktop policies that I tweak for 3D applications are:

  • Moving Image Compression
  • Visual Quality
  • Lossy Compression Level
  • Enable Lossless
  • Progressive Compression Level

I typically use these five policies to tweak the performance of an application over any network access scenario (LAN, WAN, remote access). Policies such as lossless compression and moving image compression will assist in maximising the performance of 3D applications over the WAN. Each application and workload behaves differently, so it is important to test the user experience each time you change one of these settings.
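
Where these tweaks need to be repeatable across environments, the same settings can be driven from PowerShell through the Citrix Group Policy provider. This is only a rough sketch: the drive name, controller address and policy name are placeholders, and the setting paths should be discovered with Get-ChildItem rather than trusted from here:

Import-Module Citrix.GroupPolicy.Commands
# Mount the site's policy store (controller address is a placeholder)
New-PSDrive -Name Site -PSProvider CitrixGroupPolicy -Root \ -Controller "ddc01.example.com"
# Create a user policy for WAN users (the name is an assumption)
New-Item Site:\User\WAN-3DPro
# Browse the available ICA settings instead of guessing their paths
Get-ChildItem Site:\User\WAN-3DPro\Settings\ICA -Recurse | Select-Object -First 20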

 

To set the most appropriate policy level, it is important to understand the requirements of the users. If they are WAN users, what is the available bandwidth? How many users are in a site? What is the latency between the site and the virtual desktop? Is there any optimisation of the user experience done by a CloudBridge? Use these criteria to validate your policy settings. In my setups, I reduce image quality and increase compression on moving images when deploying apps for remote access. In this scenario, I have no control over the client connection bandwidth, so I provide the minimal acceptable performance. Tailor how your LAN and WAN policies look, and filter them by site and user for maximum effect.

Read More



 

Demystifying KMS and Provisioning Services

An article by Mitra Ahi from Citrix Blogs

Beginning in Provisioning Services 5.6 SP1, Citrix introduced a new feature designed to facilitate Key Management Services (KMS) license activation of the Operating System and of Microsoft Office installations for images streamed in Standard (Read-only) mode. In this blog post, we’ll attempt to remove some of the mystery surrounding the implementation of the KMS license activation feature in Provisioning Services.

The following points will be discussed in more detail:

A. Why is the PVS KMS feature needed?

B. Prerequisites and Planning

C. Important tips regarding KMS and PVS images

D. Why would I run the /Ato or the /Act command?

 

A. Why is the PVS KMS feature needed?

The Microsoft KMS host machine identifies KMS clients with a unique Client Machine ID (CMID). When we deploy a single image to be used by multiple machines, the image has to be prepared so that each machine presents itself to the KMS host Server as a separate entity, otherwise, they won’t be validated and activated by the KMS host.

The image preparation is the sole responsibility of the Citrix PVS KMS feature; the license activation process itself remains a function of Microsoft. The preparation consists of a very specific set of steps, performed on both the PVS Targets and the Server, which must be carried out in the proper order for the activation to succeed.

Note: In some cases, administrators report duplicate CMID entries for streamed Targets but the OS appears to be activated. This is a misleading situation. The fact is that successful KMS activation is always accompanied by unique CMIDs. Any machines with a duplicate CMID will eventually lose their activation.
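
You can check this yourself on each streamed target: the CMID is exposed as the ClientMachineID property of the standard SoftwareLicensingService WMI class. A quick check (not an official Citrix procedure):

Get-WmiObject -Class SoftwareLicensingService | Select-Object ClientMachineID

If two targets report the same value, the image preparation did not complete correctly and activation will eventually be lost.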

B. Prerequisites and Planning

  • The Stream/SOAP service account is used in the image preparation process to activate KMS, so it's important to make sure that proper permissions are configured prior to beginning the process:
    • The Stream/SOAP service account has to be a domain user that is a member of the local administrators group on the PVS Servers in that farm.
    • For KMS-based images, Network Service cannot be used as the Stream/SOAP service account.
  • Before running the Rearm command, either for the first time or as a troubleshooting step, verify the rearm count on the OS by running slmgr /dlv from a command prompt. It's highly recommended to have more than one rearm (not exactly one) left on the hard disk in case further rearms are required after the image is built. If there is only a single rearm left on the image, as a precaution make a backup copy of the image (pvp, vhd and avhds) to avoid running out of rearms if KMS activation fails for any reason. Zipping the image file will reduce its size significantly, so take advantage of that when space is a concern.

C. Important tips regarding KMS and PVS images

As I mentioned earlier, following the specific order of steps is crucial for successful KMS activation. The Knowledge Base article: http://support.citrix.com/article/CTX128276 provides guidance and the specific steps, including their order of operation, necessary to successfully activate provisioned KMS images and to maintain the activation status of the images during updates.

Read More



 

XenApp SSPI handshake failed

An article by Christopher Laws from Riverlite

During a routine check of the event logs on an SQL server that was being used for a XenApp 6.5 database, the following errors were noticed:

Error: Event 17806, MSSQL$CITRIX

SSPI handshake failed with error code 0x8009030c, state 14 while establishing a connection with integrated security; the connection has been closed. Reason: AcceptSecurityContext failed. The Windows error code indicates the cause of the failure. The login attempt failed.


Information: Event 18452, MSSQL$CITRIX

Login failed. The login is from an untrusted domain and cannot be used with Windows authentication [Client: x.x.x.x]


 

Checking the XenApp servers in question, nothing was being written to the event logs to indicate there was a problem: the IMA service was started, and users were able to launch both published desktops and applications without any problems. Stopping and starting the MFCOM and IMA services was successful, and the MF20.dsn file looked fine.

Back to the SQL server and checking the activity monitor it became clear what the problem was. There was no communication being shown between the XenApp server and the SQL server with the service account that had been created for the XenApp server database.

To remedy this, on the XenApp servers that were having trouble communicating, the IMA service was stopped, the following command was run, and then the IMA service was restarted.

dsmaint config /user:serviceaccount /pwd:password /dsn:"C:\Program Files (x86)\Citrix\Independent Management Architecture\mf20.dsn"
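
For completeness, the whole remedy can be run as a short sequence; the service short name IMAService is my assumption for the Independent Management Architecture service:

net stop IMAService
dsmaint config /user:serviceaccount /pwd:password /dsn:"C:\Program Files (x86)\Citrix\Independent Management Architecture\mf20.dsn"
net start IMAService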

After this, the XenApp server communication could be seen in Activity Monitor on the SQL server, and the event logs stopped showing any errors.

 

Building successful clouds

An article by Priyadarshan Ketkar from Citrix Blogs

“We do these things not because they’re easy, but because they are hard…” – John F. Kennedy

I use this quote as inspiration before every cloud customer engagement. But why such a lofty premise? It’s because building large complex structures, like any enterprise or service provider cloud, requires a disciplined process for enduring success. It is this disciplined and methodical approach that determines the long-term success or failure of the cloud. I have seen this myself, and decided to understand how to build successful clouds from a holistic perspective.

Many cloud efforts start with their focus on the technology. Key criteria around capacity, scalability, availability, security and other technical gadgetry become paramount in the project. However, cloud projects that focus only on the technical components are doomed before they begin. What about the business goals of the organization? What about the operational needs for managing and maintaining the structure? What about integration with existing business and technology processes and systems? Citrix Worldwide Services' architecture lifecycle process (shown below) is designed to help you architect, design and implement your unique cloud solution for optimal success.

So, what does it take to build successful clouds? Building successful clouds requires an understanding of the Business, Technology and Operational success criteria. The Business layer addresses the goals, objectives and use cases of the cloud. The Technology layer focuses on transforming physical infrastructure into resource capacity. The Operational layer looks at the support, measurement, management and maintenance issues of the cloud. In the SYN230 "Building successful clouds based on Citrix Consulting methodology" session at Citrix Synergy, I will share cloud-building experiences and best practices learned from the field, and explain the cloud success methodology and how it was used in Citrix customer engagements. Below, you can see how the three tracks overlay and interact with each other during the life of the project.

Read More



 

Adding XenMobile users to a XenDesktop Environment

An article by Christopher L Campbell from Citrix Blogs

Citrix XenDesktop has long enabled users to access applications and desktops anywhere, anytime. However, with the growth of smartphones, not all applications and websites give the best user experience, and for that Citrix has XenMobile. If a company has 30,000 employees, 5,000 of these may be remote users utilizing XenDesktop to access applications and desktops. However, with today's smartphone and tablet technology, all 30,000 users are potential mobile users.

  • But what is the impact of adding XenMobile to an existing XenDesktop environment?
  • Where is the point of intersection between the XenDesktop and XenMobile users?
  • At the point of intersection, when does the XenDesktop user experience begin to be affected by the XenMobile users?

The Citrix Solution Lab had just finished large-scale user testing with XenDesktop 7.0 and decided to use the existing configuration to poke at these questions. All of the users were converted to remote user status, creating 5,000 users accessing the environment through the NetScaler. We then leveraged a tool written by the XenMobile team to simulate mobile users accessing a XenMobile AppController. It didn't take rocket science to realize the intersection is at the NetScaler. As a baseline, a 5,000-user XenDesktop test was run to get an ICA RTT (ICA Round Trip Time); then Mobility users were added until the ICA times changed and CPU utilization showed the NetScaler being overloaded. The test was re-run with 4,000 and 3,000 XenDesktop users to see if a trend could be found. The short version: in our test environment, with the hardware and configuration we used, we saw about a 3:1 average between XenMobile users and XenDesktop users; for each 1,000 XenDesktop users we stepped down, an additional 3,000 XenMobile users could be supported on the NetScaler.

Read More



 

The New XenApp – Reducing IOPS to 1

An article by Daniel Feller from Citrix Blogs

As we all know, IOPS are the bane of any application and desktop virtualization project. If you don't have enough, users will suffer. If you have too many, you probably spent too much money and your business will suffer. So we are always trying to find ways to estimate IOPS requirements more accurately, as well as ways to reduce the overall number.

About 5 months ago, I blogged about IOPS for Application Delivery in XenDesktop 7.0. In that blog, I explained that for the XenApp workload, Machine Creation Services, when used in conjunction with Windows Server 2012 Hyper-V, required significantly fewer IOPS than Provisioning Services. With the release of the 7.5 editions of XenApp and XenDesktop, I wanted to see what the latest numbers were on Windows Server 2012 R2 Hyper-V using the same LoginVSI medium workload. In addition, after reading Miguel Contreras's blog on "Turbo Charging IOPS", I wanted to see what impact the Provisioning Services "RAM Cache with Overflow to Disk" option would have on the results.

If you aren’t familiar with this caching option, it is fairly new and recently improved and I suggest you read Miguel’s blog “Turbo Charging IOPS“, to learn more. But essentially, we use RAM for the PVS write cache that will move portions to disk as RAM gets consumed, thus overcoming stability issues you would have with just a RAM Cache only option as the cache filled up. For this test, we allocated 2GB per XenApp VM. We only have 7 XenApp VMs on the host, requiring 14GB total.

The results are impressive (at least to me they are).

Read More