Rig for Silent Running, and some industry stuff

Hey all!

I apologize for the lack of posts lately – I’m in hard study mode for two certifications concurrently:

The MCSA (specifically the 70-412 exam) and the AppSense Certified Professional exam.

We’ll certainly get to end user profile management and VDI layering at some point soon in this blog!

Speaking of VDI, Teradici released new firmware for zero-client endpoints on both the Tera 1 and Tera 2 chipsets. This is a pretty important release if you’re running Horizon View 6 or playing with Amazon WorkSpaces. Test it in your environment before rolling it out, though – I’ve heard of performance problems if you haven’t upgraded to Horizon View 6 yet, and some connection problems if you load balance your connection servers via NLB or a hardware load balancer.

Link to the firmware: https://techsupport.teradici.com/ics/support/DLRedirect.asp?fileNum=1504&deptID=15164

One of my colleagues (and a pretty swell guy overall) has had some fantastic results with EMC’s all-flash XtremIO array and its compression/deduplication – a linked-clone pool of 1,050 desktops was using only 690 GB of storage, at a compression/dedup ratio of 5.1:1. Pretty nifty.
Of course, that’s just the initial pool creation. I’m curious to see what the storage utilization looks like after it goes into production.
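If you want to sanity-check those numbers, here’s a quick back-of-envelope calculation. The desktop count, consumed capacity, and reduction ratio are the figures quoted above; the per-desktop logical size is just derived from them, not something the array reported.

```python
# Back-of-envelope check on the XtremIO pool figures quoted above.
desktops = 1050        # linked clones in the pool
physical_gb = 690      # capacity actually consumed on the array
reduction_ratio = 5.1  # combined compression/dedup ratio reported

logical_gb = physical_gb * reduction_ratio  # data written before reduction
per_desktop_gb = logical_gb / desktops      # derived average per clone

print(f"Logical data written: ~{logical_gb:,.0f} GB")        # ~3,519 GB
print(f"Average per linked clone: ~{per_desktop_gb:.1f} GB")  # ~3.4 GB
```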

I’m building a physical server for my current client – and not one that runs a hypervisor. It feels weird. I’ve been such a pro-virtualization guy for so long that the last server I popped a Windows Server installation disk into was destined to be a SQL supercomputer, maybe 24+ months ago. Oh well.

For the record, while the task is pretty specialized, I’m pretty sure it could be virtualized. The limiting factor is that the server requires a pretty huge PCIe card, and the client is running Cisco UCS blades that can’t accommodate it.

Speaking of UCS: I’m not more than entry-level skilled in the ways of Cisco UCS hardware – the team at Varrow has some UCS superstars I rely on to get the hardware set up right. I’m going to lean on them a bit as I learn, because my current client is running only one or two NICs in each blade for ESXi (I would expect at least six: two for management, two for vMotion, and two for VM traffic) and no QoS. I want to take care of those issues before I leave, or there will be network bottlenecking as they grow to their intended scale. Infrastructure plans and designs are important! There’s a quick sketch below of how I’d audit the uplink layout across hosts.
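As a rough illustration of the kind of check I mean, here’s a minimal sketch using pyVmomi (VMware’s Python SDK) that lists the physical NICs each ESXi host presents and flags hosts that fall short of the uplink count I’d want. The vCenter hostname, credentials, and the six-uplink target are placeholders and assumptions for illustration – not the client’s actual configuration.

```python
# Minimal pyVmomi sketch: audit how many physical NICs each ESXi host has.
# The connection details below are placeholders, not a real environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

EXPECTED_UPLINKS = 6  # 2x management, 2x vMotion, 2x VM traffic

ctx = ssl._create_unverified_context()  # lab only; use valid certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    host_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in host_view.view:
        pnics = [nic.device for nic in host.config.network.pnic]
        status = "OK" if len(pnics) >= EXPECTED_UPLINKS else "UNDERSIZED"
        print(f"{host.name}: {len(pnics)} physical NICs "
              f"({', '.join(pnics)}) - {status}")
finally:
    Disconnect(si)
```

This only counts vmnics per host; how they’re actually carved up between management, vMotion, and VM traffic (and whether QoS is applied upstream in UCS) still has to be checked in the service profiles.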
