
VMware Horizon View: Things I wish I knew earlier


In working with VMware Horizon View over the years and through many different versions, we've often stumbled across problems or bugs that were worth buckling down and resolving because the rest of the release was otherwise stable.  4.6 was a great release, 5.1 was the next version we stayed on, and now it looks like 6.1 will be around for a while.  Fixes for problems like USB ports not redirecting, or "source busy" errors when a console session has been left logged in, have been very version specific.  However, across all of the versions of View we've used, there are a few things I wish I had known sooner that apply to any of them.

Linked Clones can't be moved between datastores without refreshing or expanding to a full virtual machine

Moving a linked clone pool in the View Admin console will automatically cause each Virtual Desktop to refresh, and there isn't any other way to move them.  If you absolutely need to move a Linked Clone Virtual Desktop's storage for some reason, you can expand it into a full stand-alone Virtual Desktop and add it to a separate pool.

PCoIP has the potential to use tons of bandwidth – lock down per-client bandwidth limits

I've witnessed a full-screen video over PCoIP completely saturate a 50 Mbps circuit (and the video was still a little choppy).  No matter how small a remote office is or how much bandwidth is available, we always implement traffic shaping rules to prevent a single user from becoming an internet hog.  I usually start at 40% of the circuit as the maximum bandwidth per device for offices of 15 users or less.  That way, two saturated clients still fit within the link, and it takes three devices to overload the internet connection.  This change has dramatically reduced the number of "slowness" complaints from users of our Virtual Desktops.
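If you'd rather enforce the cap at the protocol level instead of (or in addition to) the network edge, the PCoIP session variables exposed through the View GPO templates can do a similar job.  A minimal sketch, assuming the standard Teradici policy registry path and value name (worth verifying against the GPO template shipped with your View version) and a 20 Mbps cap – 40% of a 50 Mbps circuit – expressed in kilobits per second:

reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults" /v pcoip.max_link_rate /t REG_DWORD /d 20000 /f

Pushed out via GPO this is the "Configure the maximum PCoIP session bandwidth" setting.  A traffic shaper at the router still catches non-PCoIP traffic, which is why we do both.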

If a user resets their Virtual Desktop while it's starting up, it can send a Windows 7 machine into startup repair

We chased this issue for quite a while.  We would come across a Virtual Desktop where the user reported that they could not log in, and the machine was stuck in a startup repair loop.  What was happening: a user would restart their Virtual Desktop for whatever reason and then try to log back on right away.  Getting an error message along the lines of "no sources available", they would click OK and then be taken to a screen on their Teradici thin client with a "Reset VM" button.  The user would click the button, sending a reset command for their dedicated Virtual Desktop through the View Connection Server to the vSphere server, and the virtual machine would restart mid-boot.  The next time it came up, it would go into startup repair.  Once we figured out what was happening, the issue was easily resolved with the commands listed below.

From an elevated Command Prompt (cmd – right click – run as administrator):

bcdedit /set {default} bootstatuspolicy ignoreallfailures

bcdedit /enum
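
The first command tells Windows 7 to ignore boot failures and continue booting normally instead of dropping into Startup Repair; the second dumps the boot configuration so you can confirm the bootstatuspolicy entry took effect on the {default} loader.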


Why DevOps?


Running a managed IT service like ours means we generate a tremendous amount of logging and diagnostic data.  Most recently we've been using a fantastic little service called Papertrail to collect and present those logs in a searchable way.  As our service has grown, our needs have grown as well, and now we're looking for new ways of making those logs useful to our ops team using more sophisticated analytics and heuristics.  Enter the ELK stack (Elasticsearch, Logstash, Kibana).  ELK has gained tremendous popularity in recent years for a variety of reasons; however, we'll save that for another post.  This post is about DevOps, and why it makes sense for even small organizations or individuals to learn.  "DevOps" is a huge and hotly contested term, but loosely speaking it is the idea that you can treat infrastructure operations ("ops", or "systems administration") the way a developer treats code, using many of the same tools and patterns.  We'll leave out the sociological and team management aspects for this post and focus on its benefits to the traditional systems administrator.

I started deploying our new ELK stack in my spare time as a little side project, and made the decision early on to use Docker (www.docker.io) to containerize the applications.  Docker, if you're not familiar with it, is presently the poster child for the entire DevOps movement.  As I was working on this, I ran into a snag where I didn't fully understand how ports within the containers are mapped through to the host operating system.  This caused several hours of grumbles, until finally Matt Horton, another member of our ops team, came over to ask me what was wrong.  As I explained, I mentioned that I was using Docker for the deployment.  He said, "Oh, why didn't you just stand up the services on a normal server?  Why are you using Docker for this?"
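The snag itself, for what it's worth, turned out to be simple: a port inside a container isn't reachable from the host unless you explicitly publish it.  A minimal sketch, assuming the stock kibana image from Docker Hub and its default port:

# Publish container port 5601 (Kibana's default) on host port 5601;
# without -p, the service is reachable only on Docker's internal network.
docker run -d --name kibana -p 5601:5601 kibana

# List the container's port mappings to verify.
docker port kibana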

His "why Docker?" is a fantastic question.  At the time, the best I could manage was "well, not using Docker wouldn't be very DevOpsy(tm), would it?"

But it got me thinking.  I've got the same feeling about the DevOps methodology that I had about server virtualization back in 2004, and I expect DevOps (regardless of the specific tool sets used) will have an even more profound effect on the way IT is delivered in the coming decades.  But that feeling isn't a good enough answer to the question: why was I using Docker for something that could just as easily have been installed directly into the OS?  Why did I think a DevOps approach to this problem was right when it would have been faster and simpler to just install and configure the software by hand?

  1. Because I knew I would screw it up the first time I tried to get things deployed.  Having never deployed an ELK stack before, I knew that inevitably, after I got it set up the first time, I'd want to do it again, better.  Then again, even better.  The reality of IT operations is that most of our work is iterative; that is, we don't necessarily know the right way to do something until we've done it several different ways.  Using Docker and a DevOps methodology means that my environment is built from scratch every time I run it, so all I need to do to tweak the way I do something is change the file that specifies how the environment is to be built.  This gives you an incredibly powerful tool to rapidly optimize when doing something you haven't done before.
  2. It was incredibly easy to get started.  One of the beautiful aspects of the Docker ecosystem in particular is the extensive library of pre-built Docker images you can work from.  Getting started with the ELK stack was as easy as cloning a git repository from someone who had done the work already (if you are interested, this is what we used as a starting point for this project).  And per my next point, it was very easy to understand each step of building a functional ELK environment simply by examining the Dockerfile (see the sketch after this list), rather than having to read a bunch of individual how-to documents and synthesize them into something that would ultimately be largely irreproducible.
  3. Your projects become self-documenting.  One of the most important things I learned early in my career was the importance of good documentation.  Checklists and notes all have tremendous value when supporting complex systems.  DevOps does away with the need for much of the tactical-level documentation ops is responsible for, since the tools generally use human- and machine-readable lists of instructions to define what should be done.
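To make points 2 and 3 concrete, here is a minimal, hypothetical Dockerfile of the sort you'd find in one of those starter repositories; the base image tag, config path, and file names are illustrative, not the actual repository we used:

# Start from the official Elasticsearch image rather than hand-installing
# Java, packages, and the service.
FROM elasticsearch:1.7

# Layer our own cluster configuration on top; every customization is
# captured in this file instead of in someone's memory.
COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml

# Document the HTTP and transport ports the service listens on
# (published at run time with -p).
EXPOSE 9200 9300

Rebuilding the whole environment after a tweak is then a single command (something like docker build -t ourco/elasticsearch . – the tag is just an example), which is exactly what makes the iteration in point 1 so cheap.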

Frankly, it's also more fun.  Hopefully most of us do this job because we love learning new things, designing systems and watching them work, and figuring out how to use those systems to solve problems for people.  Doing IT the old-fashioned way was fun too, but being able to iterate on your designs this quickly really takes it to another level.