The Garden of Business IT – Growing Your Own or Farming IT Out


We live in a world of outsourcing.  Can you think of any tasks you outsource? At home, do you groom your own pets? Does your energy come from solar? Do you raise your own chickens for eggs and chicken parmigiana? Do you grow all of your own vegetables? When it comes to technology, do you build your own smartphone? What about computer maintenance?

The popularity of outsourcing our everyday needs has been on the rise ever since the industrial age. Today, we live in the age of technology, and new outsourcing solutions evolve as that technology expands. Why do most of us outsource the production of our vegetables to third parties like farmers? And why should you outsource your business’ IT needs? Outsourcing has its pros and cons, and in this article we will look at some factors you should consider. To illustrate the point, imagine planting a garden.

So you want to grow your own garden.

Great! Here are some of the benefits of having your own little garden:

  • You get to plant the seeds and watch them grow
  • You are literally able to taste the fruits of your labor
  • You can observe all the processes that the plant goes through
  • You feel proud when you serve a home grown salad on your dinner table

Everything seems good so far. Why would someone buy their vegetables from the store? Here are some obvious and not so obvious reasons:

  • You can find food in several convenient locations like farmers’ markets and stores
  • You avoid the expense of buying gardening materials like seeds, soil, and tools
  • You can find a wide variety and abundant supply of vegetables
  • You might not have the knowledge, the time or energy resources to grow your own
  • You have no way of guaranteeing that your garden will yield the results you want
  • You may not have enough room for gardening on the scale you’d like, if at all
  • If you run into a pest or fungus problem you don’t know how to handle, it may destroy your garden

What does this have in common with business IT? Well, for starters, planting your own garden is like handling your own IT in house, and it requires some of the same planning skills.

  • You get to build your own IT systems and management from the ground up (no pun intended)
  • You work out the kinks in your own systems and reap the rewards of your own hard work
  • You are able to directly oversee the hardware, servers, and maintenance of your IT
  • You feel proud when you grow from your mistakes and learn new things

What about outsourcing your business IT? Outsourced IT is alive and flourishing in many industries. Why?

  • With outsourced IT, you get state-of-the-art security; your data is housed in professionally managed data centers with advanced security measures
  • Money spent on hardware and on maintaining your infrastructure can be dramatically reduced by using cloud-based systems
  • The time you would otherwise spend managing your IT can be used to focus on growing your business
  • Scaling your hardware and software to accommodate growth is a seamless process
  • You have a team of specialists who are expert at keeping your systems safe, secure, and efficient

Given all the pros and cons of keeping your produce or IT in house, what would you choose? Both are viable options. The real question is, what is best for your business? We recommend that you outsource your IT if your needs outgrow your ability to manage them in house. We encourage you to research your options further before making any big decisions. Think of the prospect of managing your own IT needs in house in the same way that you would consider planting a garden to meet all your food needs.


vSphere Appliance 6 update 1b issues – Regenerate your certificates!


Recently we decided to upgrade one of our clusters to the latest vSphere/ESXi 6.0 U1b.  While it’s early in the release cycle to apply these fixes to a production system, we’ve been having some issues with this cluster we were hoping the upgrade would resolve.  This cluster has 4 hosts running VSAN for storage, and is primarily used for VDI.  We use App Volumes pretty extensively.  This cluster uses the certs that were generated when the appliance was deployed.

Last Friday night, we mounted the ISO and ran the update.  Following the update, things seemed fine for a while, other than an apparently cosmetic “This node cannot communicate with all the other nodes in the VSAN cluster” warning in the VI native client, which is documented in this VSAN Forum discussion.  However, after approximately 2 hours online, the vpxd service would end up unresponsive and the VI and Web Client tools would hang.  It would intermittently come back, but you could only click on one or two things in the VI Client before it became unresponsive again.  VCS servers would be unable to execute power operations.  If we rebooted the VCSA, it would be responsive for a few hours before hanging again.  None of this behavior presented itself before the upgrade to U1b.

We opened an SR with VMware support, but after hours of looking at logs the best they could suggest was rebuilding the appliance from scratch and re-importing our database from the broken appliance.  Ultimately, we started looking at SSO as the probable cause of our issues and noted that our SSO logs appeared much larger than they should be.  At the moment the appliance became unresponsive, the SSO service started failing authentication requests.  Given that VMware’s solution was to rebuild the appliance, we decided we had little to lose by attempting to regenerate all of the certificates using the certificate-manager utility.  After giving that a go, the problem was resolved.  Our best guess is that one of the solution certificates was, to use the technical term, “borked,” and that U1b either has some new throttling in place or handles broken solution certificates differently than 6.0a.
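
If you want to try the same fix, this is roughly what the process looks like on a 6.0 VCSA; double-check the path and the menu numbering against your build, since they have shifted between releases. From an SSH session to the appliance:

/usr/lib/vmware-vmca/bin/certificate-manager

The utility is menu driven; the option we used is the one that resets all certificates and regenerates them from the built-in VMCA. It restarts the management services when it finishes, so plan for a short outage while it runs.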

We’ll continue troubleshooting with VMware to attempt to determine the underlying cause and will update this post if we learn more.  It seems as though the common wisdom was true, at least this time: “If thouest have performance issues with vSphere, SSO and certificates are thy cause.”

VMware Horizon View: Things I wish I knew earlier


In working with VMware Horizon View over the years and through many different versions, we’ve often stumbled across problems or bugs that were worth buckling down and trying to resolve because the rest of the release was otherwise stable.  4.6 was a great release.  5.1 was the next version that we stayed on, and now it looks like 6.1 will be around for a while.  Finding solutions for problems like USB ports not redirecting, or “sources busy” errors when the console has been left logged in, has been very version specific.  However, through all of the versions of View that we’ve used, there are a few things that I wish I had known sooner that apply to any of them.

Linked Clones can’t be moved between datastores without refreshing or expanding to a full virtual machine

Moving a linked clone pool in the View Admin console will automatically cause each Virtual Desktop to refresh, and there isn’t any other way to move them.  If you absolutely need to retarget a Linked Clone Virtual Desktop’s storage for some reason, you can blow it up into a full stand-alone Virtual Desktop and add it to a separate pool.

PCoIP has the potential to use tons of bandwidth – lock down per-client bandwidth limits

I’ve witnessed full-screen video over PCoIP completely saturate a 50 Mbps circuit (and the video was still a little choppy).  No matter how small a remote office is or how much bandwidth is available, we always implement traffic shaping rules to prevent a single user from becoming an internet hog.  I usually start at a maximum of 40% of the circuit’s bandwidth per device for offices of 15 users or fewer.  That way, it takes 3 devices to overload the internet connection.  This change has dramatically reduced the number of “slowness” complaints from users of our Virtual Desktops.
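
As a rough worked example using the rule of thumb above: on a 50 Mbps circuit, 40% per device comes out to about a 20 Mbps cap per session. One way to enforce that on the desktop side, rather than on your network gear, is the “Maximum PCoIP session bandwidth” setting in the PCoIP session variables GPO template, which writes a kilobits-per-second value to the registry on the Virtual Desktop. Treat this as a sketch of an alternative approach, not a description of our shaping setup, and verify the policy and registry path against your Horizon version’s templates before relying on it:

reg add "HKLM\SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin" /v pcoip.max_link_rate /t REG_DWORD /d 20000 /f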

If a user resets their Virtual Desktop while it’s starting up, it can send a Windows 7 machine into startup repair

We were chasing this issue down for quite a while.  We would come across a Virtual Desktop where the user reported that they could not log in and it was stuck in a startup repair loop.  What was happening was that a user would restart their Virtual Desktop for whatever reason and then try to log back on right away.  Getting an error message along the lines of “no sources available,” they would click OK and then be taken to a screen on their Teradici thin client with a “Reset VM” button.  The user would click the button, sending the command to reset their dedicated Virtual Desktop through the View Connection Server to the vSphere server, and the virtual machine would restart mid-boot.  The next time it came up, it would go into startup repair.  This issue was easily resolved once we figured out what was happening, using the commands listed below.

From an elevated Command Prompt (cmd – right click – run as administrator)

bcdedit /set {default} bootstatuspolicy ignoreallfailures

bcdedit /enum
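
The first command tells the Windows boot loader to ignore failed boots instead of automatically launching Startup Repair; bcdedit /enum simply lists the current boot configuration so you can confirm that bootstatuspolicy now shows ignoreallfailures.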


Why DevOps?


Running a managed IT service like ours means we generate a tremendous amount of logging and diagnostic data.  Most recently we’ve been using a fantastic little service called Papertrail to collect and present those logs in a searchable way.  As our service has grown, our needs have grown as well, and now we’re looking for new ways of making those logs useful to our ops team using more sophisticated analytics and heuristics.  Enter the ELK stack (Elasticsearch, Logstash, Kibana).  ELK has gained tremendous popularity in recent years for a variety of reasons; however, we’ll save that for another post.  This post is about DevOps, and why it makes sense for even small organizations or individuals to learn it.  “DevOps” is a huge and hotly contested term, but loosely speaking it is the idea that you can treat infrastructure operations (“ops,” or “systems administration”) the way a developer treats code, using many of the same tools and patterns.  We’ll leave out the sociological and team management aspects for this post and focus on its benefits to the traditional systems administrator.

I started deploying our new ELK stack in my spare time as a little side project, and made the decision early on to use Docker (www.docker.io) to containerize the applications.  Docker, if you’re not familiar with it, is presently the poster child for the entire DevOps movement.  As I was working on this, I ran into a snag where I didn’t fully understand how ports within the containers are mapped through to the host operating system.  This caused several hours of grumbles, until finally Matt Horton, another member of our ops team, came over to ask me what was wrong.  As I explained, I mentioned that I was using Docker for the deployment.  He said, “Oh, why didn’t you just stand up the services on a normal server?  Why are you using Docker for this?”

Which is a fantastic question.  At the time the best I could manage was “well, not using Docker wouldn’t be very Devopsy(tm), would it?”

But it got me thinking.  I’ve got the same feeling about the DevOps methodology that I had about server virtualization back in 2004, and I expect DevOps (regardless of the specific tool sets used) will have an even more profound effect on the way IT is delivered in the coming decades.  But that feeling isn’t a good enough answer to the question: why was I using Docker for something that could just as easily have been installed directly into the OS?  Why did I think a DevOps approach to this problem was worth it when it would have been faster and simpler to just install and configure the software by hand?

  1. Because I knew I would screw it up the first time I tried to get things deployed. Having never deployed an ELK stack before, I knew that inevitably, after I got it set up the first time, I’d want to do it again, better. Then again, even better. The reality of IT operations is that most of our work is iterative; that is, we don’t necessarily know the right way to do something until we’ve done it several different ways. Using Docker and a DevOps methodology means that my environment is built from scratch every time I run it, so all I need to do to tweak the way I do something is change the file that specifies how the environment is to be built. This gives you an incredibly powerful tool to rapidly optimize when doing something you haven’t done before.
  2. It was incredibly easy to get started. One of the beautiful aspects of the Docker ecosystem in particular is the extensive library of pre-built Docker images you can work from. Getting started with the ELK stack was as easy as cloning a git repository from someone who had done the work already (if you are interested, this is what we used as a starting point for this project; there’s a short sketch of the idea after this list). And per my next point, it was very easy to understand each step of building a functional ELK environment simply by examining the Dockerfile, rather than having to read a bunch of individual how-to documents and synthesize them into something that would ultimately be largely irreproducible.
  3. Your projects become self-documenting. One of the most important things I learned early in my career was the importance of good documentation. Checklists and notes all have tremendous value when supporting complex systems. DevOps does away with the necessity for much of the tactical-level documentation ops is responsible for, as the tools generally use lists of human- and machine-readable instructions to define what should be done.
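
To make point 2 concrete, and to close the loop on the port-mapping snag mentioned earlier, standing up a pre-built ELK image really is only a couple of commands. The image name below (sebp/elk, a popular community ELK image) is illustrative rather than the exact starting point we used:

# Ports a container listens on are not reachable from the host unless you
# publish them with -p <host_port>:<container_port> when you run it.
# 5601 = Kibana UI, 9200 = Elasticsearch REST API, 5044 = Logstash Beats input.
docker pull sebp/elk
docker run -d --name elk -p 5601:5601 -p 9200:9200 -p 5044:5044 sebp/elk

Miss one of those -p flags and the service is alive inside the container but invisible from the host, which is exactly the snag that cost me those hours of grumbles.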

Frankly, it’s more fun.  Hopefully most of us do this job because we love learning new things, designing systems and watching them work, and figuring out how to use those systems to solve problems for people.  While doing IT the old-fashioned way was also fun, being able to iterate on your designs so quickly really takes it to another level.

Our flexible working future


Great article by Nokia on how the realities and necessities of the workplace are changing:

Home work:  5 reasons why flexible working is the future

It hit close to home not only because of Symbio’s product, but because my wife and I both made the transition to working primarily at home with the birth of our first daughter. Being able to access all our corporate systems as if we were in the office has made it possible for both of us to be far more present while our children are young without giving up the fulfillment or income of our careers.

Technology like Virtual IT has made it easier than ever to work flexibly, while still maintaining the collaboration and cohesion of an office workspace.