Sometimes it is not the cost of having something that is important. Sometimes the cost of not having it is far greater.
Back in the dim and distant past I was a server support engineer. Once a month I would have my week of being on call for the organization, a global company with around 5,000 staff in the UK, where I lived at the time. Every on-call week I would get the midnight call to deal with a hung server in a remote office. One particular server would hang every <REDACTED> week. The problem was that this server had no remote management capability; the business unit that bought it saved $500 by not buying the card. So at 1 am I would get in my car and drive an hour to the office, an office I never visited during daylight. Once there, security would escort me to the datacenter and I’d power cycle the server. Then I’d drive an hour home and go back to bed. Lots of other servers might hang during the week, but most of them had iLO, DRAC or RSA cards that allowed me to reset them without leaving home. Paying me overtime (lucky contractor) to reset this one server cost the company $200 per month for two years, all to save $500.
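A quick back-of-the-envelope calculation using the figures above makes the point (a sketch, not an invoice):

```python
# The $500 "saving" from skipping the remote management card,
# versus the overtime paid for the monthly midnight reset.
card_cost = 500            # one-off cost of the management card that wasn't bought
overtime_per_month = 200   # overtime for each 1 am drive-and-reset
months = 24                # two years of monthly callouts

overtime_total = overtime_per_month * months
print(overtime_total)              # 4800
print(overtime_total - card_cost)  # 4300 spent, net, to "save" 500
```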
A recent presentation started me thinking about why we document our environments. We all hate writing as-built documentation, and we seldom update it as things change, so why do it? The fundamental reason is so that when things aren’t working we know what “working” was. Then we compare the current state to the documented working state and remediate until all is working again. The problem is that the documented working state is seldom the actual last working state, just the last documented one. The other problem is identifying the current, not-working, state in sufficient detail.
Both these issues could be addressed if we could gather the entire state of the environment using an automated tool. We gather the actual working state on a schedule, ensuring we have a frequent picture of the real state. Then, when something isn’t working, we capture the current state again and compare it to the last known working state, or the last few captured states. Provided we have gathered the right data, it should be possible to identify the causal change, or at least be certain of the causal area.
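The capture-and-compare idea can be sketched in a few lines. Assume some process has already gathered whatever facts matter for your environment into a dictionary (NTP servers, DNS, MTU, firmware versions and so on); the snapshot layout and the example keys here are mine, purely illustrative, not any particular tool’s format:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")  # illustrative location for saved states


def save_snapshot(state: dict) -> Path:
    """Record the current environment state with a timestamp (the scheduled capture)."""
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    name = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S") + ".json"
    path = SNAPSHOT_DIR / name
    path.write_text(json.dumps(state, indent=2, sort_keys=True))
    return path


def diff_states(known_good: dict, current: dict) -> dict:
    """Return what changed between the last known-good state and now."""
    changed = {}
    for key in known_good.keys() | current.keys():
        before, after = known_good.get(key), current.get(key)
        if before != after:
            changed[key] = {"was": before, "now": after}
    return changed


# Example: the NTP server changed between the working and broken captures.
working = {"ntp": "10.0.0.1", "dns": "10.0.0.2", "mtu": 1500}
broken = {"ntp": "10.9.9.9", "dns": "10.0.0.2", "mtu": 1500}
print(diff_states(working, broken))  # {'ntp': {'was': '10.0.0.1', 'now': '10.9.9.9'}}
```

The diff output points straight at the causal area, which is exactly the comparison we do by hand against stale as-built documents today.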
This train of thought was triggered by a presentation at the Tech Field Day Extra at Cisco Live! last week.
Disclosure: Tech Field Day (TFD) have paid for my flights to attend events in June. TFD have not requested this blog post nor will they see it before it is published. TFD also paid for my lunch before the briefing at Cisco Live! and at least one bagel at the event.
The presentation was by NetBrain, and showed their product dynamically building up a network map as part of a troubleshooting activity. Their application builds rich maps of networks by retrieving real-time configuration from the devices. A huge amount of information was pulled from the network devices and displayed in a format that looks a lot like a Visio diagram. It was this immediate and focussed diagramming that had me thinking about why we keep documentation. I imagine that if I had to manage a moderately complex network I would love the sort of easy access to current state that NetBrain delivers. The delegates at the presentation who do look after large networks seemed pretty impressed. As they write about NetBrain you will see articles appear on this page, which also has links to videos of the NetBrain presentation.
Today held some exciting news for me: my first course was published on Pluralsight. My course is about the new features of Horizon View 6.0, so it covers things like the upgrade process from View 5 as well as the vastly improved support for RDSH, among a few other areas.
You may recall that last July I made a big change in my career, as part of a change I see in the way people learn. Producing courses for Pluralsight is a part of that change. Businesses are expecting their staff to manage a lot of their own training and to do it on their own time. The Pluralsight model of a monthly subscription and access to vast amounts of training is a great help for staff who must manage their own training.
If you’re a Pluralsight subscriber and want to know more about View 6 take a look at my course. If you aren’t a subscriber then there are trial options to give you a taste before you try the full subscription.
As you probably know, I spend a lot of time travelling to interesting cities and making videos of interesting people in the IT infrastructure business. This month it was my first trip to Vancouver, where the OpenStack Summit was held last week.
With my amazing crew of vBrownBag team members and a lot of OpenStack community members we made 85 TechTalk videos over four days. You can find the schedule and links to all the videos here. Alternatively go straight to the playlist of all the videos on YouTube.
I think that inline primary data deduplication is going to be a standard feature of storage arrays in the near(ish) future, even for storing transactional workloads like virtual machines and tier-1 applications. As many storage experts will tell you, the best IO is the one you don’t do. Once you have a copy of a unique data block, never needing to write that block to storage again avoids a lot of IO. Inline deduplication means only writing unique data to disk once.
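A toy sketch shows the principle. Real arrays work on fixed-size blocks with hardware-assisted fingerprinting, but the core idea is just “hash the block before writing; if you have seen that hash, store a reference instead of the data”:

```python
import hashlib


class DedupStore:
    """Toy inline deduplicating block store: each unique block is written once."""

    def __init__(self):
        self.blocks = {}  # fingerprint -> block data (the only copy on "disk")
        self.writes = 0   # how many blocks actually hit storage

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.blocks:  # unique block: one real write
            self.blocks[fp] = block
            self.writes += 1
        return fp  # callers keep the fingerprint, not another copy of the data


store = DedupStore()
# Four logical writes, two of them duplicates of the first block.
refs = [store.write(b) for b in [b"os-image", b"os-image", b"user-data", b"os-image"]]
print(store.writes)  # 2 -- four logical writes became two physical ones
```

Three identical OS-image blocks cost one physical write; that avoided IO is exactly the win the storage experts are pointing at.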
I spent a lot of April working on AutoLab and with Ravello Systems to get AutoLab available from public cloud.
I have just released AutoLab version 2.6, which has support for vSphere 6. This is the fastest I’ve released support for a new release of vSphere, made possible partly by VMware finally making automated vCenter installation easy and partly by the support of Ravello. The documentation for automating the installation of vCenter on Windows is in this pdf. Essentially you create a JSON file, either by recording an install or by editing examples, and pass that file to the installer. Another nice feature is that the installer does a pre-flight check, and if the check fails it leaves a log of the fatal error. This error log helped me identify a fault in the DC build that was preventing vCenter from installing.
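To give a feel for the shape of such an answer file, here is a sketch that generates one. Note that the key names below are placeholders of mine, not VMware’s actual schema; the real keys come from recording an install or from the examples in the pdf:

```python
import json

# Illustrative only: every key and value below is a hypothetical placeholder,
# standing in for whatever the recorded install or VMware's examples produce.
install_spec = {
    "vcenter": {
        "fqdn": "vc.lab.local",               # hypothetical lab hostname
        "ssoDomain": "vsphere.local",
        "installDir": "C:\\Program Files\\VMware",
    }
}

with open("vcenter-install.json", "w") as f:
    json.dump(install_spec, f, indent=2)
# The resulting JSON file is what gets passed to the vCenter installer,
# which pre-flight checks it before doing the unattended install.
```

The attraction is that the file is plain text: AutoLab can template it per build, which is what made the fast vSphere 6 turnaround possible.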
Since vSphere 6 doubles the minimum RAM required for AutoLab, compared to vSphere 5.0, you may not be able to reuse your existing lab host. This is where Ravello can be really helpful. You can use their platform to deploy AutoLab on top of public cloud from either Amazon or Google. Rather than buying an AutoLab host you can rent one from Ravello. You still get all the rebuildability of AutoLab, but you need only pay for the resources while you are running AutoLab. In my tests it cost under $3 per hour to run an AutoLab; if you only need a few hundred hours of AutoLab this is going to be a great option. Also, if you need to scale up your AutoLab for a while, you can simply pay for the extra resources while you need them, scaling back when the need is over. The final Ravello benefit may be unique to me: I often want to run multiple AutoLab instances and find I don’t have enough hardware, so again renting the cloud hardware for the duration of the project is a great option.
It has finally happened, I will be in Wellington when the Wellington VMUG is meeting.
I will be speaking about progressing your career in IT.
You can register here: http://www.vmug.com/e/in/eid=1868
Joining me presenting is the unstoppable Rawlinson Rivera talking about VSAN.
I really should have posted this a week or two ago, the meeting is tomorrow!!
It is getting to the exciting anticipation stage. Tomorrow I leave home for the first leg of my Hyperconverged World tour with SimpliVity. I will be speaking about how we came to need hyperconverged solutions. I will be joined on stage by Ron Wang, a Solution Architect, who will demonstrate the SimpliVity solution. We will also have Scott Morris, SimpliVity’s VP for Asia Pacific and Japan, to close the proceedings and draw the recipient of the prize, an iPad Mini.
There are still places available, so click your city below to register:
I’m looking forward to this whirlwind tour, hopefully I can catch up with a few of my friends in the region.
In December a few other bloggers and I spent a couple of days in Boston as guests of SimpliVity. We spent a while talking about their products now and what we thought would be good for the future. I enjoyed my time there, both the very cool product and the great people. Now I’m thinking of those same people’s homes under a few feet of snow while I’m enjoying New Zealand’s summer.
A very interesting part of the SimpliVity solution is a custom add-in card. This has my electronics design background excited; gate arrays and special-purpose hardware are fun stuff. If you thought that HyperConverged meant that hardware wasn’t important, then the SimpliVity architecture might be a surprise.
I’m now delighted to be joining SimpliVity for a HyperConverged Global Tour, or at least the part that is in my region. I will be talking about the why and what of HyperConvergence. I’ll be joined by people who know far more about SimpliVity to talk about their products.
I’m looking forward to my eleven city tour with SimpliVity, particularly starting with Manila and Jakarta in March. I’ve never been to either of these cities and visiting new cities is always interesting. The tour continues with KL and Singapore. Later the tour will cover major cities in Australia and New Zealand from April into May. Book your seat at the presentation here.
For those of you outside the Asia Pacific region there will be tours in the US and Europe. Scott Lowe (not the one who works at NSX) will be speaking in 20 cities in the US. In Europe the language diversity means there will be several community people presenting; Chris Evans will cover the English-speaking countries. It’s a shame I won’t get to go to their regions, these guys are worth listening to.
As I deployed Workspace I was a little concerned that the only administrator user ID is the one that you use to bind Workspace to AD. Naturally this is a service account and has a complex password. Remembering the service account password in order to manage the portal is unacceptable. I also couldn’t see anything in the administration guide about how to make another user an administrator. Happily, a VMware support engineer showed me a very easy way to promote a user to an administrator of the portal.