I wrote before about why I decided to put an application into a Docker container. Today I’m going to cover a bit more of the how. I originally wrote the application fast and dirty. It was my first serious use of Python and my first distributed system application. Like many coders, I used what I knew rather than losing time learning lots of new things at the same time.
I am back at the front of the classroom, although in this case it is an online classroom. Starting in February I will be teaching a course on “Operations with VMware vSphere” for Safari Books Online. If you are a Safari subscriber, you can register on the site; the course is included in the Safari subscription, with no additional fee to attend.
Of course, if you read this blog you are unlikely to be the audience for this course. I will be covering using VMs for applications rather than building and operating the vSphere infrastructure. The course is scheduled to run every two months, provided there is audience demand. So please promote the course to your application teams; it will help them be better citizens of your virtual infrastructure.
Next week is another great Tech Field Day event for me. This time it is Tech Field Day 13 in Austin. It seems that I only attend odd-numbered TFD events: TFD9, VFD3, VFD5, TFD11, SFD11, and now TFD13. I don’t think there is any conspiracy here; it just seems to be how it works out.
It has been a few months since I last got an update for SimpliVity, so the launch of their OmniStack V3.61 was a good reason for an update.
• There are some nice manageability updates in this version. You can now take an existing SimpliVity cluster and convert it into a stretch cluster. The SimpliVity Data Virtualization Platform will redistribute the VM data copies across the nodes, then advise you when it is safe to relocate half the nodes to another datacenter.
• The deployment and upgrade processes have been simplified and made more robust. They now use what I call a “Trust and Validate” approach to verify all the configuration settings provided by the customer.
• I wasn’t surprised to hear that VDI has been a huge growth area for SimpliVity. Their first VDI Reference Architecture had some amazing numbers for user density, which led to a great cost per user desktop. This release adds a VDI license model for OmniStack: a discounted license that disables the backup and replication features, which are seldom used in VDI deployments. SimpliVity has also published (or is about to publish) Reference Architectures for a few VDI platforms: Epic healthcare, Workspot, and Citrix XenDesktop 7.11.
• One aspect of SimpliVity that I like is the use of compute nodes: hypervisor hosts that consume the SimpliVity datastores, essentially delivering more compute capacity to the cluster. SimpliVity supports compute nodes as a normal part of a deployment, rather than just as part of a migration process.
• The other news from SimpliVity is that their partnership with Huawei has been very successful. Huawei’s name is not well accepted in the US, but the rest of the world runs a lot of Huawei equipment, particularly in telco businesses. Expect to see more models of Huawei servers supported as OmniStack nodes in the future.
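The “Trust and Validate” approach mentioned above can be illustrated with a short sketch. This is a generic, hypothetical example of validating customer-supplied settings before a deployment proceeds, not SimpliVity’s actual implementation; the setting names and checks are invented for illustration.

```python
# A generic sketch of "trust and validate": accept the customer's settings,
# then check each one before deployment proceeds. All names are illustrative.

def validate_settings(settings, checks):
    """Run every check against the supplied settings.

    Returns a list of failure messages; an empty list means it is
    safe to continue with the deployment or upgrade.
    """
    failures = []
    for name, check in checks.items():
        value = settings.get(name)
        if not check(value):
            failures.append(f"{name}={value!r} failed validation")
    return failures

# Hypothetical checks a deployment tool might run before touching hardware.
checks = {
    "dns_server": lambda v: isinstance(v, str) and v.count(".") == 3,
    "cluster_size": lambda v: isinstance(v, int) and v >= 2,
}

failures = validate_settings({"dns_server": "10.0.0.2", "cluster_size": 1}, checks)
```

The key point is that the tool trusts the customer’s input enough to accept it, but validates every value before acting, which is what makes the process more robust.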
Disclosure: in 2015 and 2016, SimpliVity was a customer of mine. I have written training materials and white papers for them, and I have many friends at SimpliVity. SimpliVity did not solicit or have any control over this blog post (or any other on demitasse.co.nz).
Each month I write articles that are published on other sites, which is a large reason why I don’t write so much on this blog. It looks like I only got one blog post here in December, but I had at least seven more articles published elsewhere. I write most weeks on TVP strategy; this writing is fun as there is minimal editorial control of the content, and I write about what interests me.
Over on TechTarget, the content is more controlled. I work with TechTarget editors to agree on topics. Most of my editors are happy if I go somewhere a little different with a topic, so I still get some control.
I’ve been reading and writing about Docker for a while. Recently I started using Docker for a project; getting hands-on to solve real problems is my favorite way to learn. Like most infrastructure people, I don’t consider myself a developer. Sure, I write a lot of scripts, but I know almost nothing about proper software development. For my workload generator project, I was using Ansible to manage the build of a bunch of Ubuntu machines. These machines exist simply to run some workload scripts. My first workload is a script written in Python; it does some file server access and emailing.
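A workload script of that shape could look something like the sketch below. This is not my actual script, just a minimal illustration of the two activities it performs: writing and reading files to exercise a file server, and building an email message. The `SMTP_HOST` value is a hypothetical placeholder; the send is skipped unless you point it at a real relay.

```python
import os
import smtplib
import tempfile
from email.message import EmailMessage

SMTP_HOST = None  # hypothetical placeholder; set to a real mail relay to send

def file_workload(directory, count=10, size=4096):
    """Write `count` files of random bytes, then read them all back.

    Returns the total bytes read, so the caller can sanity-check the run.
    """
    paths = []
    for i in range(count):
        path = os.path.join(directory, f"load_{i}.dat")
        with open(path, "wb") as f:
            f.write(os.urandom(size))
        paths.append(path)
    total = 0
    for path in paths:  # read phase, so the load has both writes and reads
        with open(path, "rb") as f:
            total += len(f.read())
    return total

def mail_workload(sender, recipient, body):
    """Build an email message; only send it if an SMTP relay is configured."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, "workload test"
    msg.set_content(body)
    if SMTP_HOST:
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)
    return msg

with tempfile.TemporaryDirectory() as d:
    written = file_workload(d, count=5, size=1024)
```

In practice the real script would loop these activities on a timer and point at a network share rather than a temporary directory, but the structure is the same.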
I last talked to SolarWinds at my first Tech Field Day event, way back in 2013. I remember being impressed with the depth of geekiness at the company; their people have deep math skills that help the products analyze the data they gather. Today’s briefing was with my friend Gina, who recently joined SolarWinds as part of the Server & Application Monitoring (SAM) product marketing team.

The first thing to know is that SolarWinds has a suite of products, in fact, three suites. The product set that we looked at was the one for deployment inside enterprises, with the overarching name of Orion. Orion provides a unified management console for the main enterprise products, of which SAM is one. SAM looks like a pretty good tool; it delivers lots of actionable information without getting too stuck in the mire of metrics.

There is an application template concept: a pre-built set of configuration for monitoring common applications. The templates allow customization but are based on a set of standard monitoring best practices for each application. Even better, you can share templates in the SolarWinds Thwack community.

I also like monitoring tools that make it easy to take corrective action. Rather than just informing you, they let you fix issues from within the monitoring tool. Simple actions like restarting services are built in, as are more complex actions like resizing VMs. Some actions can also be scheduled, so a VM can be resized during a maintenance window. I would like to see this closing of the loop become more automated: having the monitoring system restart services or migrate VMs automatically will speed up fault resolution or avoidance. Naturally, there will need to be rules around automated changes to VMs and applications, but in the end we must accept that machines can follow rules better than people. People still need to set the rules to suit the business needs.
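That idea of machines following rules that people set can be sketched in a few lines. This is a toy rule engine of my own invention, not the SolarWinds API: each rule pairs a condition on the collected metrics with a remediation action name, and the engine simply reports which actions the rules would trigger.

```python
# A toy sketch of rule-driven remediation (illustrative, not a real API):
# people define the rules, the machine applies them consistently.

def evaluate_rules(metrics, rules):
    """Return the names of remediation actions whose condition matched."""
    return [action for condition, action in rules if condition(metrics)]

# Hypothetical rules an operator might set to suit the business needs.
rules = [
    (lambda m: m.get("service_state") == "stopped", "restart_service"),
    (lambda m: m.get("mem_used_pct", 0) > 90, "schedule_vm_resize"),
]

triggered = evaluate_rules({"service_state": "stopped", "mem_used_pct": 95}, rules)
```

A real monitoring product would, of course, gate actions like `schedule_vm_resize` behind approvals and maintenance windows; the point is only that the rule evaluation itself is mechanical.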
The whole SolarWinds Orion suite looks really nice. I hope to make time to get some of the products deployed into my lab this week, and I may have more to say soon.
I’m working on a fun little project while I’m home for a few weeks. For a new vBrownBag project, I need to create a VM-based workload generator. What I want is a collection of VMs that generate some significant workload for a virtualized infrastructure. I also want to be able to have different workloads that exercise the infrastructure in different ways. I have a few design criteria for both the workload itself and the method of deploying the workload.
It is the unknown unknowns that are the scariest: those days when a server unexpectedly runs out of disk space. It usually happens to me because I have a server connected to my Dropbox account, with the folder that the vBrownBag crew uses to make TechTalks at conferences. Last week at the OpenStack summit we made 20GB of video, so my Dropbox folder grew by 20GB in three days. When this has happened in the past, my server stopped syncing with Dropbox to avoid completely filling the drive. As a result, my Mac and my VDI desktop didn’t have the same files, which gets in the way of my business workflows.
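Turning that unknown unknown into a known is mostly a matter of checking free space before the sync client hits its hard stop. A minimal sketch using Python’s standard library could look like this; the 10% threshold is an arbitrary choice of mine, not anything Dropbox enforces.

```python
import shutil

def free_space_pct(path="/"):
    """Percentage of the filesystem holding `path` that is still free."""
    usage = shutil.disk_usage(path)
    return 100 * usage.free / usage.total

# Warn well before the sync client stops; 10% is an arbitrary illustrative value.
THRESHOLD_PCT = 10

def check(path="/"):
    """Return "ok" or a warning string suitable for a cron job to email."""
    pct = free_space_pct(path)
    if pct < THRESHOLD_PCT:
        return f"WARNING: only {pct:.1f}% free on {path}"
    return "ok"
```

Run from cron (or a scheduled task) against the drive that holds the Dropbox folder, a check like this would have flagged the 20GB growth long before syncing stopped.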
I last talked to Atlantis Computing at Virtualization Field Day 3 in 2014, at the time that they were releasing their USX data platform. I read about the release of their HyperConverged platform, HyperScale, in 2015. HyperScale is a software HCI product that is delivered on top of a very restricted list of partner hardware. There were four hardware partners, and each essentially offered two all-flash configurations, one at 12TB and the other at 24TB of capacity per node. I missed this year’s announcement of a ROBO-scale version at 4TB and the addition of Dell as a partner.
One of the nice things about the Atlantis HCI is that you can integrate it with Atlantis USX on other hardware. This is a good way either to extend your existing hardware into your new HCI deployment or to migrate your VMs off your existing hardware and onto Atlantis HyperScale. This integration and migration capability is not what most HCI vendors want to talk about, and it is a real benefit of a software HCI product. Migrating onto an HCI platform can be very time-consuming and may have its own restrictions if you don’t already have 10GbE in use.
The new stuff that was in the briefing is not yet announced; expect to hear interesting things from Atlantis over the next month, maybe even at one of the conferences running next week. When that part becomes public, I may have more to say about Atlantis.