It has been a few months since I last got an update for SimpliVity, so the launch of their OmniStack V3.61 was a good reason for an update.
• There are some nice manageability updates in this version. You can now take an existing SimpliVity cluster and convert it into a stretch cluster. The SimpliVity Data Virtualization Platform will redistribute the VM data copies across the nodes, then advise you when it is safe to relocate half of the nodes to another datacenter.
• The deployment and upgrade processes have been simplified and made more robust. They now use what I call a “Trust and Validate” approach to verify all the configuration settings provided by the customer.
• I wasn’t surprised to hear that VDI has been a huge growth area for SimpliVity. Their first VDI Reference Architecture had some amazing numbers for user density, which led to a great cost per user desktop. In this release, they have added a VDI license model for OmniStack: a discounted license that disables the backup and replication features, which are seldom used in VDI deployments. SimpliVity has also published (or is about to publish) Reference Architectures for a few VDI platforms: Epic healthcare, Workspot, and Citrix XenDesktop 7.11.
• One aspect of SimpliVity that I like is the use of compute nodes. These are hypervisor hosts that consume the SimpliVity datastores, essentially delivering more compute capacity to the cluster. SimpliVity supports compute nodes as a normal part of a deployment, rather than just as a migration process.
• The other news from SimpliVity is that their partnership with Huawei has been very successful. The Chinese company’s name is not well accepted in the US, but the rest of the world uses a lot of Huawei equipment, particularly in telco businesses. Expect to see more models of Huawei servers supported as OmniStack nodes in the future.
Disclosure: in 2015 & 2016 SimpliVity was a customer of mine. I have written training materials and white papers for them, and I have many friends at SimpliVity. SimpliVity did not solicit or have any control over this blog post (or any other on demitasse.co.nz).
Each month I write articles that are published on other sites, which is a large reason why I don’t write so much on this blog. It looks like I only got one blog post here in December, but I had at least seven more articles published elsewhere. I write most weeks on TVP Strategy; this writing is fun as there is minimal editorial control of the content, and I write about what interests me.
Over on TechTarget, the content is more controlled. I work with TechTarget editors to agree on topics. Most of my editors are happy if I go somewhere a little different with a topic, so I still get some control.
I’ve been reading and writing about Docker for a while. Recently I started using Docker for a project; getting hands-on to solve real problems is my favorite way to learn. Like most infrastructure people, I don’t consider myself a developer. Sure, I write a lot of scripts, but I know almost nothing about proper software development. For my workload generator project, I was using Ansible to manage the build of a bunch of Ubuntu machines. These machines exist simply to run some workload scripts. My first workload is a script written in Python; it does some file server access and emailing.
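As a rough illustration of what such a workload script might look like, here is a minimal sketch. Everything in it is my own assumption, not the actual project code: the function names, file counts, and sizes are illustrative, and the email part only composes a message (real sending would go through smtplib against a mail server).

```python
# Minimal sketch of a file-and-email workload generator.
# All names, paths, and sizes here are illustrative assumptions,
# not the actual vBrownBag project code.
import os
import tempfile
from email.message import EmailMessage


def file_workload(directory: str, count: int = 10, size: int = 1024) -> int:
    """Write then read back `count` files of `size` random bytes.

    Returns the total number of bytes read back, as a simple sanity check
    that the I/O actually happened.
    """
    total = 0
    for i in range(count):
        path = os.path.join(directory, f"load_{i}.bin")
        with open(path, "wb") as f:
            f.write(os.urandom(size))
        with open(path, "rb") as f:
            total += len(f.read())
    return total


def build_message(sender: str, recipient: str, body: str) -> EmailMessage:
    """Compose (but do not send) a workload email.

    Actually sending it would use smtplib.SMTP against a real mail server.
    """
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "workload ping"
    msg.set_content(body)
    return msg


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(file_workload(d, count=5, size=2048))  # prints 10240
```

A loop like this is trivially tunable: increase `count` and `size` to push more I/O, or run several copies of the VM to scale the load out.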
I last talked to SolarWinds at my first Tech Field Day event, way back in 2013. I remember being impressed with the depth of geekiness at the company; their people have deep math skills that help the product analyze the data it gathers. Today’s briefing was with my friend Gina, who recently joined SolarWinds as part of the Server & Application Monitoring (SAM) product marketing team.

The first thing to know is that SolarWinds has a suite of products, in fact, three suites. The product set we looked at is the one for deployment inside enterprises, under the overarching name of Orion. Orion is a unified management console for the main enterprise products, of which SAM is one.

SAM looks like a pretty good tool. It delivers lots of actionable information without getting too stuck in the mire of metrics. There is an application template concept: a pre-built set of configurations for monitoring common applications. The templates allow customization but are based on standard monitoring best practices for each application. Even better, you can share templates in the SolarWinds Thwack community.

I also like monitoring tools that make it easy to take corrective action. Rather than just informing you, they allow you to fix issues from within the monitoring tool. Simple actions like restarting services are built in, as are more complex actions like resizing VMs. Some actions can also be scheduled, so a VM can be resized during the maintenance window. I would like to see this closing of the loop become more automated. Having the monitoring system restart services or migrate VMs automatically would speed up fault resolution or avoidance. Naturally, there will need to be rules around automated changes to VMs and applications. But in the end, we must accept that machines can follow rules better than people; people still need to set the rules to suit the business needs.
The whole SolarWinds Orion suite looks really nice. I hope to make time to get some of the products deployed into my lab this week and may have more to say soon.
I’m working on a fun little project while I’m home for a few weeks. For a new vBrownBag project, I need to create a VM-based workload generator. What I want is a collection of VMs that generate some significant workload for a virtualized infrastructure. I also want to be able to have different workloads that exercise the infrastructure in different ways. I have a few design criteria for both the workload itself and the method of deploying the workload.
It is the unknown unknowns that are the scariest: those days when a server unexpectedly runs out of disk space. It usually happens to me because I have a server connected to my Dropbox account, with the folder that the vBrownBag crew uses to make TechTalks at conferences. Last week at the OpenStack Summit we made 20GB of video, so my Dropbox folder grew by 20GB in three days. When this has happened in the past, my server stopped syncing with Dropbox to avoid completely filling the drive. As a result, my Mac and my VDI desktop didn’t have the same files, which gets in the way of my business workflows.
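A simple scheduled check would have turned this unknown into a known. Here is a minimal sketch of the idea using Python's standard library; the 10% threshold and function names are my own assumptions, not part of any particular monitoring product.

```python
# Minimal sketch of a free-disk-space check; the 10% threshold is an
# arbitrary assumption, tune it for your own environment.
import shutil


def free_space_fraction(path: str = "/") -> float:
    """Return the fraction (0.0 to 1.0) of the filesystem at `path` that is free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total


def low_disk_warning(path: str = "/", threshold: float = 0.10) -> bool:
    """True when free space has dropped below the threshold fraction."""
    return free_space_fraction(path) < threshold
```

Run from cron (or any scheduler) and wired up to an email or chat alert, a check like this gives you warning before a sync client fills the drive.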
I last talked to Atlantis Computing at Virtualization Field Day 3 in 2014, at the time they were releasing their USX data platform. I read about the release of their HyperConverged platform, HyperScale, in 2015. HyperScale is a software HCI that is delivered on top of a very restricted list of partner hardware. There were four hardware partners, and each essentially offered two all-flash configurations: one at 12TB and the other at 24TB of capacity per node. I missed this year’s announcement of a ROBO-scale version at 4TB and the addition of Dell as a partner.
One of the nice things about the Atlantis HCI is that you can integrate with Atlantis USX on other hardware. This is a good way to either extend your existing hardware into your new HCI deployment or migrate your VMs off your existing hardware and onto Atlantis HyperScale. This integration and migration capability is not what most HCI vendors want to talk about, and it is a real benefit of a software HCI product. The process of migrating onto an HCI platform can be very time-consuming and may have its own restrictions if you don’t already have 10GbE in use.
The new material in the briefing is not yet announced; expect to hear interesting things from Atlantis over the next month, maybe even at one of the conferences running next week. When that part becomes public, I may have some more to say about Atlantis.
People are very interesting. We each perceive the world slightly differently, sometimes very differently. Like most people, I am intrigued by how I think & how other people think. On my last trip to the US, I was thinking about how people perceive difference. This “social construction of difference” is something I learned a little about at University. One aspect is how my accent is a trigger for my friends to notice that I am different from them.
On this trip, the trigger word for my US friends was “schedule,” which I pronounce differently to them. I believe I follow my English origins and pronounce it as if there were no C, making it “shedule.” My US friends use a K in place of the CH and so pronounce it “skedule.” I do wonder where the K came from. I also notice that the extraneous K does not bother me, but the “shed” bothers my friends. That’s not to say that I am more tolerant; there are trigger words for me. One is solder, a crucial part of assembling electronic devices. My US friends seem not to notice that there is an L in the word, which I find disconcerting. Don’t get me started on router or Aluminium, both troubling words.
These small differences in language are part of how we identify people who are like us and people who are different. Pointing out the differences reinforces both the sense of belonging and the sense of difference. I want to talk a bit more about belonging and difference as well as how I perceive people in some coming blog posts. They will make a nice break from all the vendor briefing blog posts I’ve been doing.
I’ve heard about FalconStor as a storage virtualization platform for a while. I also think that we will see more products that virtualize clouds for mobility in the near(ish) future. The vision that FalconStor lays out is one where data can move freely between clouds. On-premises clouds, managed private clouds, and various public clouds. I definitely see that as a destination that customers will want to get to, but most are nowhere near ready. This is good for FalconStor as they have not yet delivered on their vision, more product development is underway.
What can FalconStor deliver today?
• Virtualization of your existing iSCSI and Fibre Channel block storage.
• Replication between dissimilar storage and between sites.
• Deduplication to reduce WAN costs and public cloud egress costs.
• Physical appliances, virtual appliances, and virtual appliances on public cloud.
• Multi-tenancy for service providers, including authentication integrated with Active Directory or other LDAP.
• Analytics from the block storage through to application performance.
• A unified user interface and API across multiple locations for the virtualized storage.
My thoughts on the gaps: first, it is block-storage-centered, with no object or file storage built into the storage virtualization. Next, to use FalconStor for mobility you will be replicating whole operating systems and the application sets within those OSs. I think future multi-cloud mobility needs to be application-centered, moving only the application and its data without all the re-creatable dependencies. The issue with this vision is that it requires applications to be redeveloped, a very slow and expensive process. For the near future, there will be a need to replicate or transfer whole VMs.
FalconStor has a good vision of multi-cloud data portability and they are executing on making the vision into a product. What I don’t know is whether enterprises see enough value in the current product to provide the income that will be needed to fund developing the vision.
It seems to be the season of object storage; I keep getting briefings on object storage products. At Tech Field Day Extra (TFDx) at VMworld USA, we had a briefing from Scality. Please refer to my standard TFD disclosure post. Scality didn’t spend a lot of time digging into their scale-out storage platform at TFDx. Rather they focused on some new capabilities and a new packaging option for developers wanting to test against a Scality object store. This last is a great move as object storage is usually consumed by applications rather than end users or infrastructure teams. There is a Docker image for a complete Scality object store with an S3-compatible interface. With this Docker image, a developer can use Docker Compose to create a multi-container application with their code and an object store as well as any other supporting containers. Then software testing, including continuous integration, can be completed without needing a production-ready Scality deployment in the test environment. I’ve written in other places about how important it is to enable developers, and that is what Scality has done.
Scality also told us about their version 6.0 product. They have enhanced a lot of the AWS compatibility features. Support for the AWS IAM security model and the AWS command lines will also ease developer adoption. While we didn’t get deep into the product architecture we did get some of the highlights. A multi-Petabyte scale architecture using a scale-out collection of x86 servers. A strict consistency model for data and design guides for solution availability and performance. A RESTful API for monitoring is also nice, allowing visual reporting using Grafana. I expect to hear more about the internals of the Scality product this week at Storage Field Day in Silicon Valley.
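To make the developer workflow concrete, a Docker Compose file for this kind of test setup might look something like the sketch below. The image name, port, and environment variable are my assumptions for illustration; check Scality’s own documentation for the real names.

```yaml
# Hypothetical docker-compose.yml pairing an application under test
# with an S3-compatible object store container.
# The image name and port are assumptions, not verified values.
version: "2"
services:
  objectstore:
    image: scality/s3server        # assumed image name
    ports:
      - "8000:8000"                # assumed S3 endpoint port
  app:
    build: .                       # the developer's own application
    environment:
      S3_ENDPOINT: http://objectstore:8000   # assumed config variable
    depends_on:
      - objectstore
```

With something like this, `docker-compose up` in a CI pipeline gives every test run a throwaway object store, which is exactly the enablement for developers that the briefing described.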