Vendor Briefing: Comtrade backup for Nutanix, HYCU

Sometimes it takes a while for a company to come up on the radar; then it keeps coming up. I first learned of Comtrade and their software development business at Tech Field Day 11 last year. Comtrade came up again this year when I was researching the VDI management and monitoring buyer’s guide for TechTarget. This week they reappeared with a new product, a backup product built specifically for Nutanix named HYCU. I keep reading HYCU as HKeyCurrentUser, so it is important to pronounce it like the Japanese poetry, haiku. HYCU may be a new product, but Comtrade have been developing backup software for a long time, so there is mature thinking behind it.

Backup policies require an RPO and a retention period, as well as an RTO. The RTO is the interesting one, as backup products don’t usually ask for a restore time objective. The destinations can be local NFS or SMB shares, or remote AWS or Azure storage. By default, HYCU selects a backup destination that respects your configured RTO. A 6TB VM backed up to S3 is unlikely to be restored inside a 2-hour RTO, but from a local NFS server there is a good chance of meeting that RTO. Policies are applied to VMs, which are discovered through the Nutanix Prism API. Right now, HYCU only supports the Nutanix Acropolis hypervisor (AHV), but ESXi support is sure to be added soon.

Restores can be of whole VMs, or of individual files directly into the VM, either overwriting the file or redirecting the restore to preserve the current files. There is also an element of application awareness: HYCU can identify VMs that have SQL Server installed and back up the databases, then restore individual databases to a point in time by rolling the SQL logs forward. To speed up restores, Nutanix snapshots are used and retained on the VM for a day. This means that a restore can happen immediately, while the backup is still sent to AWS for cheap storage. I like the simplicity of the approach, while still having a fair amount of flexibility.
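
To make that RTO-driven selection concrete, here is a minimal Python sketch of how a backup tool could pick a destination. The Target class, the throughput figures, and the cost ranking are my own illustration, not HYCU’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    restore_mbps: float  # sustained restore throughput in MB/s (illustrative)
    cost_rank: int       # 1 = cheapest place to keep the backup

def pick_destination(vm_size_gb: float, rto_hours: float, targets: list[Target]) -> Target:
    """Return the cheapest destination that can still restore the VM within the RTO."""
    viable = []
    for t in targets:
        restore_hours = (vm_size_gb * 1024) / t.restore_mbps / 3600
        if restore_hours <= rto_hours:
            viable.append(t)
    if not viable:
        raise ValueError("no destination can meet this RTO")
    return min(viable, key=lambda t: t.cost_rank)

targets = [
    Target("aws-s3", restore_mbps=60, cost_rank=1),      # cheap but slow to restore
    Target("local-nfs", restore_mbps=1000, cost_rank=2), # dearer but fast
]
# The 6TB VM with a 2-hour RTO from the text: S3 would take about 29 hours
# to restore, so only the local NFS share qualifies.
print(pick_destination(6 * 1024, 2, targets).name)  # -> local-nfs
```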

Hyperconverged is all about simplifying infrastructure management. There is built-in backup and replication in the Nutanix product, but there has been some discussion about whether backups should be on the same storage as the original VM. There are a few more things I would like in the product. The ability to do a monthly compliance/eDiscovery backup that is retained indefinitely is essential if object storage is to replace tape. I would also like to see integration with the Nutanix Prism interface, and I’m sure it will come. If I can make some time I will have a play with HYCU; there is a trial at tryhycu.com that I imagine will work with Nutanix Community Edition.

Writing in May

I’m in the middle of some crazy travel. Dell EMC World in Las Vegas at the start of May, then home. Silicon Valley for the Ravello/Oracle blogger briefing last week, home this week. On Sunday I head back over the Pacific for HPE Discover, back in Vegas. I don’t plan any more long-haul travel in June, but July, August, and September will all have a lot of miles. This is the result of my choice to live in New Zealand but work largely for US businesses. One of the great things has been seeing so many of my friends on these trips; there is nothing like sharing a meal with a table full of friends.

TechTarget

I wrote for SearchDisasterRecovery about the concept of using canary files to detect the actions of ransomware; then WannaCry blew up in the mass media.
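
The canary idea is simple enough to sketch: seed decoy files where ransomware will hit them early and alert the moment one changes. A toy Python version, with a made-up share path, might look like this.

```python
import hashlib
import time
from pathlib import Path

# Hypothetical decoy files, named to sort early so ransomware encrypts them first.
CANARIES = [Path("/shares/finance/aaa-canary.docx")]

def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

baseline = {path: fingerprint(path) for path in CANARIES}

while True:
    for path, digest in baseline.items():
        # A canary that vanishes or changes suggests mass encryption is under way.
        if not path.exists() or fingerprint(path) != digest:
            print(f"ALERT: canary {path} touched, isolate this file server now")
    time.sleep(30)
```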

The VDI Management and Monitoring Buyer’s Guide continues. The third article is about what to expect your tools to do and the fourth looks at a few of the top products in the category.

For SearchDataCenter I looked at using data fabrics for cross-cloud mobility.

TVP Strategy

I continued the theme of getting the benefits of Hyperconverged without using hyperconverged. This article focusses on policy-based management, my favorite part of HCI.
I also looked at how inflexible AWS is as an IT provider; they really are the department of NO.

Vendor Briefing – Datrium

Every so often a product comes along that works in a new way, and we need to re-learn how to think about building an IT infrastructure. I spent some time with Datrium learning about how their solution is different from others. I think of their product as a scale-out controller with a shared storage shelf. Both hyperconverged and scale-out storage have scale-out controllers and scale-out storage: hyperconverged uses the same scale-out physical servers to run VMs, while scale-out storage uses additional servers. Datrium puts the controller, with cache and workload VMs, in each scale-out host but uses centralized storage shared by all the hosts.

With Datrium the controllers scale out and are on the compute nodes, alongside the VMs. Each node has some solid-state storage as a cache but does not have “persistent” storage. All persistent storage is in a data node, separate from the compute nodes. The data node has local disks and NVRAM but is only accessible through the compute nodes. Think of the data node as a disk shelf; a future release will allow multiple data nodes to be joined together. The compute nodes scale out: up to 32 compute nodes can access a single data node. A nice feature is the ability to have non-uniform compute nodes. You might have sixteen general-purpose compute nodes: dual socket, 256GB of RAM, and 1TB of SSD. Then maybe four nodes for large database VMs: quad socket, 1TB of RAM, and 8TB of SSD. All these compute nodes can access the same data node.
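
As a mental model (my own sketch, not Datrium code), the relationship looks something like this: local SSD is only a cache attribute of a compute node, and the data node is the single home for persistent data.

```python
from dataclasses import dataclass, field

MAX_COMPUTE_NODES = 32  # one data node serves up to 32 compute nodes

@dataclass
class ComputeNode:
    name: str
    sockets: int
    ram_gb: int
    cache_ssd_tb: float  # local flash is cache only, never the persistent copy

@dataclass
class DataNode:
    """The shared shelf: every persistent write lands here."""
    compute_nodes: list = field(default_factory=list)

    def attach(self, node: ComputeNode) -> None:
        if len(self.compute_nodes) >= MAX_COMPUTE_NODES:
            raise RuntimeError("data node is already at its 32 compute node limit")
        self.compute_nodes.append(node)

shelf = DataNode()
# Non-uniform nodes are fine: sixteen general-purpose hosts...
for i in range(16):
    shelf.attach(ComputeNode(f"gp-{i:02}", sockets=2, ram_gb=256, cache_ssd_tb=1))
# ...plus four big database hosts, all sharing the same data node.
for i in range(4):
    shelf.attach(ComputeNode(f"db-{i:02}", sockets=4, ram_gb=1024, cache_ssd_tb=8))
```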

Datrium’s architecture provides a lot of scale-out benefits without some of the challenges. In typical scale-out and hyperconverged architectures there is a lot of east-west network traffic between the storage nodes. Data written to one node must also be written to another node, or two, to provide durability. There are also operational and availability issues with having storage capacity in your compute nodes. Taking an HCI node down for maintenance affects the redundancy of your storage, potentially reducing your failure tolerance. With Datrium the compute nodes seldom talk to each other; they almost exclusively talk to the data node. Having a compute node shut down or fail does not change your storage availability and resilience. With both HCI and scale-out storage you must have a minimum quorum of nodes operational before any storage is available; Datrium needs just the data node and one compute node to provide a working storage system.

Datrium is also designed to be simple to manage; that is a top value proposition for HCI too. Datrium has very few settings to configure: deduplication, erasure coding, and compression are always enabled and cannot be turned off. The only feature that can be turned on and off is full-system encryption. The encryption happens in the compute nodes. Data is encrypted after it is deduplicated and compressed, but before it leaves the compute node where the VM IO occurs. Data is encrypted across the storage network and at rest on the data node, with no need for self-encrypting hard disks.
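
The ordering of that write path is worth spelling out, because encrypting first would randomize the data and defeat both deduplication and compression. A rough sketch of the order (my illustration; Fernet stands in for whatever cipher Datrium actually uses):

```python
import hashlib
import zlib
from cryptography.fernet import Fernet  # stand-in cipher, not Datrium's

cipher = Fernet(Fernet.generate_key())
seen_blocks: set[bytes] = set()  # toy dedupe index keyed by block hash

def write_block(block: bytes) -> bytes | None:
    digest = hashlib.sha256(block).digest()
    if digest in seen_blocks:           # 1. dedupe on the compute node
        return None                     # duplicate block: nothing crosses the wire
    seen_blocks.add(digest)
    compressed = zlib.compress(block)   # 2. compress what survives dedupe
    return cipher.encrypt(compressed)   # 3. encrypt last, before leaving the host
```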

This architecture has some interesting consequences. It is going to take me a while to think through and talk about what the benefits and downsides are; there are always downsides. Hopefully I will get to do some more work with Datrium and we will all learn more about their cool product.

Vendor Briefing Aparna

What if I told you that you could fit sixty (60) physical servers in a 4U rack chassis? And that the chassis also includes redundant switching with multiple 40GbE uplinks? That is exactly what Aparna Systems are producing. Each server is a cartridge, the same size as a 3.5” hard disk, with an Intel Xeon CPU, 64GB of RAM, two SSDs, and two 10GbE network connections. You can install whatever operating system you want on the nodes. Maybe Linux for containers or KVM, ESXi for a vSphere deployment, even Windows if that floats your boat. The chassis that accommodates sixty nodes is a full-size 4U enclosure, designed to go into server racks in a data center. With all the upstream bandwidth, these chassis are designed to be stacked up in a rack and clustered into massive scale-out server farms. There is also a smaller chassis, a mere 15 servers in 4U. This chassis is much shorter and will fit into communications racks or smaller data center racks. The smaller chassis is more suited to geo-dispersed use, service provider PoPs, or industrial automation and analytics.
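
The density math is striking; here is a quick back-of-the-envelope calculation, assuming a standard 42U rack:

```python
# Figures from the briefing: 60 cartridges per 4U chassis, 64GB RAM each.
nodes_per_chassis, ram_gb_per_node, chassis_ru = 60, 64, 4

rack_ru = 42  # assumed standard rack height
chassis_per_rack = rack_ru // chassis_ru

print(f"RAM per chassis: {nodes_per_chassis * ram_gb_per_node / 1024:.2f} TB")  # 3.75 TB
print(f"Servers per rack: {chassis_per_rack * nodes_per_chassis}")              # 600
```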

This is a hardware platform from which to build a cloud. It is not an opinionated cloud-in-a-box with a defined operating system and orchestration platform. You get a bunch of servers and networking, then add your own cloud software. The switches do have capabilities to help you deploy your chosen operating system, but you get to choose what and how you deploy. This is some very cool hardware, continuing the progression from tower servers, through large rack mounts, to pizza boxes and then blades. A cartridge-based platform is even more dense. Aparna is still very early; no flash offices. I liked that one cubicle had bare circuit boards pinned to the wall; the team is deep in hardware development. I would love to see an even smaller chassis, with four cartridges and basic 10GbE networking. That would be a great platform for ROBO or even a home lab. That is not a market that Aparna are looking at; they are aiming for large analytics farms, NFV for telcos, and IoT edge compute.

Demitasse Interview – Josh Atwell

I recorded two interviews at the Australian VMUG UserCon in Melbourne, back in March. It is way overdue for me to post these. The first was with Josh Atwell who has been a good friend of mine since the start of the vBrownBag days. I asked Josh about what it was like for him to meet customers in Australia and whether they had different things to say and ask.

I think I asked a way too serious question. Josh is hilarious to hang around with. At the 2016 San Diego UserCon he didn’t have the slide deck he wanted to present. So he taught us about Bourbon and drew quite a crowd.

Writing in April

April is already over; that is a bit hard to believe, yet I know why it has passed so fast. It was a very busy month. I spent the first week in Houston with HPE, where we ran the first vBrownBag Build Day with their HC380 hyperconverged platform. We showed you the end-user experience of deploying the HC380 and migrating an existing workload onto the platform.

On TechTarget, I had a huge amount published. I wrote about the need for operations teams to understand container technologies. I also wrote a procedural article about deploying your first vSphere Integrated Containers environment.

I also shared some thoughts on what the AWS S3 issues mean for DR products that use cloud services, as well as how DRaaS may bring unexpected costs if you haven’t thought through some consequences of using this model.

I also looked some more at policy-based management, which I think will be a standard practice in a few years.

The Buyer’s Guide to VDI Management and Monitoring is being published, articles on what features to expect and how to evaluate products. I also wrote about the complexity of upgrading a VDI environment.

A new and fun format was a quick guide to setting up a basic lab for learning DevOps tools and methods. I will be interested to see how often my GitHub repo gets cloned by people following along.

Over on TVP, I wrote about the different way that hyperscalers operate compared to Enterprise IT. I also expanded on my thoughts about serverless on-premises; it really is only one aspect of developer enablement and not sufficient by itself. Another thought that I have had for a while is that the biggest benefits of Hyperconverged aren’t really from clustering local storage inside hypervisor hosts. The real benefits of HCI can often be had without using an HCI product. This article is about the physical aspects; the next one will be about the policy-based management that I am so keen on.

Vendor Briefing Pivot3

I like it when businesses deliver what they say they will deliver. When I talked to Pivot3 last year they had just integrated the NexGen storage with their vStack HCI product. Their plan was to take the hybrid flash and storage QoS goodness out of NexGen and build it into the HCI. This week they delivered a new HCI platform called Acuity and new node types (X5) that do just that. The new nodes are available in all-flash or hybrid configurations; either type can have NVMe flash as an accelerator, and they come in a variety of storage and compute capacities. These new nodes run the Acuity HCI platform, which has the same storage QoS policies we saw from NexGen. The combination of NVMe and QoS allows higher VM densities without compromising performance for high-priority VMs. Customers who liked Pivot3 as a point solution will now be able to adopt it as a general-purpose virtualization platform, even for tier-1 applications that need storage performance guarantees. Pivot3 has come a very long way from their origins in storing video surveillance files.
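
Storage QoS policies of this kind generally boil down to a floor, a ceiling, and a priority per VM. The sketch below is purely illustrative; the policy names and numbers are mine, not Pivot3’s actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QosPolicy:
    name: str
    min_iops: int  # floor protected for the VM under contention
    max_iops: int  # ceiling so noisy neighbours cannot starve others
    priority: int  # lower number wins when the node saturates

POLICIES = {
    "mission-critical": QosPolicy("mission-critical", 10_000, 100_000, 1),
    "business-critical": QosPolicy("business-critical", 2_000, 50_000, 2),
    "non-critical": QosPolicy("non-critical", 0, 10_000, 3),
}

def describe(vm: str, policy_name: str) -> str:
    p = POLICIES[policy_name]
    return f"{vm}: guaranteed {p.min_iops} IOPS, capped at {p.max_iops}, priority {p.priority}"

print(describe("sql-prod-01", "mission-critical"))
```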

One thing I find very interesting is the rediscovery that extreme storage performance can unlock VM density. Pivot3 are talking about 2-3 times the VM density. VMs spend less time waiting for IO to complete, so applications respond faster, and guest VM swapping doesn’t hurt application performance as much when the storage is faster. Putting NVMe flash under a bunch of VMs is like putting an SSD in your laptop; everything just feels better. I will be very interested in what happens when solid-state storage moves to the RAM bus. Storage-class memory can be two orders of magnitude faster than NVMe flash.

Vendor Briefing: Cohesity

This month Cohesity had a double announcement: a fresh round of funding and a new release of their platform. The funding is round C and, at US$90M, more than doubles their total funding. One interesting aspect is having HP Pathfinder and Cisco Investments join for this round. The other is that it has been two years since the last round of funding, suggesting that the cash burn rate isn’t too high.

OK, enough about the funding; what is new with the product? First off, there are new capabilities. Cohesity is not just about backup: an S3 object storage option adds to the existing NFS and SMB file server capability. The file server gets quotas and the ability to have a WORM (Write Once, Read Many) configuration where files are immutable. The underlying storage system now has erasure coding for more space efficiency. There are 2+1 and 5+2 schemes for erasure coding; both yield more usable space. The 5+2 scheme will tolerate two concurrent failures, such as a drive failure while you are rebuilding from a node failure.

Another welcome feature is RBAC. Either local accounts or your Active Directory is used for authentication. There are built-in roles, and you can create your own, then assign roles on objects for users or groups. There are two features that are all about backup. One is the ability to back up NAS devices, initially NetApp filers. The second is the ability to assign a protection policy to a vCenter folder or tag; any VM in the folder or with the tag will then get the protection policy applied. I like inheriting protection from the vCenter inventory; it is one less thing to manage.
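
The usable-space claim is easy to verify: with a D+P erasure coding scheme, the usable fraction is D/(D+P). A quick check:

```python
def usable_fraction(data_stripes: int, parity_stripes: int) -> float:
    """Erasure coding writes D data + P parity stripes; D/(D+P) is usable."""
    return data_stripes / (data_stripes + parity_stripes)

for d, p in [(2, 1), (5, 2)]:
    print(f"{d}+{p}: {usable_fraction(d, p):.0%} usable, tolerates {p} failure(s)")
# 2+1: 67% usable, tolerates 1 failure(s)
# 5+2: 71% usable, tolerates 2 failure(s)
# Compare with plain 2x replication, which leaves only 50% usable.
```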

March Writing

There is often a lag between me submitting articles for publication and the time they arrive on the website. There wasn’t a lot to report on February’s publications but I was very busy writing. This month a few of those came out on TechTarget.

I took a look at when it makes sense to use VDI to deploy new applications, and when you should install them on the user’s device.

I have just started writing for SearchDataBackup and had a few thoughts about security considerations for cloud backups. In many ways, I view cloud security as largely an extension of your normal security policy, yet cloud solutions are often used without that much thought.

Continuing my writings about VMware’s open source strategy I took a look at the Senlin cluster management project and what it means to have VMware Integrated OpenStack support Senlin.

I’m writing a VDI Monitoring and Management buyer’s guide for SearchVirtualDesktop. The guide comes out as a series of segments; the first one published is about what VDI monitoring and management means. There will be another four parts to the series over the coming months.

The final article on TechTarget was a SearchDataBackup article about hyperconverged-inspired backup, a look at what Cohesity and Rubrik have to offer. I was glad that I had seen both at Tech Field Day and had briefings from both as they progressed their product development.

Over on The Virtualization Practice, I took a contrarian view of the availability of cloud services, suggesting that failure is often a viable business decision.

I have also been thinking about the changed Microsoft. We can all agree that Microsoft has done a lot of things in the last two years that would have been inconceivable five years ago. I think that the key reason is that they are fully committed to Azure as the future of the company.

Another grumpy old man post is about the fad of saying that every company needs to be a software company. The statement annoys me: software is a tool of the business, and like any tool it will be used where it is valuable.

Finally, for March, I keep thinking about what the future of IT looks like and whether Hyperconverged is the future or just a phase along the way. I am inclined to think that HCI vendors will keep adding capabilities that eat away at IT administration tasks.

April is going to be another very busy month. Tomorrow I’m flying out to Houston to spend a week with HPE. We are going to create the first vBrownBag Build Day. The aim is to show you the process of deploying a product, in this case the HC380 Hyperconverged platform, as if you were doing the job yourself. The concept stems from my experience of working at an integrator and being sent to deploy products with minimal training on the product. We will be showing the process and covering the product architecture and its context in your datacenter. I also think that education is an important part of the sales process; technical staff want to understand products before they agree to deploy.

Vendor Briefing – Runecast

I have written (ranted) before about how I like management tools that are focused on helping me do my job, rather than showing me how much data they gather. I get the feeling that the team at Runecast have the same desire for management tools. I know they built the tool that they wanted in their previous life looking after a lot of production vSphere.

The basic premise for Runecast is simple: most vSphere problems are caused by issues that are already known. The issues are usually documented in the VMware knowledgebase, and usually the KB has a resolution or mitigation recommendation. If customers knew every KB and checked their environment for every issue, then there would be less unplanned downtime. What is needed is a tool that automatically does the analysis. Enter Runecast.

The little virtual appliance connects to your vCenter and receives logs from your ESXi servers. The appliance then analyses the configuration and log data to see whether there are any known issues. Along with the VMware knowledgebase, the appliance also knows the security hardening guides and best practices for vSphere configuration. A dashboard displays any latent issues and contains information about the remediation actions. Log scanning happens in real time, and configuration scans run periodically on a configurable schedule. All analysis is completed on-premises, within the appliance. The appliance will need to be updated as VMware release new and updated KB articles. Updates can be scheduled or manually initiated; they can even be downloaded and brought to a dark site for secured vSphere deployments.
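
Conceptually, the matching is straightforward, even if Runecast’s real signature set is far richer. A toy version, with made-up KB numbers and log patterns:

```python
import re
from dataclasses import dataclass

@dataclass
class KnownIssue:
    kb_id: str          # made-up IDs; the real appliance ships curated VMware KB data
    pattern: re.Pattern
    remediation: str

SIGNATURES = [
    KnownIssue("KB-0001", re.compile(r"Lost access to volume"), "review storage paths"),
    KnownIssue("KB-0002", re.compile(r"NMP: .* I/O failed"), "check SATP claim rules"),
]

def scan(log_lines: list[str]) -> list[tuple[str, KnownIssue]]:
    """Return every (log line, known issue) pair where a signature matches."""
    return [(line, issue)
            for line in log_lines
            for issue in SIGNATURES
            if issue.pattern.search(line)]

for line, issue in scan(["2017-06-01 esxi01 Lost access to volume ds01"]):
    print(f"{issue.kb_id}: {issue.remediation} <- {line}")
```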

I’m very impressed with Runecast. With very little effort you can be advised of latent issues in your environment and able to prevent downtime, rather than respond to failures. There is a free trial, so you can see how you stand today and prove out the value Runecast brings. It looks very easy to deploy and start using. They also have some great plans for future development.
