I haven’t mentioned my other site, Notes4Engineers, for quite a while, partly because I haven’t posted anything there for nearly a year. I have a few videos that I made with SimpliVity covering some of the ways that hardware failure can affect a SimpliVity cluster. This first video covers how SSD and HDD failures affect a cluster.
You can find this video here on Notes4Engineers.
<SpoilerAlert>The use of hardware RAID means that there is no outage, even with multiple concurrent failures.</SpoilerAlert>
I’ve written quite a bit about deduplicated storage for VMs over the last few months. It’s almost the only thing I’ve written about here. So far I have mostly talked in general about dedupe and its impact on VM storage. Now I want to share something I recently found out about SimpliVity’s deduplicated VM storage. It is fundamental to how SimpliVity gets operational efficiency from their data efficiency.
Disclaimer: While I am regularly doing paid work for SimpliVity, they have not requested or reviewed this article.
Nested virtualization is an awesome way to learn about virtualization and
test features or processes. My AutoLab even makes the nested build process relatively
simple. But there are a few things that are impossible to test without real
servers. One feature missing from nested ESXi servers is an IPMI interface. IPMI
is a standard for out-of-band hardware management. Having IPMI on a nested ESXi
VM would enable testing of IPMI scripting, as well as DPM (Distributed Power
Management), which is otherwise impossible. It would also resolve some of the
challenges of making the Nutanix Community Edition work in nested virtualization.
Tech Field Day can be an interesting event, and not just on the technology side. One of the early questions for Violin Memory was basically: how are you still in business? The answer was that they still have cash and are still selling product. The core of their presentation was that this is not the Violin you remember from a couple of years ago. So I’d suggest you take a moment to let go of what you know about Violin. Forget that they were a one-product company. Forget that they had a speed-demon all-flash array with no data services. Forget that their IPO was a disaster. How are you doing? All forgotten? Let’s take a look at the new Violin.
For the fourth year in a row there will be community presentations at vForum in Sydney. vForum is October 21 & 22; unfortunately I won’t be there, as I have other travel commitments. Luckily Craig Waters is carrying the community torch and making sure TechTalks happen. You can find details of the TechTalks at vForum Sydney here, along with the sign-up link.
I’ve spent a lot of time talking about the results of using a deduplicated array for storing VMs. I think it’s time to look at some differences between deduplicated arrays. Deduplication of storage has been around for a few years, and naturally there are different ways of deduplicating storage, each with positives and negatives. One of the first questions to ask about a deduplicated array is whether deduplication is inline or post-processed. Another is the scope of the deduplication. A third is where in the data path the deduplication is done.
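To make the inline option concrete, here is a toy Python sketch of my own (not any vendor’s actual implementation): with inline deduplication, each fixed-size block is hashed as it arrives, and only blocks whose hash has not been seen before are actually stored.

```python
import hashlib

def dedupe_inline(data: bytes, block_size: int = 4096):
    """Toy inline dedupe: hash each fixed-size block as it is written,
    storing only blocks with a previously unseen hash."""
    store = {}    # hash -> block: the unique-block store
    recipe = []   # ordered hash list used to reconstruct the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        if h not in store:
            store[h] = block   # first time we see this content: keep it
        recipe.append(h)       # always record the reference
    return store, recipe

def rehydrate(store: dict, recipe: list) -> bytes:
    """Reassemble the original data from the store and the recipe."""
    return b"".join(store[h] for h in recipe)
```

Writing 16 KB made up of three identical 4 KB blocks and one different one leaves only two unique blocks in the store, while the recipe still lets the full data be read back.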
Ask your friends who don’t work in IT infrastructure if they think what we do is exciting. If you’re like me, you aren’t friends with this type of muggle, so try asking the parents of your kids’ friends. I’m pretty sure they would rate our working world of IT infrastructure as dull. So it’s nice when we get to see something a little more exciting, like destroying things with thermite. A few of these normal people will be Mythbusters fans and may have seen burning thermite cut a car in half and explode on contact with ice. But have they seen a storage device destroyed with thermite? We have: first DeepStorage destroyed a node in a Gridstore cluster, and last month Datto destroyed a NAS. OK, I know it’s no James Bond stunt, but it is better than a PowerPoint presentation, even a little better than a whiteboard.
I am continuing my series of blog posts inspired by listening to SimpliVity’s story while I was on a speaking tour with them. Almost everything in these posts is generically about deduplicated primary storage rather than specifically about SimpliVity. So far I’ve mostly looked at the upside of deduplicated storage; it’s about time I looked at some challenges. Some design challenges are around metadata management and write latency. There are also operational challenges around the meaning of available capacity, and challenges when you migrate off a deduplicated storage platform that is full.
When it comes to efficiently conveying information we often use shortened forms of words or phrases. Stay with me. Sometimes we need to name an object without using spaces. As an example, I might create a VM that is my “production web server number 3 in my Auckland datacentre”. That name is rather long. Since VMs are often stored in folders with the same name there could be a nasty folder name. Naming that VM in a shorter form would make a lot of sense, maybe prodweb03akl. Now I have a short folder name with no white space. So far I’m sure you are all on the same page and wondering what my point is.
I’m continuing with my thoughts about deduplicated storage for VMs and how it is different and requires different thinking. Today I will share some thoughts on cloning VMs, in particular how other data-efficient cloning technologies compare to deduplication.
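As a rough illustration of why cloning can be cheap on a deduplicated store (a toy sketch under my own assumptions, not SimpliVity’s or any vendor’s actual mechanism): if a VM disk is represented as an ordered list of block hashes, a clone only needs its own copy of that list, because every data block it references already exists in the unique-block store.

```python
import hashlib

BLOCK_SIZE = 4096

def write_vm(store: dict, data: bytes) -> list:
    """Write a VM disk into a toy dedupe store; return its hash recipe."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)   # store the block only if it is new
        recipe.append(h)
    return recipe

def clone_vm(recipe: list) -> list:
    """A clone is a metadata copy: duplicate the hash list, write no data."""
    return list(recipe)
```

In this sketch, cloning a VM adds zero blocks to the store; only the (small) recipe is duplicated, which is why deduplication-native clones cost almost nothing in capacity at creation time.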