#vBrownBag TechTalks at vForum


For the fourth year in a row there will be community presentations at vForum in Sydney. vForum is on October 21 & 22; unfortunately, I won’t be there as I have other travel commitments. Luckily Craig Waters is carrying the community torch and making sure the TechTalks happen. You can find details of the TechTalks at vForum Sydney here, along with the sign-up link.

Posted in General | Comments Off on #vBrownBag TechTalks at vForum

Inline dedupe for performance as well as capacity


I’ve spent a lot of time talking about the results of using a deduplicated array for storing VMs. I think it’s time to look at some differences between deduplicated arrays. Deduplication of storage has been around for a few years, and naturally there are different ways of deduplicating storage, each with positives and negatives. One of the first questions to ask about a deduplicated array is: is deduplication inline or post-processed? Others are the scope of the deduplication, and where in the data path the deduplication is done.
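To make the inline approach concrete, here is a minimal sketch of an inline-deduplicating block store, assuming a simple content-hash index (all class and method names are hypothetical, for illustration only):

```python
import hashlib

class InlineDedupStore:
    """Toy inline dedupe: each incoming block is hashed before it is
    written, so a duplicate block only bumps a reference count
    instead of consuming new capacity."""

    def __init__(self):
        self.blocks = {}    # hash -> the unique block data
        self.refcount = {}  # hash -> number of logical references

    def write(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key in self.blocks:
            self.refcount[key] += 1   # duplicate: metadata update only
        else:
            self.blocks[key] = data   # new block: stored exactly once
            self.refcount[key] = 1
        return key

    def read(self, key: str) -> bytes:
        return self.blocks[key]

store = InlineDedupStore()
a = store.write(b"x" * 4096)
b = store.write(b"x" * 4096)          # identical block, deduplicated inline
print(a == b, len(store.blocks))      # True 1 - two writes, one stored block
```

A post-processed design would instead write both copies to disk first and reclaim the duplicate later in a background pass, which is where the performance and capacity trade-offs come from.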

Continue reading

Posted in General | Comments Off on Inline dedupe for performance as well as capacity

Kill it with fire, it is a disaster.

Ask your friends who don’t work in IT infrastructure if they think what we do is exciting. If you’re like me you aren’t friends with this type of muggle, so try asking the parents of your kids’ friends. I’m pretty sure that they would rate our working world of IT infrastructure as dull. So it’s nice when we get to see something a little more exciting, like destroying things with thermite. A few of these normal people will be Mythbusters fans and may have seen burning thermite cut a car in half and explode on contact with ice. But have they seen a storage device destroyed with thermite? We have: first DeepStorage destroyed a node in a Gridstore cluster, and last month Datto destroyed a NAS. OK, I know it’s no James Bond stunt, but it is better than a PowerPoint presentation, even a little better than a whiteboard.


Continue reading

Posted in General | Comments Off on Kill it with fire, it is a disaster.

Challenges of Deduplicated Storage

I am continuing my series of blog posts that were inspired by listening to SimpliVity’s story while I was on a speaking tour with them. Almost everything in these posts is generically about deduplicated primary storage rather than specifically about SimpliVity. So far I’ve mostly looked at the upside of deduplicated storage, so it’s about time I looked at some challenges. Some design challenges are around metadata management and write latency. There are also operational challenges around the meaning of available capacity, and challenges when you migrate off a deduplicated storage platform that is full.

Continue reading

Posted in General | Comments Off on Challenges of Deduplicated Storage

The Case for Camels


When it comes to efficiently conveying information we often use shortened forms of words or phrases. Stay with me. Sometimes we need to name an object without using spaces. As an example, I might create a VM that is my “production web server number 3 in my Auckland datacentre”. That name is rather long, and since VMs are often stored in folders with the same name, there could be a nasty folder name. Naming that VM in a shorter form would make a lot of sense, maybe prodweb03akl. Now I have a short folder name with no white space. So far I’m sure you are all on the same page and wondering what my point is.
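The readability argument the title hints at can be shown with a tiny sketch that camel-cases the abbreviated parts of a name (the helper and the name parts are my own illustration, not a standard):

```python
def camel(parts):
    """Join abbreviated name parts in CamelCase so word boundaries
    stay visible without using spaces."""
    return "".join(p.capitalize() for p in parts)

parts = ["prod", "web", "03", "akl"]
print("".join(parts))  # prodweb03akl - word boundaries are lost
print(camel(parts))    # ProdWeb03Akl - word boundaries survive
```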

Continue reading

Posted in General | 2 Comments

Array Offloaded VM Cloning

I’m continuing with my thoughts about deduplicated storage for VMs, how it is different, and how it requires different thinking. Today I will share some thoughts on cloning VMs, in particular how other data-efficient cloning technologies work compared to deduplication.

Continue reading

Posted in General | Comments Off on Array Offloaded VM Cloning

If You Can’t Measure It, You Can’t Manage It!


The above quote is attributed to both Peter Drucker and W. Edwards Deming, although it appears that neither actually said it. Even so, it is a driver for a lot of what operations teams do in IT: measure and record everything in case we need the information to manage something later. A whole swathe of management products exist to gather measurements into large piles of data and let you see what has happened. Most of these products will let you know when something happened, whether it’s a breach of a static threshold or a departure from “normal” status. This sort of massive data crunching is what computers are great for: a lot of data in and a little analytics out. Then a human must decide what to do with the analyzed data. These products are an open loop; they just measure and report.
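The two alert styles mentioned above can be sketched in a few lines; this is a hedged illustration only, with all function names and numbers made up for the example:

```python
from statistics import mean, stdev

def static_alert(value, threshold=90.0):
    """Open-loop check: report a breach of a fixed threshold."""
    return value > threshold

def baseline_alert(value, history, sigmas=3.0):
    """Open-loop check: report a departure from 'normal', defined
    here as more than `sigmas` standard deviations from the mean
    of recent measurements."""
    if len(history) < 2:
        return False
    mu, sd = mean(history), stdev(history)
    return sd > 0 and abs(value - mu) > sigmas * sd

cpu_history = [41.0, 43.5, 40.2, 42.8, 41.9]
print(static_alert(95.0))                 # True: fixed threshold breached
print(baseline_alert(70.0, cpu_history))  # True: far outside recent normal
```

Both checks still stop at reporting: neither takes any corrective action, which is exactly the open loop the paragraph describes.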

Continue reading

Posted in General | 1 Comment

Deduplicated VM Cloning and Backup

I mentioned earlier that I’d been thinking a lot about the consequences of using a deduplicated store for holding live VMs. This thinking came from hearing the SimpliVity pitch quite a few times, and also hearing the questions that came from the audience. To be clear, this isn’t something that has to be unique to SimpliVity, and SimpliVity did not ask me to write this post.
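To illustrate why a VM clone on a deduplicated store is cheap, here is a minimal sketch, assuming the generic model described in these posts (not SimpliVity’s implementation, and all names are hypothetical): a clone copies only the block list and bumps reference counts, so no data blocks are read or written.

```python
class DedupStore:
    """Toy deduplicated store where a VM is just an ordered list of
    references to shared, content-addressed blocks."""

    def __init__(self):
        self.refcount = {}  # block hash -> number of references
        self.vms = {}       # VM name -> ordered list of block hashes

    def create_vm(self, name, block_hashes):
        for h in block_hashes:
            self.refcount[h] = self.refcount.get(h, 0) + 1
        self.vms[name] = list(block_hashes)

    def clone_vm(self, src, dst):
        # Metadata-only operation: duplicate the block list and
        # increment reference counts; no block data moves.
        self.vms[dst] = list(self.vms[src])
        for h in self.vms[dst]:
            self.refcount[h] += 1

store = DedupStore()
store.create_vm("prodweb01", ["h1", "h2", "h3"])
store.clone_vm("prodweb01", "prodweb01-clone")
print(store.refcount["h1"])  # 2: both VMs reference the same block
```

The same metadata-only property is why a backup taken on such a store can also be near-instant: it is another set of references to blocks that already exist.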


Continue reading

Posted in General | 2 Comments

Hyperconverged like you just don’t care

Living in a small city, in a small country, I see a lot of small businesses. These customers will never have a virtualisation specialist on their staff; they barely have an IT specialist. These same customers find that their IT needs have grown beyond a single Windows server and some desktops. Some sort of virtualisation is going to be important for these customers, just as it is for enterprise customers.


Continue reading

Posted in General | 1 Comment

Not quite hyperconverged Backup

Why do backup products not come integrated into the storage that they require? Pretty much every backup product that you deploy wants a disk target to write to. That disk target has an x86 CPU in it, and the backup software could run on that CPU. This is one thing Rubrik has done: their product is a hybrid storage array that also runs their backup software. This backup array is a deduplicated scale-out storage appliance; using SuperMicro Twin hardware, it puts a four-node cluster in 2U. A basic three-node configuration should back up around 200 VMs, with about 100TB of backup capacity. For scale, more of these units can be added, apparently without a practical limit on the number of units.

Continue reading

Posted in General | 2 Comments