My friend Michael Webster wrote a great article about the current possibilities, limitations and issues around presenting a disk device over 2TB to a virtual machine on vSphere. He also gave great guidance on how you might work within the current limitations to meet unusual requirements. I have one thing to add:
I don’t want technical limitations in the hypervisor to limit my design choices.
Let’s leave aside the question of whether it’s sensible to have massive disks, and the design implications of having them. As with any design decision there are implications here (Michael discussed a few).
The progressive improvements in vSphere over each release have removed many of the limitations that constrained my designs in the past: massively multi-CPU VMs with lots of memory and huge IO capabilities; large, resilient host clusters; storage migrations and storage clusters. These changes mean that the technical limits of the hypervisor are far beyond what I need to fulfil almost all x86 workload requirements. The one place I may still hit a hypervisor limit is the 2TB disk limit, and needing to use the workarounds leaves me unsatisfied.
I imagine there is a team of VMware developers hard at work making disks larger than 2TB possible for VMs, and I hope they are given every resource they require. I look forward to the day when VM disk size is limited not by the hypervisor but by other design requirements.
What hypervisor limits affect your designs?
© 2012, Demitasse. All rights reserved. This post first appeared on the Demitasse blog www.demitasse.co.nz