The whole point of a software-defined data center, some have argued, is to automate the allocation of resources based on what applications need right now, not at some far-off future date. If efficiency is the goal, why provision more resources than your applications actually require? And why assemble a big pool of hardware just so you can subdivide it into clusters that are then sub-subdivided into segments? This was HPE's original stance on hyperconvergence. Indeed, HPE consulted with Docker Inc. with the intent of treating containerized workloads and virtual machine-based workloads equivalently.
What's happened since then?