For this discussion I want to focus on predictability. The term applies not only to performance but also to costs. Public cloud providers offer calculators that anyone can use to plug in compute and storage numbers and get a cost estimate for running a virtual machine in the cloud. Many organizations use these calculators to decide which workloads they will migrate to a cloud service and which storage tier(s) their data will live on.
Countless hours are spent testing and migrating these workloads, and then the first monthly bill rolls in. Bam! They see that first bill and jaws hit the floor. After reviewing the bill line by line, trying to figure out why their calculations were so far off, they find a line item labeled “data transfers” that makes up 25% of the overall bill.
External ancillary resource costs
What they failed to take into account in their initial cost calculations is that there are ancillary resource costs beyond compute and storage, such as network usage.
Public clouds charge for outbound data transfers and for data transfers between regions or even between different network segments. Users accessing servers and applications in the cloud, and even traffic between a cloud-hosted server and an on-premises server, can drive up the monthly cost, leaving the organization perplexed as to why it was so much higher than predicted.
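To make the arithmetic concrete, here is a minimal sketch of the kind of back-of-the-envelope estimate that would have caught this. The per-GB rates and traffic volumes below are placeholder assumptions, not any provider's actual pricing, so treat the output as illustrative only:

```python
# Minimal sketch: estimating the monthly data-transfer (egress) charge.
# The rates below are illustrative assumptions, not any provider's
# actual pricing -- check your provider's current rate card.

INTERNET_EGRESS_PER_GB = 0.09   # assumed $/GB for traffic out to the internet
CROSS_REGION_PER_GB    = 0.02   # assumed $/GB for traffic between regions

def monthly_egress_cost(internet_gb: float, cross_region_gb: float) -> float:
    """Return the estimated monthly data-transfer charge in dollars."""
    return (internet_gb * INTERNET_EGRESS_PER_GB
            + cross_region_gb * CROSS_REGION_PER_GB)

# Example: 5 TB out to users plus 10 TB of cross-region replication,
# on top of the compute-and-storage figure the calculator predicted.
compute_and_storage = 2000.0
transfers = monthly_egress_cost(5_000, 10_000)   # 5,000 GB + 10,000 GB
total = compute_and_storage + transfers
print(f"Transfers: ${transfers:,.2f} ({transfers / total:.0%} of ${total:,.2f})")
```

Run against realistic traffic estimates before migration, a few lines like this turn the “data transfers” line item from a surprise into a budgeted number.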
This exact scenario is why understanding the applications is so important.