The other day I was reading an article on chargebacks by Dennis Drogseth of EMA: http://www.apmdigest.com/the-three-faces-of-chargeback. He speaks very well about the interesting variables involved, which got me thinking about how some of these have been addressed in my own history, and where they stand today.
In my own history, I’ve implemented incredibly detailed systems for customers, charging back for every measurable detail: processor MHz rating, MB of RAM, and GB of storage used, even down to the kilowatt-hours consumed per machine.
I’ve had customers who’ve had such a difficult time with this single aspect of chargeback that they’ve simply given up and turned all of IT into a cost center. No attempt at chargeback was ever made. No physical or virtual asset, be it network, storage, processor, labor, etc., was ever charged back. In this way, the entire organization was willing to accept that the IT staff and budget were a given, and the cost of all projects was assumed by all departments, divided out equally, and thus simply granted.
Another customer had such a difficult time with this process that they charged a flat fee per server asset. This fee had nothing to do with the server’s configuration: the number of processors/cores, the amount of memory, or the amount of storage consumed. Nor did it have anything to do with whether the machine was physical or virtual.
This, of course, gets more difficult in an age of virtualization, and progressively more difficult as workloads migrate to the cloud. How do we measure these dollar amounts in an age of overcommitment against physical assets? Even more to the point, how do we do this when a workload can be dynamically migrated from location to location? The chargeback becomes very difficult to assign. We have a difficult time even assigning some level of labor cost to the asset.
So where do we go, and how do we accommodate the dynamic nature of the cloud? As I’m currently focused on storage, I’d like to take that as my use case. When a VM lives in the cloud, it consumes storage in each location in which it is supposed to reside. The formula, therefore, should be easy. Even if we employ deduplication or compression to save on drive space, disk is still a constant in as many locations as there are homes for the machine.
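That storage formula can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's billing logic: the location names, per-GB rates, and the choice to meter provisioned (rather than deduplicated) capacity are all assumptions for the sake of the example.

```python
# Hypothetical per-GB monthly rates by location -- invented for illustration.
RATE_PER_GB = {
    "datacenter-east": 0.10,  # $/GB/month
    "datacenter-west": 0.12,  # $/GB/month
}

def storage_chargeback(provisioned_gb, locations, rate_per_gb=RATE_PER_GB):
    """Charge for the full provisioned footprint in every location where a
    copy of the VM resides. Dedupe/compression savings stay with the
    provider, since provisioned disk is the constant we can reliably meter.
    """
    return sum(provisioned_gb * rate_per_gb[loc] for loc in locations)

# A 100 GB VM with homes in both sites:
cost = storage_chargeback(100, ["datacenter-east", "datacenter-west"])
print(round(cost, 2))  # 22.0
```

The key design choice here mirrors the point above: because disk is consumed wherever the VM can reside, the bill is a simple sum over locations, unlike processor and memory, where overcommitment muddies the math.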
Processor and memory, though, are much harder metrics to assign. I’ve yet to find a tool that will accurately assess them in the cloud age.