Question
When reporting the complexity of an algorithm, one assumes the underlying computations are performed on some abstract machine (e.g. the RAM model) that approximates a modern CPU. Such models allow us to report the time and space complexity of algorithms. Now, with the spread of GPGPUs, one wonders whether there are well-known models that can also take power consumption into account.
GPUs are well known to consume a considerable amount of power, and individual instructions fall into different categories of power consumption based on their complexity and their location on the sophisticated chip. Hence instructions, from an energy point of view, are not of unit (or even fixed) cost. A trivial extension would be assigning weights to operation costs, but I'm looking for a more powerful model where an operation/instruction might cost a non-constant amount of energy, e.g. a polynomial amount, or something even more complex: a function of the time elapsed since the start of the algorithm, or a model taking into account the probability of failure of the cooling system, which would heat up the chip and slow down the clock frequency, etc.
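To make the distinction concrete, here is a minimal sketch of what such a cost model might look like. Everything below is illustrative: the operation names, base weights, and the particular time-dependent penalty are assumptions invented for this example, not part of any established model.

```python
# Hypothetical sketch of an abstract machine with non-constant per-instruction
# energy costs. All names and cost functions are illustrative assumptions.

def constant_cost(op, t):
    """Classic unit-cost assumption: every operation costs 1 energy unit."""
    return 1.0

def time_dependent_cost(op, t):
    """Energy cost depends on the operation AND on elapsed time t,
    e.g. modeling a chip that draws more power as it heats up."""
    base = {"add": 1.0, "mul": 3.0, "mem": 5.0}[op]  # assumed weights
    return base * (1.0 + 0.01 * t)  # assumed 1% penalty per time unit

def run(trace, cost_fn):
    """Charge each operation in an instruction trace under a cost model."""
    t = 0.0
    energy = 0.0
    for op in trace:
        energy += cost_fn(op, t)
        t += 1.0  # unit time per instruction, for simplicity
    return energy

trace = ["add", "mul", "mem", "add"]
print(run(trace, constant_cost))        # 4.0 under the unit-cost model
print(run(trace, time_dependent_cost))  # larger, and growing with trace length
```

The point of the sketch is that the same instruction trace gets different energy charges under different models, and that the cost function can depend on global state (here, elapsed time) rather than being a fixed per-instruction weight.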
Are there such models where non-trivial costs and faults can be incorporated?
Explanation / Answer
There isn't an established model yet, but this is an area of active research right now. One of the experts on the algorithms side of things is Kirk Pruhs. His papers have more information, and you can also browse this presentation.