Since the invention of the integrated circuit, the compute performance of CPUs has been driven mainly by shrinking the distance between transistors and increasing the clock frequency.
Meanwhile, both processes have come to a natural end, as atoms have a fixed size that cannot be reduced any further with known technologies. The remaining option for further increasing computing speed is to use multiple compute cores in parallel. This is why we see all the 4-, 6- or even 12-core CPUs.
However, the dominating performance driver at the core level remains frequency, in other words: the number of steps a system can perform within a certain amount of time. Especially in a virtualized context, frequency is a relevant aspect:
Assume you have a 3.0 GHz CPU on which you run two virtual machines, each owning one virtual core. This leaves 1.5 GHz per machine. If you provisioned a third machine, the remaining maximum would drop to 1.0 GHz per virtual machine.
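The fair-sharing arithmetic above can be sketched in a few lines. This is a minimal illustration of the principle only; the constant and function name are hypothetical and assume an even split of one physical core across the VMs:

```python
# Hypothetical illustration: a fixed physical clock budget shared
# evenly among VMs that each own one vCPU on the same core.
PHYSICAL_GHZ = 3.0  # assumed clock rate of the physical CPU

def ghz_per_vm(num_vms: int) -> float:
    """Maximum sustained GHz per VM, assuming perfectly fair sharing."""
    if num_vms < 1:
        raise ValueError("need at least one VM")
    return PHYSICAL_GHZ / num_vms

for n in (2, 3):
    print(f"{n} VMs -> {ghz_per_vm(n):.2f} GHz each")
```

Running this prints 1.50 GHz for two VMs and 1.00 GHz for three, matching the example above.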
Unfortunately, the GHz actually provided are not visible to the VM - although a few providers allow their customers to configure not only vCPUs but also MHz - which makes it difficult for the user to understand the real VM performance purchased.
To overcome this obstacle, we developed, based on an idea by Roldan Pozo and Bruce Miller (both NIST), a mathematical experiment for which we know in advance the number of compute cycles needed to solve it. This allows us to determine the number of operations that were executed in a set amount of time. In a nutshell, this is what ASC is about: a measure of compute power that correlates strongly with the frequency available.
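The measurement principle can be sketched as follows. This is not the actual ASC workload, only a minimal sketch of the idea: run a task with a known operation count, time it, and derive operations per second:

```python
import time

def measure_ops_per_second(n_ops: int = 10_000_000) -> float:
    """Execute a fixed, known number of additions and time them.

    By construction the loop performs exactly n_ops additions, so the
    operation count is known in advance, as in the ASC approach.
    """
    start = time.perf_counter()
    acc = 0
    for i in range(n_ops):  # known workload: n_ops integer additions
        acc += i
    elapsed = time.perf_counter() - start
    return n_ops / elapsed

print(f"~{measure_ops_per_second():.3e} ops/s")
```

The resulting ops-per-second figure rises and falls with the clock frequency effectively available to the VM, which is the correlation ASC exploits.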