Simple Example of Time-Based Detection of Hypervisors
Well, the theory behind this is simple. Time the execution of a series of instructions across a statistically diverse range of test cases on an idle (rest-state) machine, then repeat the same measurements inside a few hypervisors. When you analyze the hypervisor data, you will notice large peaks that occur intermittently; these correspond to events such as VM exits, where the hypervisor traps an instruction and adds latency that bare metal does not show. Find the average height of those peaks, then measure how frequently comparable peaks occur in the normal (non-virtualized) operating environment.
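The statistics described above can be sketched roughly as follows. This is my own illustrative sketch, not the author's code: a real detector would time a VM-exit-forcing instruction (e.g. RDTSC around CPUID) in native code rather than a Python loop, and the `sigma` cutoff here is an arbitrary illustrative choice, not a calibrated value.

```python
import statistics
import time

def sample_timings(n=2000, work=50):
    """Time a short, fixed instruction sequence n times.

    In a native implementation this would be RDTSC around an
    exit-forcing instruction like CPUID; here a tiny arithmetic
    loop stands in so the sketch runs anywhere.
    """
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        x = 0
        for i in range(work):
            x += i * i
        samples.append(time.perf_counter_ns() - t0)
    return samples

def peak_stats(samples, sigma=5.0):
    """Flag samples far above the median as 'peaks'; return their
    average height and their frequency (fraction of all samples).
    sigma is an illustrative threshold, not a tuned constant."""
    med = statistics.median(samples)
    sd = statistics.pstdev(samples)
    threshold = med + sigma * sd
    peaks = [s for s in samples if s > threshold]
    avg_peak = statistics.mean(peaks) if peaks else 0.0
    freq = len(peaks) / len(samples)
    return avg_peak, freq

samples = sample_timings()
avg_peak, freq = peak_stats(samples)
print(f"median: {statistics.median(samples)} ns, "
      f"avg peak: {avg_peak:.0f} ns, peak freq: {freq:.4f}")
```

To apply the method in the text, you would record `avg_peak` and `freq` under known bare-metal and known hypervisor conditions, and compare a fresh run against those profiles.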
The code below does just that, although I was limited in the number of test cases since I ran everything on a single machine. Also, please note that to turn this into something more robust, you need to add code that establishes the baseline state of the machine before it tests for the presence of a hypervisor; otherwise you will get LOTS of false positives under heavy load. This is common to any timing-based detection method.
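As an illustration of the baseline check mentioned above, here is one possible sketch (my own, not the author's code): sample the timing of a fixed workload and refuse to run the detector when the jitter is too high, since heavy load makes timing peaks meaningless. The `cv_limit` cutoff is an illustrative guess that would need calibration per machine.

```python
import statistics
import time

def machine_is_quiet(n=500, work=50, cv_limit=1.0):
    """Crude load check: time a fixed workload n times and compute the
    coefficient of variation (stdev / mean). A heavily loaded machine
    shows much higher jitter, so only run the hypervisor test when the
    CV is below the (illustrative, uncalibrated) cutoff."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        x = 0
        for i in range(work):
            x += i * i
        samples.append(time.perf_counter_ns() - t0)
    mean = statistics.mean(samples)
    cv = statistics.pstdev(samples) / mean if mean else float("inf")
    return cv < cv_limit

if machine_is_quiet():
    print("machine looks idle enough to run the timing test")
else:
    print("too much load -- timing results would be unreliable")
```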