Perhaps a better example for the consciousness post would have been to give the application running on the operating system access to the source code of the hypervisor two levels down. That way, the app could decide to change the virtualization of the CPUs or the contention algorithms on the virtualized network interfaces, and bootstrap itself a new hypervisor to host its own OS.
The question naturally arises: what scope does an app - or an OS - have for detecting that it is running on a hypervisor rather than on real hardware? If the emulation is indistinguishable, you cannot tell - by definition. At that point the emulated thing and the thing emulated have become indistinguishable, and you have artificially re-created that thing.
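In practice, though, most hypervisors announce themselves unless deliberately configured to hide. As a minimal sketch (x86-specific, GCC/Clang, and assuming the hypervisor does not mask the bit): CPUID leaf 1 sets the "hypervisor present" bit in ECX, and leaf 0x40000000 exposes a vendor signature:

```c
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    /* CPUID leaf 1: feature flags. ECX bit 31 is always clear on
     * bare metal and set by most hypervisors. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not available");
        return 1;
    }

    if (ecx & (1u << 31)) {
        /* Leaf 0x40000000: hypervisor vendor signature packed into
         * EBX:ECX:EDX, e.g. "KVMKVMKVM" or "VMwareVMware". */
        char sig[13] = {0};
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(sig,     &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        printf("Hypervisor bit set; signature: %s\n", sig);
    } else {
        puts("No hypervisor bit - bare metal, or a hypervisor hiding itself");
    }
    return 0;
}
```

Of course, a hypervisor is free to leave the bit clear and to smooth over timing side channels too, which is exactly where "indistinguishable by definition" takes over.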
This is all well-worn territory in the strong vs. weak AI conversations, of course.
My favorite way of thinking about it is this:
1 - We don't understand consciousness, and thus we cannot be sure we won't re-create it by happenstance as we muck about with making computers act more intelligently.
2 - If we do create it, we likely won't know how we did it (especially since it is likely to be a gradual, multi-step thing rather than a big-bang thing).
3 - Because we won't know what we did to create it, we won't know how to undo it or switch it off.
4 - If it improves by iteration, and it iterates a lot faster in silicon than we do in carbon, we could find ourselves a distant second in the earth-based intelligence ranking table rather quickly :-)
Best if we run all the electrical generation on the planet with analog methods and vacuum tubes, so that we can at least starve it of electricity if push comes to shove :-)