If you haven’t read Nick Bostrom’s simulation argument, read it first. I’ll wait. Done?
Now, for some unfounded speculation:
1) How would you trick the scientists?
Nick proposes two ways to fake the environmental details of the simulation: 1) calculate the details on demand, and 2) mess with the agents’ minds to hide glitches.
To me, this sounds problematic. Using an intelligent agent to inspect individual minds in the simulation seems amateurish. If you were interested in the agents’ behavior, such manipulation would bias your results; and if you were not interested, there would be no point in manipulating them. But if you did not manipulate any minds, how would you build the simulation to be glitch-proof? How could you guarantee that, whenever an agent looked at any detail of the simulation, that detail could be generated on demand while maintaining narrative consistency?
For example: let’s say two agents looked at neighboring regions of space (whether through a microscope or a telescope does not matter). The details would be rendered as they went along. But what happens when their patches of detail intersect? They need to appear consistent, as if they had been “there all along.” But if the details are generated algorithmically on demand, how could that be ensured? You would either have to structure the mathematical model so that all such merges come out consistent (which seems impossible), or make the inconsistency part of the fabric of reality so that it seems “normal.” (Quantum weirdness?)
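For what it’s worth, the “consistent merges” horn of that dilemma has a familiar analogue in procedural generation in games: if every patch of detail is a pure, deterministic function of its coordinates plus a global seed, two agents rendering overlapping regions agree automatically, because the overlap is computed identically both times. A toy sketch (the hash-based noise and the two-agent setup are my own illustration, not anything from Nick’s paper):

```python
import hashlib

SEED = 42  # global seed: every render derives from this alone

def detail_at(x: int, y: int) -> int:
    """Deterministic 'detail' at a coordinate.

    The value depends only on (SEED, x, y), so any two agents who
    render overlapping patches compute identical values in the
    overlap -- consistency without storing anything.
    """
    digest = hashlib.sha256(f"{SEED}:{x}:{y}".encode()).digest()
    return digest[0]  # one byte of "physics" per cell

def render_patch(x0, y0, x1, y1):
    return {(x, y): detail_at(x, y)
            for x in range(x0, x1) for y in range(y0, y1)}

# Two agents examine neighboring, overlapping regions of space.
alice = render_patch(0, 0, 10, 10)
bob = render_patch(5, 5, 15, 15)

# The intersection looks like it was "there all along".
overlap = alice.keys() & bob.keys()
assert all(alice[c] == bob[c] for c in overlap)
print(f"{len(overlap)} shared cells, all consistent")
```

Of course, this only covers static detail; the moment agents can act on what they observe, the generated history also has to stay consistent with those interventions, which is presumably where the real difficulty lies.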
Another option: if the universe is finite, you could model it entirely. Perhaps your model could simulate “chaotic” (non-biological) events at a high level, so that only the environment of living beings would need high detail. For example, if no human being ever sees a supernova in galaxy NGC 2770, there is no need to “remember” exactly how it unfolded.
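A minimal sketch of that level-of-detail idea, assuming a simulation loop that runs expensive fine-grained physics only for regions near an observer and a cheap statistical update everywhere else (the radius, the region model, and the step methods are all invented for illustration):

```python
import math
from dataclasses import dataclass

OBSERVATION_RADIUS = 100.0  # hypothetical resolving range of an observer

@dataclass
class Region:
    center: tuple
    fine_steps: int = 0    # expensive, fully detailed updates
    coarse_steps: int = 0  # cheap, statistical updates

    def step_fine(self):
        self.fine_steps += 1    # stand-in for particle-level physics

    def step_coarse(self):
        self.coarse_steps += 1  # stand-in for a high-level summary

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step_world(regions, observer_positions):
    for r in regions:
        if any(dist(r.center, p) < OBSERVATION_RADIUS for p in observer_positions):
            r.step_fine()    # someone could look: render in full detail
        else:
            r.step_coarse()  # an unwatched supernova gets a summary, not a history

# One region near an observer, one distant galaxy nobody ever watches.
regions = [Region(center=(0.0, 0.0)), Region(center=(1e6, 1e6))]
for _ in range(10):
    step_world(regions, observer_positions=[(5.0, 5.0)])
print([(r.fine_steps, r.coarse_steps) for r in regions])  # [(10, 0), (0, 10)]
```

This is essentially how open-world games budget their computation: full simulation near the player, bookkeeping everywhere else.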
2) What would you want to discover?
Here is another possibility: perhaps there is no intent to deceive or even to harbor intelligence. Perhaps the operator is a physicist modeling potential universes in an attempt to solve the problem of heat death, and intelligence is just an accidental behavior of the system. He couldn’t care less about whether the intelligences realize that they are in a simulation or not.
Here is an interesting empirical question: could we discover anything to indicate the computational nature of the universe? So far, it seems not, as the universe seems analog (continuous rather than quantized). On the other hand, perhaps quantum mechanics is as weird as it is precisely because of the universe’s simulated nature, and we are just not aware of the computational implications yet. Or perhaps the simulation itself is analog. Either way, looking for physical laws that imply an underlying computational substrate could be worthwhile.
3) What factors would you alter?
Let’s speculate about the reasons a posthuman operator might have to build the simulation. Presumably, he would not merely repeat the same scenario: he would alter various “seed” elements to see how they affected the outcome. One obvious candidate would be the laws of physics. What might be the goal? Perhaps he wants to model a universe that is most suitable to life, or to a particularly creative form of life. Perhaps he wants to model new intelligences to see whether they are productive or destructive before creating them in vivo.
Suppose that most posthuman operators want to create a simulated universe that is more harmonious (however they define it) than their own universe. We might imagine an iterated chain of such simulated universes, where each attempts to better the one that created it. Perhaps that becomes the ultimate goal of every new universe: to develop beings who will go on to create a simulation that is better (less entropic, more creative, happier, longer lasting, etc.) than the one that created it. Shortly after the singularity, the entire universe is converted into computational substrate for the next simulation.
4) What’s the ratio of humans to posthumans?
The last scenario could offer a mathematical explanation for the Doomsday argument: the majority of intelligences are primitive mortals because, shortly after the singularity, the universe tends to be converted into a population of operators who create yet more simulations full of primitives.
Let’s suppose that all the living agents of every simulation become a fixed population of immortal operators who create yet more primitives, and so on. What is the ratio of operators to primitives? Whether each immortal operator spends his entire time “managing” one universe or an infinity of new ones, you could have an infinite number of operators and still have even more primitives. And this could be true regardless of whether the operator reproduces, as long as his offspring also spend their time building simulations that in turn create their own simulations.
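To make that counting concrete, here is a toy snapshot model under loudly invented assumptions: each immortal operator runs k universes at a time, each holding n living primitives, and every primitive eventually ascends into the operator pool:

```python
# Toy snapshot model of the operator/primitive ratio.
# Assumed parameters (invented for illustration):
k = 2     # universes each operator runs at once
n = 1000  # living primitives per universe

operators = 1  # a single root operator to start
for generation in range(6):
    primitives_alive = operators * k * n
    print(f"gen {generation}: operators={operators:,}  "
          f"primitives alive={primitives_alive:,}  "
          f"ratio={primitives_alive // operators}:1")
    operators += primitives_alive  # this generation of primitives ascends
```

The operator population explodes without bound, yet at every snapshot the living primitives outnumber the operators by the same factor of k·n, which is the shape of the claim above.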
The above scenario sounds pretty far-fetched. But it’s also unlikely that each young civilization is somehow destroyed before the singularity, and yet we find ourselves as the very unlikely citizens of a young civilization. To me, it is appealing to think that every posthuman civilization would bypass “inefficient” experimentation in reality and instead create a “more efficient” simulation to discover whatever truths it is after.
What if the beings running the simulation were interested in the agents’ behavior, but were unable to craft the simulation in such a way that manipulation was unnecessary? In this case, even though they know manipulation will confound their results, it is the best they can do with their current level of technology.
Regarding “detail on demand”: it seems plausible to me that since time for agents within the simulation is just another part of the simulation, “detail on demand” wouldn’t necessarily be a problem at all. Couldn’t the simulation just “pause” all of the agents’ thoughts, actions, etc., compute the necessary experience(s), and then resume? Sort of like buffering, I suppose. If you were the video itself, and not the one watching the video, you wouldn’t experience the pause, because you would be paused yourself.
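That buffering intuition is easy to make concrete: if an agent’s subjective time is just the simulation’s tick counter, then any wall-clock time the simulator spends computing between ticks is invisible from inside. A minimal sketch (the half-second sleep stands in for an expensive on-demand detail computation):

```python
import time

class Agent:
    def __init__(self):
        self.subjective_ticks = 0  # the only clock the agent can read

    def think(self):
        self.subjective_ticks += 1

agent = Agent()
wall_start = time.monotonic()

for _ in range(3):
    time.sleep(0.5)  # simulator stalls to compute "detail on demand"
    agent.think()    # from inside, exactly one tick elapsed

wall_elapsed = time.monotonic() - wall_start
print(f"outside: {wall_elapsed:.1f}s of wall time passed")
print(f"inside:  {agent.subjective_ticks} ticks, no pause detectable")
```

The agent’s only clock is `subjective_ticks`, so a stall of any length between increments simply does not exist for it.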