I spent the last three days at a conference on genetic programming. It made me think, even more than I usually do, about one of my favorite topics in artificial intelligence: priors over programs. Usually my thoughts on this topic are attempts to come up with practical solutions to problems in AI, but this time the direction is more metaphysical.

Quantum mechanics seems quite strange. I certainly don't know much about it, and I don't want to be one of those philosophers who talk about things they don't really understand; quantum mechanics is, of course, one of the classic domains where this happens. Nevertheless, the idea of multiple world states existing at the same time and affecting each other's development is quite unintuitive to me. I am certainly not suggesting that quantum mechanics is flawed as a model; the unintuitiveness is a flaw in me, not in the model, and I am curious to understand that flaw. Why does a theory which is, as far as I know, the best one we have so far for the universe (is that even true? Are there newer theories now generally accepted as better?) strike me as "wrong" on an emotional level?

That seems easy to explain: we humans evolved in the macroscopic world, which is well approximated by Newtonian physics (and perhaps even simpler models), so we never needed to evolve intuitions about quantum mechanics; the Newtonian approximation gives a far better trade-off in terms of computational gain (probably huge) versus information loss (totally negligible). This explanation is not perfect, since it makes some assumptions about our brains as computers and about what kinds of computation are easy to evolve, but I think it's good enough for now.

Suppose we are living in a simulation. Then the rules we infer about the world may be a sort of projection of the actual rules underlying the simulation onto our limited perception as simulated beings.
If we were to design a simulation, we would probably set the rules of the simulated world according to our existing computational models and the types of computations they handle well (easily, efficiently), rather than as a perfect imitation of our own physics, although I suppose we would be inspired by it or try to approximate it. The main point, anyway, is that the physics of our simulated world would not match the physics of our own world: it would likely be biased towards it, but unlikely to match it perfectly. Similarly, I don't expect the physics of our own world (presumably something resembling quantum mechanics) to be a perfect copy of the physics of the world of our creators, but rather a reflection of their computational models and abilities.

Surely, if we are living in a simulation, then our world is meta-computable, right? That is, computable by our creators, if not by us. This line of thought goes beyond mere computability, though. The simulation is meta-computable by definition; that tells us little, if anything. But since a simulation reflects existing computational models and abilities, can we infer something about the programs our creators tend to write? By "program" I refer to both software and hardware. I don't want to use the term "human computation", because that can be confused with the kind of computation done by our brains, while I actually mean the kind done by the machines we create.

Human programs have a strong structure. They are certainly not randomly generated Turing machines. We tend to write programs that are serial, deterministic and modular. We could surely add other properties to that list; maybe scalable and human-readable belong there too. That would mean that the physical laws of a simulated world we create would probably also be serial, deterministic and so on, quite unlike the physics of our own world.
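To make the idea of a biased generative model over programs slightly more concrete, here is a minimal sketch in Python. Everything in it (the grammar, the node names, the weights) is my own invented illustration, not something from the argument above: a toy prior over program trees whose sampling weights favor serial composition and module calls over nondeterministic choice.

```python
import random

# Hypothetical toy grammar: a "program" is a tree built from three
# constructs. The sampling weights encode the claimed human bias:
# serial composition is common, nondeterminism is rare.
WEIGHTS = {"seq": 0.7, "call_module": 0.25, "random_choice": 0.05}

def sample_program(depth=0, rng=random):
    """Draw one program tree from the toy prior."""
    if depth >= 3:                       # keep sampled programs small
        return "noop"
    kind = rng.choices(list(WEIGHTS), weights=list(WEIGHTS.values()))[0]
    if kind == "seq":                    # run two subprograms in order
        return ("seq", sample_program(depth + 1, rng),
                       sample_program(depth + 1, rng))
    if kind == "call_module":            # reuse one of a few named modules
        return ("call", f"module_{rng.randint(0, 3)}")
    return ("choice", sample_program(depth + 1, rng),  # rare nondeterminism
                      sample_program(depth + 1, rng))

random.seed(0)
print(sample_program())
```

Sampling many trees from this prior yields mostly sequential, modular structures; a prior fitted to real human code would presumably show the same skew, which is the sense in which a program prior induces a "physics" for the worlds those programs simulate.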
In the last paragraph I went from an imprecisely defined generative model over human programs to an imprecisely defined generative model over laws of physics. Can we do the reverse? Can we learn a generative model over programs given evidence of the "physical laws" of their computations? As an AI researcher, the answer I am looking for is a big "yes": for a machine to learn a program, a strong inductive bias is very useful, and this bias must come from somewhere. It doesn't have to come from the computations of human programs, but that is one possibility.

Finally I come to my main point. If we can infer something about our own computational models and abilities from the computations performed by human programs, can we infer something about those of our creators from the laws of physics of our world? What kind of prior over programs would generate physics like quantum mechanics and string theory? To make this at least a tiny bit practical: if our partial understanding of our own physics gives us some information about the computations done by our creators, can we use that partial information to build a prior over physics itself, and infer further physics using that prior?

I don't think this proposal is ridiculous. It is very similar to Solomonoff Induction: we would complete our physics by integrating over all simulators consistent with our partial physics, under some prior. The problem is not only that Solomonoff Induction is incomputable ("a minor setback", as someone at the conference remarked), but also that the simulators I am talking about are not Turing machines and do not exist in our own computational world but in that of our creators. Solomonoff Induction is hard, but this "Metaphysical Induction" is incomputably harder.
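The Solomonoff-style completion described above can at least be caricatured in code. The sketch below is my own toy illustration under heavy assumptions: instead of a universal machine, "programs" are bit-strings interpreted as infinitely repeating patterns; each gets prior weight 2^-length, and prediction integrates over every program consistent with the observations so far.

```python
from itertools import product

def run(program, n):
    """Interpret a bit-string 'program' as: repeat this pattern forever.
    A stand-in for a universal machine (hypothetical toy semantics)."""
    return [program[i % len(program)] for i in range(n)]

def solomonoff_predict(observed, max_len=8):
    """Weight every program by 2^-length, keep those consistent with
    the observations, and return the posterior probability that the
    next bit is 1."""
    weight_one = weight_total = 0.0
    for length in range(1, max_len + 1):        # bound program length
        for program in product([0, 1], repeat=length):
            prior = 2.0 ** (-length)
            prediction = run(program, len(observed) + 1)
            if prediction[:len(observed)] == observed:
                weight_total += prior
                weight_one += prior * prediction[-1]
    return weight_one / weight_total

# Short (simple) consistent programs dominate the posterior, so the
# predictor strongly expects the alternating pattern to continue with 0.
print(solomonoff_predict([0, 1, 0, 1, 0, 1]))
```

The real thing replaces repeating patterns with all Turing machines, which is exactly where incomputability enters; the metaphysical version additionally moves the machines into the creators' world, where we cannot even enumerate them.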