One thing that interests me is the "inductive leap" implicit in every modeling situation. Empirical risk minimization, Bayesian inference, MML (minimum message length), MDL (minimum description length), and others each take that leap in a different way. There is also the "mind projection fallacy" Jaynes liked to point out.
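To make the contrast concrete, here's a minimal sketch (my own toy example, not from any particular text) of how two of those frameworks take the leap differently on the same data: pure ERM picks whichever hypothesis minimizes training error, while a crude two-part-code MDL score (BIC-like, an assumption of mine for illustration) adds a complexity penalty per parameter and so prefers a simpler model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.shape)  # truly linear data plus noise

def train_mse(deg):
    """Empirical risk (mean squared error) of a degree-`deg` polynomial fit."""
    coeffs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coeffs, x)
    return float(np.mean(resid ** 2))

degrees = range(1, 7)

# ERM's leap: trust the hypothesis with the lowest empirical risk.
# Training error only drops as degree grows, so ERM favors complexity.
erm_choice = min(degrees, key=train_mse)

n = len(x)
def mdl_score(deg):
    """Two-part-code stand-in: data-fit cost plus a per-parameter penalty."""
    k = deg + 1  # number of coefficients
    return n * np.log(train_mse(deg)) + k * np.log(n)

# MDL's leap: trust the hypothesis that compresses the data best,
# which trades a little fit for a much shorter model description.
mdl_choice = min(degrees, key=mdl_score)

print("ERM picks degree:", erm_choice)
print("MDL picks degree:", mdl_choice)
```

On linear data like this, the MDL-style score lands on a lower degree than raw ERM, even though ERM's choice fits the training points more closely; that gap is exactly the inductive leap being taken on different grounds.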
Understanding what modelers do is a good way to understand what AI will be doing.