It’s publishing season. Like buses, nothing comes along for a while, and then all the papers arrive at once. This week the Journal of the Royal Society Interface released our manuscript on the automated calibration of complex biological simulations, another grand effort three years in the making (since I started at the University of Sydney). The original idea formed near the end of my PhD, and it is part of a larger theme on how to engineer accurate and representative simulations of biological systems that are incompletely understood: how do you simulate something if you don’t know how it works?
The answer rests in part on “calibration”, a process whereby you adjust the simulation such that its output matches known reality. Typically this involves finding values for parameters whose biological correlates are unknown. There are any number of approaches for doing this; the trick for complex system simulations is how you measure the difference between simulation and reality. Being complex, these systems cannot be well characterised by single observations or metrics alone, and that immediately blows standard techniques out of the water. As an example, in this paper we employ the “ARTIMMUS” simulation as a test case; it simulates multiple sclerosis in mice. Four chief T cell populations are involved in the disease and the subsequent recovery stages, each growing in size, peaking, and falling again in its own unique manner. All four are critical, and you cannot calibrate on the basis of one alone. Now, that’s quite low level; why not just characterise “disease severity” instead? Well, for starters “disease” is a very emergent property that is hard to replicate in simulation; we tend to deal with more concrete measures that can be tied to specific phenomena. Take multiple sclerosis: there are any number of ways that different areas of the nervous system can be impacted to deliver a given degree of debilitation.
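To make the idea of calibration concrete, here is a minimal sketch of one possible calibration objective: for each T cell population, score how far the simulated time course deviates from the observed one. This is an illustrative example only, not the paper’s actual metric; the data and population values below are hypothetical.

```python
def population_error(simulated, observed):
    """Sum of squared differences between two equal-length time courses."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

# Hypothetical time course for one T cell population (cell counts over time).
observed = [10, 80, 150, 90, 30]   # known reality
simulated = [12, 70, 160, 85, 25]  # simulation output for one parameter set

# Calibration then searches parameter space for values that minimise this
# error -- and, for complex systems, one such error per population.
error = population_error(simulated, observed)
```

A single-objective calibration would collapse the per-population errors into one number; the point made above is that doing so hides which population the fit sacrifices.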
Enter our approach. We use multi-objective optimisation to simultaneously evaluate several metrics of the simulation’s capture of important biological features, and find appropriate parameter values accordingly. We name this technique Multi-Objective Calibration (MOC). MOC exposes where several parameters may trade off against one another to deliver a given simulation dynamic, or where certain aspects of the simulation dynamic are maximised at the expense of others. This raises another intriguing possibility: what does it mean if no parameter values can be found that simultaneously align all aspects of a simulation’s output with reality? We propose that this points to a simulation that fails to adequately capture the complexities of the biological components: the changes that need to be made are not to parameter values, but to what those parameters represent. Perhaps an important cell population or component of the biology is missing from the simulation, or has been incorrectly captured. In this manner, we propose that MOC can play a vital role in guiding simulation design and development, not only in parameter tuning at the end.
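The trade-offs that multi-objective optimisation exposes can be sketched with the standard notion of Pareto dominance: a parameter set survives only if no other candidate is at least as good on every objective and strictly better on one. The candidate scores below are hypothetical, purely to illustrate the mechanics.

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b on every
    objective (lower is better) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical calibration errors for four candidate parameter sets,
# one error per biological feature being matched (lower is better).
candidates = [(0.1, 0.9), (0.9, 0.1), (0.5, 0.5), (0.6, 0.7)]
front = pareto_front(candidates)  # (0.6, 0.7) is dominated by (0.5, 0.5)
```

The surviving front is exactly where the trade-offs described above become visible: each remaining candidate fits one feature well at some cost to another, and an empty or uniformly poor front hints that no parameter values can reconcile all objectives at once.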