As well as developing and maintaining metabolic reconstructions of yeast, E. coli and CHO cells, Pedro Mendes and I created an algorithm [1] for turning these static networks into dynamic kinetic models. We include any known parameters, and guesstimate the remainder.
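To give a flavour of what this looks like in practice, here is a minimal sketch in Python; it is not the published algorithm (see [1] for that), just the general recipe: take a stoichiometric matrix, attach a generic rate law to every reaction, use measured parameters where they exist, and fall back on a default guess everywhere else. The network and numbers are entirely hypothetical.

```python
# A minimal sketch of the idea, not the published algorithm [1]:
# stoichiometric matrix + generic rate laws + measured parameters
# where available + a default guess everywhere else.
import numpy as np
from scipy.integrate import solve_ivp

# Toy linear pathway: X0 -> A -> B -> C -> sink
# Rows = internal metabolites (A, B, C); columns = reactions v1..v4.
N = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1,  0],
              [ 0,  0,  1, -1]], dtype=float)

measured = {1: 2.0}        # suppose only v2's rate constant has been measured
DEFAULT_K = 1.0            # the "guesstimate" for every unmeasured parameter
k = np.array([measured.get(j, DEFAULT_K) for j in range(N.shape[1])])

def rates(x):
    # Generic irreversible mass-action: v_j = k_j * product of its substrates.
    # v1 consumes only the external species X0 (held constant), so v1 = k1.
    substrates = np.clip(-N, 0.0, None)     # stoichiometries of consumed species
    return k * np.prod(x[:, None] ** substrates, axis=0)

def dxdt(t, x):
    return N @ rates(x)

sol = solve_ivp(dxdt, (0.0, 50.0), np.ones(3), method="LSODA")
print(sol.y[:, -1])        # concentrations of A, B, C at t = 50
```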
But how — the (frequentist) statisticians amongst you may well ask — can we create a model with thousands of parameters, when we only have measurements for tens? There are two justifications for this. Firstly, dynamics are mostly driven by the structure (which we know), rather than the parameters (which we guess). Secondly, on a more philosophical level …
… is it better to model the well-characterised pathway in isolation, or as part of a worse-characterised wrapper?
The real issue with these big metabolic models, with their thousands of variables, is actually numerics. Yunjiao Wang and I found [2] that, even for the simplest linear pathway model, we should expect instability, stiffness and even chaos as the chain increases in length. In our full model of metabolism, the stiffness ratio was found to be around 10^11. Given the different timescales over which metabolism operates, this is not so surprising. But it means that today’s systems biology tools simply cannot robustly simulate models of this size and with these numerical instabilities.
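To make the stiffness point concrete, here is a rough illustration in Python (not the analysis from [2]): for a first-order chain the Jacobian is lower triangular, its eigenvalues are just the negated rate constants, and so the stiffness ratio grows directly with the spread of timescales. The rate constants below are drawn at random purely for illustration; the 10^11 figure comes from the full model, not from this toy.

```python
# Rough illustration of stiffness in a first-order chain
# dx_i/dt = k_{i-1} x_{i-1} - k_i x_i (hypothetical rate constants).
import numpy as np

def chain_jacobian(ks):
    J = np.diag(-ks)              # each species decays at its own rate
    J += np.diag(ks[:-1], -1)     # and is fed by the species before it
    return J

rng = np.random.default_rng(0)
for n in (5, 50, 500):
    ks = 10.0 ** rng.uniform(-5, 5, size=n)   # rates spread over 10 orders of magnitude
    lam = np.linalg.eigvals(chain_jacobian(ks))
    ratio = np.abs(lam.real).max() / np.abs(lam.real).min()
    print(f"chain length {n}: stiffness ratio ~ {ratio:.1e}")
```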
Whole-cell models are of increasing importance to the community. Jonathan Karr’s model of M. genitalium [3, also the focus of a recent summer school] avoids the issue of dynamically modelling metabolism by using flux balance analysis (FBA). We would, of course, much prefer the use of full kinetic models. But, right now, such models are neither usable nor useful. The onus to address their utility shouldn’t be on the software developers; instead, we modellers need to look into alternative ways to represent metabolism that will allow for its dynamic simulation at the whole-cell level.
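For readers unfamiliar with FBA, its appeal is easy to show in a few lines: kinetics are replaced by a linear programme over the steady-state fluxes, so no rate constants are needed at all. The toy network and bounds below are hypothetical, and nothing like the scale of Karr’s model.

```python
# Minimal FBA sketch (a toy, nothing like the scale of [3]): choose
# fluxes v that maximise an objective subject to steady state N v = 0
# and capacity bounds. No kinetic parameters appear anywhere.
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass (2 metabolites, 3 reactions).
N = np.array([[1, -1,  0],
              [0,  1, -1]], dtype=float)
objective = np.array([0.0, 0.0, -1.0])      # linprog minimises, so negate biomass flux
bounds = [(0, 10), (0, 1000), (0, 1000)]    # hypothetical cap of 10 on the uptake flux

res = linprog(objective, A_eq=N, b_eq=np.zeros(2), bounds=bounds)
print(res.x)    # expected optimum: every flux pushed to the uptake limit of 10
```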
References
- [1] Kieran Smallbone and Pedro Mendes (2013) “Large-scale metabolic models: From reconstruction to differential equations” Industrial Biotechnology 9:179-184. doi:10.1089/ind.2013.0003
- [2] Kieran Smallbone and Yunjiao Wang (2015) “Mathematising biology” MIMS EPrints 2015.21
- [3] Jonathan Karr and friends (2012) “A whole-cell computational model predicts phenotype from genotype” Cell 150:389-401. doi:10.1016/j.cell.2012.05.044
Interesting thoughts, although I think you are using the “frequentist statistician” slur slightly out of context. The frequentist would be interested in the number of samples you used to determine your “tens” of parameters, the resulting precision of your estimates, and thus some sort of precision of prediction. Without a measure of uncertainty no model is valid – you may as well roll a die.
For the other 1,000 … of course structure is paramount, but if parameters are unimportant then set them all to 1. Parameters ARE important, but maybe the accuracy and precision of such parameters can be greatly relaxed.
Agreed, a poorly specified but “complete” model has to be better than a precise but merely local (and hence inaccurate) model – unless there is very low linkage to the rest of the system. For example, whether I have a cup of green tea or oolong tea at the tea shop I am sitting in will not affect the price of oil in Alberta.
As you say, large-scale models automatically induce instability, so I still think the reductionist approach is best where possible – we should not be afraid of the highly stable, precise and accurate black-box model.
Best, D.B.