
Mini response to “Against Models as Propaganda”

I wrote a quick response with some thoughts and criticism of an article called Against Models as Propaganda by Alasdair Munro and Thomas House. You should read that first.

I think the article might put too much weight on the idea that going into an analysis with a policy preference is bad. The examples shown are fundamentally scientific errors that may or may not have been accompanied by a preference for a given policy.

For example, the people who developed covid vaccines likely went into their paper write-ups thinking that their vaccine should be used. Why do we not think that this policy preference affects their work? Probably because RCTs are tightly regulated and accountable to external review boards designed to minimise bias. Their design and subsequent analysis are far more standardised than, say, counterfactual models of infectious disease. The whole point of this process is to make trust in the objectivity of the researcher unnecessary.

Stepping outside of usual practice, for example claiming that you sequenced some genetic data on a machine you built in your garage, is enough for your results to be treated with extreme caution in other fields. Modelling is a bit of a wild west when it comes to standard practice: there is little common agreement over what the sensible questions to ask are or how best to answer them. There is a lot of scope within modelling for improving the shared knowledge base and standardising analyses that are common to many diseases or need to be repeated many times. Here is some good work along these lines about Rt; it would have been handy to have had this to hand at the start of the pandemic.
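
To give a flavour of the kind of analysis that could be standardised, here is a rough sketch of a Cori-style Rt estimate from an incidence time series. The case counts and generation-interval parameters below are invented for illustration (they are not from the linked work), and a proper implementation handles uncertainty, imported cases, and reporting delays far more carefully.

```python
# Illustrative sketch only: a bare-bones Cori et al. (2013)-style Rt estimate.
# The case counts and generation-interval parameters are made up for demonstration.
import numpy as np
from scipy import stats

cases = np.array([4, 6, 9, 14, 20, 31, 45, 60, 72, 80,
                  85, 82, 75, 66, 58, 49, 41, 35, 30, 26])  # hypothetical daily counts

# Discretised generation interval, assumed gamma-distributed with mean ~5 days
gi_support = np.arange(1, 8)
gi_pmf = stats.gamma(a=4.0, scale=1.25).pdf(gi_support)
gi_pmf /= gi_pmf.sum()

def estimate_rt(incidence, w, window=3, prior_shape=1.0, prior_scale=5.0):
    """Posterior mean of Rt under the renewal equation with a gamma prior,
    aggregated over a trailing window (in the spirit of Cori et al. 2013)."""
    K = len(w)
    rt = np.full(len(incidence), np.nan)
    for t in range(K + window - 1, len(incidence)):
        # infection pressure Lambda_s = sum_k I_{s-k} * w_k for each day in the window
        lam = [np.dot(incidence[s - K:s][::-1], w) for s in range(t - window + 1, t + 1)]
        shape = prior_shape + incidence[t - window + 1:t + 1].sum()
        rate = 1.0 / prior_scale + sum(lam)
        rt[t] = shape / rate  # posterior mean
    return rt

print(np.round(estimate_rt(cases, gi_pmf), 2))
```

The point is not this particular sketch but that the choices inside it (generation interval, smoothing window, prior) are exactly the kind of thing a shared, standardised analysis could pin down once rather than having every group re-derive them mid-outbreak.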

In general, we should be more brutal in how we greet the development of new models, requiring them to feed directly into (or at least engage with) research consortia in that area. A good example of this is the forecasting hubs that evaluate the predictive power of the models submitted to them. If you're going to publish a new forecasting model, you should have to show that it performs decently against existing models on some relevant data, as in the sketch below. We should be careful that predictive power doesn't become the only metric for evaluating new forecasting models, but it's probably a better measure of model quality than the prestige of the author's (or last author's) institution, which is what we implicitly rely on at the moment.
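
As a toy illustration of what "decent performance compared to other existing models" might look like, here is a sketch that scores a hypothetical new model against a naive persistence baseline using mean absolute error on held-out data. The data, the model, and the function names are placeholders I've made up; real hubs use proper probabilistic scores (such as the weighted interval score) across many locations and horizons.

```python
# Illustrative sketch only: one-step-ahead point forecasts scored against a
# naive baseline with mean absolute error. All numbers and models are invented.
import numpy as np

observed = np.array([120, 135, 150, 170, 160, 155, 140, 130])  # hypothetical weekly cases

def persistence_baseline(history):
    """Forecast the next value as the last observed value."""
    return history[-1]

def new_model_forecast(history):
    """Placeholder for the model being proposed (here: a crude linear trend)."""
    return history[-1] + (history[-1] - history[-2])

def mae_one_step(series, forecaster):
    """Mean absolute error of one-step-ahead forecasts over the series."""
    errors = [abs(forecaster(series[:t]) - series[t]) for t in range(2, len(series))]
    return np.mean(errors)

print("baseline MAE: ", mae_one_step(observed, persistence_baseline))
print("new model MAE:", mae_one_step(observed, new_model_forecast))
```

If the new model can't beat something as crude as a persistence baseline on relevant data, that tells you more about its usefulness than where its authors work.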

The final thing that makes me wary of placing so much emphasis on the perceived objectivity of researchers (rather than focusing solely on technical flaws in their analysis) is that over the pandemic I've learned that scientists should be beholden to their fickle, anonymous internet fanbase as little as possible. Every camp of covid opinion has its own scientist-champions who are perceived by their followers as perfectly rational; we need to be careful not to let the evaluation of research slip into a popularity contest.

To conclude, we should stick to evaluating these analyses purely in technical terms. In the examples provided, maybe you could construct a decent case that a preference for a given policy outcome led to the technical oversights. However, that isn't always so: you can have technical oversights without a policy preference, and a policy preference without technical oversights. In lieu of the ability to read minds, you must treat your policy opponents with generosity and respect, otherwise productive discussion is impossible. If a policy preference doesn't necessarily lead to technical oversights, then inferring a policy preference in someone you disagree with is not a decent way to discredit their work. Instead, post-pandemic we should think about how to catalogue, understand, and avoid these technical errors, working as collaboratively as possible.