Ontario Premier Doug Ford is pictured during a news conference after a meeting with Canada’s provincial premiers in Toronto, Ontario, Canada December 2, 2019. REUTERS/Carlos Osorio

On Friday, the Ontario government released scenarios modelling the impact of COVID-19 – and they were stark.

In the worst-case scenario – which assumes no measures are taken to restrict transmission of the virus – 300,000 people would be infected by the end of April (as of April 2, there were 3,255 confirmed cases and 67 deaths in the province). Applying the assumed case-fatality rate of 2 per cent, an estimated 6,000 of those people would die. More than 3,500 people would require treatment in intensive-care units (ICUs), yet Ontario currently has an ICU capacity of about 600 beds, with only 900 additional beds that could be redeployed to ICUs. Over the full course of the pandemic, this no-intervention scenario would produce five million infections and 100,000 deaths in Ontario (the death figure follows from applying the same 2-per-cent case-fatality rate). In short, in this worst-case scenario, Ontario’s health system would be completely overwhelmed.

Fortunately, the model that considers policy interventions, such as the ones currently in place, produces dramatically better figures. In this scenario, by the end of April, the estimated number of infected would be 80,000, and expected deaths would fall to 1,600. The peak ICU total would be within the province’s capacity, leaving room for the system to deal with other critical health issues on a more or less normal basis.

It is good that the public has access to these projections. They help guide planning and decision-making, they allow policy makers to assess risks and allocate resources, and they let all of us gauge whether what we are doing is working.

But there is another scenario, not discussed in the released models, that is much grimmer and reflects a different interpretation of the data.

To understand that, we must understand how models are created and how the tricky business of interpreting data is done.

As data is collected over time, it will reflect a wide variety of influences. In this instance, those influences include who was tested, how public-policy measures and enforcement have influenced individual behaviour, what additional efforts might be necessary, and how hospitals are coping.

To assist in the interpretation, quantitative models are typically used to create scenarios. A model is a highly simplified description of the epidemic process: it starts with a basic set of statistics, provides a structure that describes the process, introduces estimates of the parameters, and then generates outputs, which are the scenarios of interest to users. The model builder then “calibrates” the model to reproduce historical data or trends, chooses assumptions from within the empirically supported range, and runs simulations to show the consequences.
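To make this concrete, the sketch below implements the simplest common epidemic model, a discrete-time SIR (susceptible-infected-recovered) model. All of the parameter values are hypothetical and chosen purely for illustration; they are not Ontario's calibrated assumptions. The point is only to show how different assumptions about transmission generate different scenarios.

```python
# Toy SIR model illustrating the structure described above.
# All parameters (beta, gamma, population, initial infections) are
# hypothetical, not Ontario's calibrated values.

def simulate_sir(population, initial_infected, beta, gamma, days):
    """Run a discrete-time SIR simulation; return daily active-infection counts."""
    s = population - initial_infected  # susceptible
    i = float(initial_infected)        # currently infected
    r = 0.0                            # recovered
    infected_by_day = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_by_day.append(i)
    return infected_by_day

# Two scenarios differing only in the transmission assumption (beta):
no_measures = simulate_sir(14_500_000, 100, beta=0.30, gamma=0.10, days=60)
with_measures = simulate_sir(14_500_000, 100, beta=0.12, gamma=0.10, days=60)

# Lowering transmission flattens the curve: peak active infections fall sharply.
print(max(no_measures) > max(with_measures))
```

Varying a single assumption – here, the transmission rate – is exactly how a modeller produces the "no intervention" and "with intervention" scenarios the province released.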

But while model estimates can be an extremely valuable contribution to planning, they can also convey a false sense of precision if the plausible range of values for the assumptions is not considered.

Take our now-familiar epidemiological curve of new infections over time, for instance. To what extent, and how quickly, will physical distancing, hand-washing and stay-at-home behavioural change slow the accumulation of new infections? How much does this depend on compliance by citizens, and how is compliance measured? Will the impact be transitory or permanent? Each assumption generates a different profile and a different estimate of the number of infections and deaths. Even as these infection counts are disseminated, they are being revised because of lags in testing, delays in getting test results, and the inevitable correction of reporting errors. To add to the confusion, the statistics are being used in different ways by politicians, news media, experts and others.

So let’s focus on the just-released Ontario numbers, and the assumptions that underlie them.

The figure of 1,600 dead by April 30 is based on a case-fatality rate of 2 per cent, consistent with Ontario’s actual rate to date of 2.1 per cent of total cases. But strictly speaking, case-fatality rates (CFRs) should be based on resolved cases, in which every patient has either recovered or died. The Ontario estimate is therefore not a final one: it describes a situation still in flux, in which the majority of cases remain active. The difference matters greatly for interpretation. For Canada as a whole, the CFR calculated the way Ontario presented it is 1.7 per cent – but calculated on resolved cases, it is 8.7 per cent. The final CFR is likely to fall somewhere in between.

Indeed, the true case-fatality rate of the novel coronavirus is still unknown. For Germany, the comparable figures are 1.4 per cent of total documented cases and 4.9 per cent of resolved cases; for France, they are 10 per cent and 32 per cent, respectively. Even though comparisons across countries need to be interpreted with caution, the implication is that the final death toll for the 80,000 cases assumed as of April 30 could be substantially higher by the time all of those cases are resolved.
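The two ways of computing a CFR can be shown in a few lines of arithmetic. The deaths and total-case figures below are Ontario's numbers from the text; the recovered count is a hypothetical placeholder, since the article does not report one, used only to show how much the resolved-case calculation can diverge.

```python
# Two ways to compute a case-fatality rate (CFR).
# Ontario figures as of April 2 (from the text): 67 deaths, 3,255 cases.
# The recovered count below is hypothetical, for illustration only.

def cfr_total(deaths, total_cases):
    """CFR over all documented cases, the way Ontario presented it."""
    return deaths / total_cases

def cfr_resolved(deaths, recovered):
    """CFR over resolved cases only (each patient has recovered or died)."""
    return deaths / (deaths + recovered)

deaths, total_cases = 67, 3255
print(round(100 * cfr_total(deaths, total_cases), 1))   # 2.1 (per cent)

# With a hypothetical 600 recoveries, the resolved-case CFR is far higher:
print(round(100 * cfr_resolved(deaths, recovered=600), 1))  # 10.0 (per cent)
```

The same 67 deaths yield a rate roughly five times higher when measured against resolved cases rather than all documented cases, which is the gap the paragraph above describes.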

Similar concerns arise with the ICU modelling. Ontario’s model assumes that only about 1 per cent of the 80,000 cases registered by April 30 will require ICU treatment, but the current rate of about 190 ICU patients out of a case total of 3,255 is almost 6 per cent. If this rate continues, the 80,000 total cases would result in about 4,670 ICU beds needed – more than four times the number anticipated, and more than three times the number of beds planned for.
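The ICU arithmetic in the paragraph above can be reproduced directly; all of the figures come from the text itself.

```python
# Reproducing the ICU projection from the text.
icu_patients_now = 190     # current ICU patients (from the text)
cases_now = 3255           # current confirmed cases
projected_cases = 80_000   # cases projected by April 30 under current policy

icu_rate = icu_patients_now / cases_now       # current ICU rate, ~5.8 per cent
projected_icu = projected_cases * icu_rate    # beds needed if the rate holds

current_icu_capacity = 600
redeployable_beds = 900
planned_capacity = current_icu_capacity + redeployable_beds  # 1,500 beds

print(round(100 * icu_rate, 1))               # 5.8
print(round(projected_icu))                   # 4670
print(projected_icu / planned_capacity > 3)   # more than 3x planned capacity
```

If the current ICU rate persists, projected demand of roughly 4,670 beds exceeds even the expanded 1,500-bed plan by a factor of more than three, which is the gap the paragraph above flags.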

That means that, under the current policy setting, there appears to be a material risk that mortality could be substantially worse, and that our capacity to handle the pandemic could be overwhelmed.

To account for the material risk, policy measures need to be ratcheted up, and this must be done immediately with clear explanations as to the expected impact of each measure. As the old saying goes, we should hope for the best, but plan for the worst.

Given the potential for confusion, we must also establish a trusted source of public scenarios by province and a consistent aggregation to generate as clear a Canada-wide picture as possible, with explicitly laid-out assumptions. These can then serve as the basis for further scenario analysis and be complemented by alternative types of models and assumptions as necessary.

Ontario has helpfully started the discussion by presenting its working numbers. We all need to build on this through our actions and analysis, and it is important to sustain hope. But as we hunger for information to sort out the potential path of the pandemic and of our stay-at-home policies, these models must be put in their proper context.

This article originally appeared in The Globe and Mail.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
  • Dan Ciuriak is a senior fellow at CIGI, where he is exploring the interface between Canada’s domestic innovation and international trade and investment. He is the director and principal of Ciuriak Consulting, Inc.

  • Robert (Bob) Fay is director of research for digital economy at CIGI and responsible for research direction and related activities. He has extensive experience in macro- and micro-economic research and policy analysis.