Modelling our way to the answers


Lake ice laps the shore of Waterton Lake in spring (Photo: S Boon)

I ran across a press release recently for a paper on snowpack and climate change in Oregon. It caught my attention because it was written by a PhD student I’d spoken with at length while on sabbatical at Oregon State University. I’d missed his defense because I was visiting the HJ Andrews Experimental Forest, but I’d heard from colleagues that he’d done well, despite some questions about the validity of the assumptions underlying his modelling work, and thus about how widely applicable the results might be.

This didn’t surprise me.

Modelling is constrained by assumptions; in many cases we have no choice but to assume certain constants or a priori facts in the absence of data with which to parameterize and/or calibrate the model. Alternatively, we may be producing scenarios for future conditions that can’t be tested, because there aren’t yet any data against which to compare them.

The problem arises when model outputs are cited as fact, independent of these assumptions, with no qualifiers or uncertainty bounds provided around the outputs. This is what caught my eye in the press release: there was no indication of error bars or ranges on the model estimates.

It’s tempting to ‘sell’ model output as fact. In the current science climate, where applied research is championed at the cost of basic research, there’s often a lot of pressure to do just that.

In a recent project, we modelled the response of snowcover in a small watershed to both climate change and forest disturbance (mountain pine beetle, wildfire, clearcutting) (note that the report available at this link includes the materials/methods and tables/figures in the Appendices). The funding agency specifically requested a list of operational implications of the project, which our industry and government partners were very interested in. However, their language was focused largely on study results telling us ‘what will happen’ rather than ‘what might happen given these assumptions’.

Since I was very well aware of those assumptions, I was quite hesitant to go the ‘this is what will happen’ route. Thus you’ll notice in the report that – while we did supply a list of implications – we qualified it by noting that the modelling was ‘preliminary’, that the results ‘suggested’ certain outcomes, and that ‘further research was required’ to validate those outcomes.

What I specifically meant was that we needed to perform a more detailed sensitivity analysis of the model. It was especially important to figure out how sensitive model output was to the way in which we’d defined the hydrologic response units (HRUs). Would we get the same results if we used more HRUs, or if we used the same number but with spatial boundaries defined by slope aspect rather than by elevation? We also needed to do a more detailed analysis of the input dataset, as some of the difficulties in model calibration could be linked to potential errors in input data.
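To make that concrete, here’s a minimal sketch (in Python) of the kind of HRU sensitivity test I mean. Everything in it is hypothetical: run_model is a stand-in for whatever hydrologic model is in use, and the scheme names and numbers are invented. The point is simply to re-run the model under alternative HRU definitions and ask whether a summary output shifts by more than the calibration uncertainty.

```python
# Hypothetical sketch of an HRU sensitivity test. run_model() is a
# placeholder for the real hydrologic model; the schemes and numbers
# are invented for illustration.
import numpy as np

def run_model(scheme, seed):
    """Stand-in for a model run: returns a fake 100-year series of
    annual peak SWE (mm) under the given HRU scheme."""
    rng = np.random.default_rng(seed)
    return rng.normal(300.0, 30.0, size=100)

schemes = {"elevation_5hru": 0, "elevation_10hru": 1, "aspect_5hru": 2}
baseline = run_model("elevation_5hru", schemes["elevation_5hru"])

for scheme, seed in schemes.items():
    alt = run_model(scheme, seed)
    # If this relative change exceeds the calibration uncertainty,
    # the results are sensitive to how the HRUs were defined.
    delta = (alt.mean() - baseline.mean()) / baseline.mean()
    print(f"{scheme}: {delta:+.1%} change in mean peak SWE vs baseline")
```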

However, these caveats don’t make modelling an exercise in futility. My take on numerical modelling is that it helps us understand the mechanics of environmental systems, if not the absolute outcomes of changes in those mechanics.

For example, one of the conclusions of our modelling was that the effect of increasing temperature on snowpack far outweighed the effect of increased precipitation. So even if we get more snow, warmer air temperatures will still melt it faster.
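A back-of-envelope degree-day calculation shows why. In a degree-day model, daily melt is proportional to air temperature above freezing (melt = DDF * max(T, 0)). The numbers below are illustrative guesses, not values from our study, but they show how a modest warming can empty a snowpack sooner even when the pack starts out larger:

```python
# Illustrative degree-day melt calculation; the DDF and temperature
# series are typical-magnitude guesses, not values from the study.
import numpy as np

DDF = 4.0  # degree-day factor: mm of melt per degree C per day

def meltout_day(peak_swe, daily_temps):
    """Return the day on which a pack of peak_swe (mm) is gone."""
    swe = peak_swe
    for day, temp in enumerate(daily_temps, start=1):
        swe -= DDF * max(temp, 0.0)  # melt only on above-freezing days
        if swe <= 0.0:
            return day
    return None  # pack survives the whole period

days = np.arange(120)
temps = -5.0 + 0.15 * days  # idealized spring warming trend (deg C)

print("current climate:", meltout_day(400.0, temps))        # 400 mm SWE
print("warmer, +10% snow:", meltout_day(440.0, temps + 2.0))
```

With these invented numbers, the pack that starts 10% larger under a 2 °C warming still melts out nearly two weeks earlier.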

We also found that one of the big differences between disturbance scenarios was the (de)synchronization of snow melt across a watershed. Why is this important? Because if it’s synchronized (as under the wildfire scenario), you basically get all the snowpack melting at once, causing a large meltwater flush to the stream system. If it’s desynchronized, however (as under the pine beetle scenario), melt happens at different times across the watershed and the stream system is largely unaffected. You can imagine that each melt type would have different impacts on streamflow and downstream geomorphic processes.
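One simple way to quantify that (de)synchronization, given per-HRU model output, would be to look at the spread of melt-out dates across the watershed. This is just a sketch with invented dates, not our actual results:

```python
# Hypothetical sketch: quantify melt (de)synchronization as the
# spread of per-HRU melt-out dates. All dates here are invented.
import numpy as np

meltout_dates = {
    "wildfire":    [101, 103, 102, 104, 102],  # day of year, per HRU
    "pine_beetle": [ 95, 110, 102, 125, 118],
}

for scenario, dates in meltout_dates.items():
    spread = np.ptp(dates)  # range between earliest and latest melt-out
    label = "synchronized" if spread < 7 else "desynchronized"
    print(f"{scenario}: melt-out spread = {spread} days ({label})")
```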

While the modelling couldn’t specifically state that in 2050 we’ll have 25% less snowpack in the Okanagan region, it did tell me that we really needed to focus on the spatial distribution of melt across entire watersheds, and that we needed to incorporate the subtle – but important – effects of both slope aspect and elevation on snow processes.

Although I’ve used a specific case study to address the whole ‘modelling as reality’ question, there’s an entire field of inquiry focused largely on how to communicate scientific uncertainty to the public and policymakers. How do we present that uncertainty without reducing the efficacy and impact of our research results?

For more on this conundrum, see
