According to McGill University physics professor Shaun Lovejoy, the primary reason weather forecasts are so often wrong is that they don’t make use of the atmosphere’s “long-term memory.” By making greater use of historical climate data, he has developed a system that can better predict weather over periods of more than 10 days.
His new system is more accurate than others because most forecasting systems rely on computer models, which would be great at predicting the weather if it were more stable. Unfortunately, weather patterns are essentially random; although there are factors that influence them, so many factors act at the same time that it can be very difficult to determine what will impact which area. As such, computer models, which cannot adjust for the randomness, produce forecasts that are pretty useless beyond about a ten-day window.
Essentially, the standard models only make use of the atmosphere’s “short-term memory,” relying solely on recent data, while Professor Lovejoy’s new model draws on the atmosphere’s “long-term memory” and can better adjust for large-scale changes.
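To give a feel for the distinction (this is an illustrative statistical sketch, not Lovejoy’s actual model), a minimal example can compare a “short-term memory” process, where correlation with the past decays exponentially, against a “long-term memory” process, where it decays as a slow power law. Here an AR(1) process stands in for the former and fractional Gaussian noise, a standard statistical model of long-range dependence with Hurst exponent H, stands in for the latter; the parameter values are assumptions chosen for illustration.

```python
# Sketch: "short-term memory" vs. "long-term memory" in a time series.
# AR(1) autocorrelation decays exponentially (the past is forgotten fast);
# fractional Gaussian noise with H > 0.5 decays as a power law, so
# conditions from long ago still carry predictive information.

def ar1_autocorr(k, phi=0.8):
    """Autocorrelation of an AR(1) process at lag k: phi**k (exponential decay)."""
    return phi ** k

def fgn_autocorr(k, H=0.9):
    """Autocorrelation of fractional Gaussian noise at lag k.

    For H > 0.5 this decays roughly as k**(2H - 2), a slow power law,
    which is the statistical signature of long-range dependence.
    """
    return 0.5 * (abs(k + 1) ** (2 * H)
                  - 2 * abs(k) ** (2 * H)
                  + abs(k - 1) ** (2 * H))

for lag in (1, 10, 100):
    print(f"lag {lag:3d}: AR(1) = {ar1_autocorr(lag):.2e}, "
          f"fGn = {fgn_autocorr(lag):.2e}")
```

At lag 100 the AR(1) correlation is essentially zero while the long-memory correlation is still appreciable, which is why a forecaster exploiting long-range dependence can extract skill from historical data that short-memory models effectively discard.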
Part of the process requires acknowledging that temperature increases are followed by decreases in a fluctuating pattern. This helps to explain the “pause” in global warming that we’ve seen since 1998. Climate change hasn’t stopped; we’re just in one of the lower-temperature portions of the fluctuation. And, he argues, if emission rates stay at their current levels, we can expect that pause to end by 2020. We’ve already seen record high temperatures this summer, especially in the Pacific Northwest, so chances are, things will only get worse.
This new method of weather prediction could make forecasts much more accurate, especially over season-long periods. It could also help us construct better long-term models of climate change, which often fall into the same traps as ordinary weather forecasts.