ForecastAdvisor Weather Forecast Accuracy Blog

A Blog Documenting the Intersection of Computers, Weather Forecasts, and Curiosity
 

 

September 19, 2012

Icon Forecast Bias and Pleasant Surprises

Nate Silver wrote a book on forecasting in many different domains called "The Signal and the Noise". It will be published in a few weeks, but The New York Times excerpted a section on weather forecasting in its weekend magazine last weekend. You can read the excerpt here. It's a great read.

I wanted to focus on one of the last paragraphs in the excerpt from the New York Times. It says:

People don’t mind when a forecaster predicts rain and it turns out to be a nice day. But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic. “If the forecast was objective, if it has zero bias in precipitation,” Bruce Rose, a former vice president for the Weather Channel, said, “we’d probably be in trouble.”

Specifically, I'd like to shed more light on Dr. Bruce Rose's comment. First, let's look at the second part of his comment, "if it has zero bias in precipitation, we'd probably be in trouble". What is "bias in precipitation"? Bias in precipitation means that a forecaster has a tendency to forecast precipitation either more or less often than it actually occurs. For example, if a forecaster forecasts precipitation 35% of the time, but there is measurable precipitation on only 29% of days, that forecaster is said to have a "wet bias". Conversely, if a forecaster predicts precipitation only 20% of the time over the same period, the forecaster is forecasting precipitation less often than it actually occurs. This forecaster has a "dry bias".
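
As a rough illustration (not TWC's actual data pipeline, just a sketch with a hypothetical record format), bias can be computed directly from paired forecast/observation records, for example in Python:

    # Sketch: computing precipitation forecast bias from paired records.
    # Each record is a (forecast_precip, observed_precip) pair of booleans;
    # the record format is hypothetical, for illustration only.
    def precipitation_bias(records):
        n = len(records)
        forecast_rate = sum(1 for f, o in records if f) / n
        observed_rate = sum(1 for f, o in records if o) / n
        return forecast_rate, observed_rate, forecast_rate - observed_rate

    # A forecaster who calls for precipitation 35% of the time while it
    # occurs on 29% of days has a wet bias of +6 percentage points.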

Since Dr. Rose worked for The Weather Channel, I thought I would demonstrate using TWC forecasts. Let's say you get home from work, have dinner, and before you retire for the evening, you look on weather.com at tomorrow's forecast. Specifically, you look at the forecast graphic (or icon) to see if there will be precipitation tomorrow. So far this year, if you did that around the country, on average you'd see a precipitation icon 32% of the time. However, for those same locations, there was measurable precipitation only 27% of the time. The graph below shows TWC's one-day-out icon forecasts for each year from 2005 through 2012 year-to-date (January-June). The data cover around 800 cities in the United States, or about 275,000 forecasts each year; in total, the graph represents 2,063,813 forecasts from 2005 onward.

The graph makes clear that The Weather Channel has a "wet bias"; that is, it forecasts precipitation in its icons more often than measurable precipitation actually occurs. Obviously, if TWC, or any forecaster, could be a perfect forecaster, it would have zero bias. It would forecast precipitation only when precipitation would occur, and forecast no precipitation when it would be dry. But TWC isn't perfect, nor is any forecaster. So there will be some days it will forecast precipitation and it will be dry, and some days it will forecast dry skies and it will pour.

Let's look at just how imperfect TWC's icon precipitation forecasts are. We know that perfection is the upper bound, but what would be the lower bound we would expect? Well, since only about 27% of days have had measurable precipitation this year, if we always forecast precipitation, our percent correct would be 27%. However, if we always forecast dry days, our accuracy would be 100% minus 27%, or 73%. So our low bar would be 73% correct. The following graph shows The Weather Channel's accuracy with respect to just such an unskilled forecast. As you might expect, TWC's icon precipitation forecasts do show skill.
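
For a concrete sense of that lower bound, here is a minimal sketch using the same assumed record format as above: the better of the two constant forecasts ("always precipitation" vs. "always dry") sets the no-skill bar.

    def no_skill_accuracy(records):
        # Accuracy of the better constant forecast: always-wet vs. always-dry.
        n = len(records)
        wet_fraction = sum(1 for f, o in records if o) / n    # e.g. 0.27
        return max(wet_fraction, 1.0 - wet_fraction)          # e.g. 0.73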

So TWC's forecasts aren't perfect. So far this year, they have been wrong almost 18% of the time (for one-day-out). There are going to be days when TWC isn't quite sure whether it is going to rain (or snow) or not. The models are diverging, or there is some probability that the front will stall, or something. On those days, do you forecast precipitation, or not? This is where the bias that Dr. Rose talks about comes in. Generally, the average consumer of forecasts would rather be pleasantly surprised by a forecast for rain that turns out sunny than be caught unprepared for a rain storm. So forecasters like TWC, in general, are going to bias their forecasts toward precipitation. That is, when they are unsure, they are going to lean toward forecasting precipitation more often than not. And that's where the wet bias happens, and why forecasters predict precipitation more than it actually occurs.

So let's look at these incorrect forecasts. In 2012 so far, TWC has been incorrect 17.7% of the time predicting tomorrow's precipitation with its icon forecast. Those wrong forecasts are of one of two types: a forecast for precipitation that ends up dry, or a forecast for dry that ends up with precipitation. The first type, a forecast for precipitation that ends up dry, is a "pleasant surprise". Or as Mr. Silver put it in his book: "People don’t mind when a forecaster predicts rain and it turns out to be a nice day." The second type, a dry forecast that ends up wet, is "unpleasant". Or again as Mr. Silver puts it: "But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic." The following graph breaks down The Weather Channel's incorrect forecasts by type, either unpleasant or pleasant.
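
In confusion-matrix terms, the pleasant surprise is a false positive (precipitation forecast, dry day) and the unpleasant surprise is a false negative (dry forecast, wet day). A minimal sketch, again with the assumed record format from above:

    def surprise_rates(records):
        # Fraction of all forecasts that are pleasant vs. unpleasant surprises.
        n = len(records)
        pleasant = sum(1 for f, o in records if f and not o) / n    # forecast wet, was dry
        unpleasant = sum(1 for f, o in records if not f and o) / n  # forecast dry, was wet
        return pleasant, unpleasant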

As you can see on the graph, the ratio of pleasant to unpleasant surprises is about two-to-one, though it has been declining slightly since 2007. So far in 2012, for example, 6.7% of the time TWC forecasted a dry day and there was measurable precipitation, but 11% of the time TWC forecasted precipitation and it ended up dry. So when TWC is in error, it is far more likely to be a "pleasant" error than an unpleasant one, due to TWC's wet bias. And this is why Dr. Rose states that if forecasters had zero bias (at current accuracy rates) there'd be trouble.

So let's, just for fun, see if we can discern any patterns in TWC's icon selection algorithm. Can we figure out any rules for when a forecast will be a precipitation forecast? Well, we also collect TWC's probability of precipitation (PoP) forecasts. Are there any patterns to the selection of the icon that are related to PoP? I looked at just that for 2012 so far, and I'd graph it for you but it's not very interesting. Basically, TWC won't show a precipitation icon when its probability of precipitation is 0%, 10%, or 20%, and will always show a precipitation icon at 30%, 40%, and higher. Since we know that 27% of the time in 2012 so far there was measurable precipitation, what that says is that when TWC believes there is any greater-than-climatology chance of precipitation, it will display a precipitation icon in the forecast.
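
Stated as code, the inferred rule is just a threshold test; this restates the observed pattern and is not TWC's actual icon logic:

    def show_precip_icon(pop_percent, threshold=30):
        # Inferred rule: show a precipitation icon when PoP is at or above 30%.
        return pop_percent >= threshold

    # show_precip_icon(20) -> False; show_precip_icon(30) -> True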

What would happen if we changed TWC's current rule, which is to display a precipitation icon at a PoP of 30% or higher? How would that change the icon's accuracy and other properties? The following table and graph show this for 2012 so far (January through June) for one-day-out forecasts. The highlighted row at 30% is what the accuracy properties would be if TWC placed a precipitation icon when PoP is 30% or higher, and a non-precipitation icon below 30%, which is exactly what they do. If TWC placed a precipitation icon at 0% or higher (the first row), that would mean they always place a precipitation icon. In that case, they'd be right on precipitation days (27.21% so far this year) and wrong otherwise. Conversely, if they never placed a precipitation icon, they would be right 72.79% of the time. Of course, every time it rained or snowed that would be an unpleasant surprise, as it would not have been forecast. Sensitivity is a measure of how well the forecast identifies precipitation days: it is the percentage of correct forecasts, given that there was precipitation. Specificity is a measure of how often the forecast correctly calls for no precipitation on dry days. Icon Precip is just the total percentage of icons that would be precipitation icons under that rule.
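
Those properties can be reproduced by sweeping the icon threshold over the PoP values. Here is a sketch assuming records of (pop_percent, observed_precip) pairs, which is an assumed format rather than our actual data layout:

    def threshold_table(records, thresholds=range(0, 110, 10)):
        # For each candidate icon threshold, compute percent correct,
        # sensitivity, specificity, and the share of precipitation icons.
        n = len(records)
        rows = []
        for t in thresholds:
            tp = sum(1 for pop, obs in records if pop >= t and obs)
            fp = sum(1 for pop, obs in records if pop >= t and not obs)
            fn = sum(1 for pop, obs in records if pop < t and obs)
            tn = sum(1 for pop, obs in records if pop < t and not obs)
            accuracy = (tp + tn) / n
            sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
            specificity = tn / (tn + fp) if (tn + fp) else 0.0
            icon_precip = (tp + fp) / n
            rows.append((t, accuracy, sensitivity, specificity, icon_precip))
        return rows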

There are a few interesting items that pop out. One is that accuracy, as measured by the percent of correct icon forecasts, is maximized when the rule is to display a precipitation icon only at 40% PoP or higher. That gives us a percent correct of almost 84%; the current rule TWC uses gives only about 82%. Even though the current rule has a slightly lower accuracy, it is most assuredly of far greater value to TWC's customers, for two reasons. The first is that pleasant surprises are preferred over unpleasant ones, and only thresholds at 30% or under have more pleasant surprises than unpleasant ones. The second, and probably more important, reason is that at a 30% threshold you maximize the average of sensitivity and specificity. Stated another way, it's the point at which the distance to perfection (100% in both sensitivity and specificity) is minimized. Now, that's assuming that you value sensitivity and specificity equally. But that is a whole other topic!
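
"Distance to perfection" here is the straight-line distance from a threshold's (sensitivity, specificity) point to the perfect corner at (100%, 100%). Picking the threshold that minimizes it could look like this, reusing the rows from the sketch above:

    from math import hypot

    def best_balanced_threshold(rows):
        # Choose the threshold whose (sensitivity, specificity) pair is
        # closest to the perfect (1.0, 1.0) corner.
        return min(rows, key=lambda r: hypot(1.0 - r[2], 1.0 - r[3]))[0]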

I hope you found this informative. There is more to come. In the meantime, please check out Dr. Eric Bickel's page here, which has some interesting tables on TWC's PoP forecasts. You can also check out the paper he and I co-authored on probability of precipitation forecasts, which appeared in the Monthly Weather Review, titled "Comparing NWS PoP Forecasts to Third-Party Providers".

 

 