Posted at 10:45 AM ET, 12/28/2010

"False Alarmageddon" rebuttal

By Steve Tracton

Jamie Yesnowitz, Capital Weather Gang's (CWG) "Weather Checker," referred to the uncertainty aspect of the CWG forecasts, yet called CWG's performance "less than accurate." This suggests to me he might not understand the basic concepts of probability.

No single probability forecast can ever be judged right or wrong, unless it is either 0% or 100%, i.e., absolute certainty. Even if the probability for heavy snow were 90%, there is a 10% chance it would not verify. It would be meaningful if and only if you had a large enough sample to say that under similar situations CWG did or did not get it right 90% of the time (the same is true of any estimate of probability).
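The point about needing a large sample can be made concrete: a probability forecaster is verified by calibration over many cases, not by any single one. A minimal sketch (the function name and the data are illustrative only, not CWG's actual records):

```python
from collections import defaultdict

def calibration(forecasts, outcomes):
    """Group forecasts by stated probability and compare each group's
    stated probability to the observed frequency of the event."""
    buckets = defaultdict(list)
    for p, hit in zip(forecasts, outcomes):
        buckets[p].append(hit)
    return {p: sum(hits) / len(hits) for p, hits in buckets.items()}

# Ten forecasts of 90%: if the event occurs 9 times out of 10,
# the forecaster is well calibrated at that probability.
forecasts = [0.9] * 10
outcomes = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
print(calibration(forecasts, outcomes))  # {0.9: 0.9}
```

On this view a single 90% forecast that fails to verify is not "wrong"; only a pattern of 90% forecasts verifying well below 90% of the time would be.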

This should not be taken as a cop-out or a way out of ever being wrong or making a busted forecast. Ultimately the forecaster's experience and best judgment must commit to their best single bet. But almost invariably that will have some degree of uncertainty, i.e., a relative degree of confidence, and it should not be characterized as inaccurate if the forecaster(s) effectively communicate that uncertainty.

A single best bet will satisfy those for whom the level of uncertainty may be immaterial (but they'll complain if the best bet turns out wrong). For others, information on the uncertainties may be extremely consequential to their needs and requirements.

The purpose of providing confidence measures, whether probabilistic or qualitative (e.g., high, medium, or low), is to enable users of weather information to make weather-dependent decisions, whether changing personal travel plans, deciding whether or not to stage snow removal equipment, and so on.

It boils down to what I refer to as the "threshold of pain": how much risk are you prepared to assume, from a cost/benefit point of view? Are you willing to change your flight reservations when the odds of flight cancellation are 10%? 50%? Or not until they reach 90%? At what level of confidence are you willing to risk making the wrong decision versus how great you'll feel if it turns out to be just the right move?

The threshold would obviously be different when the consequences of the decision are starker. Would you get on a plane knowing the odds of an accident for some reason were (unrealistically) 1%, or avoid boarding only when the odds were greater than 50%? Is it better to stage plows and salt trucks and wind up not needing them, or to ignore the threat at some level of probability, knowing that if it does snow, road accidents and worse might occur?
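The "threshold of pain" framing matches the classic cost/loss decision rule from the forecast-use literature: take protective action when the forecast probability exceeds the ratio of the cost of protecting to the loss if you are caught unprotected. A minimal sketch, with made-up dollar figures purely for illustration:

```python
def should_protect(prob_event, cost_protect, loss_if_unprotected):
    """Act when the expected loss from inaction exceeds the cost of acting,
    i.e., when prob_event exceeds the cost/loss ratio."""
    return prob_event * loss_if_unprotected > cost_protect

# Staging plows: cheap relative to accident losses, so even 30% odds justify it.
print(should_protect(0.30, cost_protect=10_000, loss_if_unprotected=100_000))  # True

# Rebooking a flight: costly relative to the pain of a cancellation, so 30% odds don't.
print(should_protect(0.30, cost_protect=300, loss_if_unprotected=500))  # False
```

The same 30% forecast produces opposite decisions for the two users, which is exactly why a probability is more useful than a bare yes/no.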

The screaming message is that CWG continually let it be known that this particular case was characterized by large uncertainty, up to and including after snow began to fall around the metro region. Most importantly, CWG is skilled in being able to discriminate cases like this from those which are justifiably more predictable, such as was true with the snowstorms last winter.

In referring to the Saturday afternoon forecasts of 3-6" on the snowfall amount chart, Jamie did not acknowledge that CWG had indicated about equal odds from <1" to 12+ inches. By definition, this indicates that the actual amounts and distribution of snowfall were essentially unpredictable. What this translates to in actions or snow-dependent decisions is completely user-dependent, governed by each user's threshold of pain: varying from it not mattering, to taking the "best bet" forecast (3-6") as gospel, to playing the odds of departures from the best bet in the context of some specific weather-dependent decision.

Finally, Jason got it just right: he, Wes, and the others followed best practices in modern weather forecasting and in communicating weather information. They are most certainly ahead of the pack among local sources, and this may be true nationally.

This appraisal might seem biased due to my affiliation as a member of the Gang, but the approach employed by CWG is exactly what, for example, independent reviews of the forecast process advise (see, for example, recommendations from the National Research Council).

(These views are those of the author alone and intended to promote discussion...)

By Steve Tracton  | December 28, 2010; 10:45 AM ET
Categories:  Capital Weather Gang, Latest, Tracton, Weather Checker  


We want a scapegoat.
signed - Angry snow-deprived DC mob

Posted by: FIREDRAGON47 | December 28, 2010 11:01 AM | Report abuse

My "threshold of pain" is being able to get up my driveway. It's a little steep - such that a FWD car can only be backed up it in snowy conditions like the storm of two weeks ago. This forecast indicated to me that I needed to buy and apply snow-melt, but that I probably didn't need to spring for the 28" dual-stage snow-blower quite yet. The forecast told me what I needed to know, and I was able to keep my driveway clean w/o making an unnecessary purchase.

Posted by: mason08 | December 28, 2010 11:18 AM | Report abuse

I think the problem is that you use the probabilities as a way out of getting a forecast wrong. How often are you ever 100% certain of anything? There's always going to be a chance that things don't turn out the way you say they will. The problem with your argument is that it leaves you no room for accountability. If I told my boss that there was a 90% chance of a contract coming in, or even a 55% chance of it coming in, he's expecting the contract to come in. If it does not, my boss is going to be upset and he deserves to be upset with me. I cannot tell him "well, how much risk are you willing to take; there was a 45% chance of it not coming in". I think what people want to see is for you to take not only what a computer model is saying, but your gut instinct and intuition and say that something either is or is not happening. You cannot rely only on computer models, and even then you cannot continue to give "probability updates" that change drastically within several hours. You need to make a decision and stick with it. I love CWG and I cannot praise them enough for their forecasts. However, they should not have changed their forecast on Saturday based on a few model runs; I bet their gut instinct told them otherwise but they relied too much on the influence of the NAM and GFS model runs that showed more movement to the west, when everything prior (except for the Euro) had shown otherwise.

Posted by: SteveinAtown | December 28, 2010 11:36 AM | Report abuse

Great write up!

Now I know why you are renowned as "a principal agent and advocate in development, application, and use of operational ensemble prediction systems and strategies for dealing with forecast uncertainty." (found in the mini biographies for each CWG member)

You truly live up to those words.

Posted by: Yellowboy | December 28, 2010 11:47 AM | Report abuse

All snow lovers: We mustn't give up hope yet - - there are still two perfectly good months of winter left. Now, will we get a lot of snow during those months? Well folks, that's a question to which I do not have the answer. None of us do. Only Mother Nature does. So have some fun and don't waste your life worrying about getting snow or not getting snow - - and just for the record - - DC on average gets a big snowstorm about once every five or six years, so getting ANOTHER big one this year after last year's snow is highly doubtful.

Posted by: BobMiller2 | December 28, 2010 11:57 AM | Report abuse

I appreciate both the initial evaluation and the rebuttal to CWG forecast performance. This type of discussion is what makes CWG coverage world-class.

Tracton's rebuttal makes the best points. CWG did an excellent job in expressing how especially difficult and unsure they were in this particular forecast for an inherently difficult region. The chart that sticks out in my head was the one with almost equal probabilities of nothing to over a foot of snow, and that was the day before the storm. The day before! At first inspection this would seem like a laughable forecast whose shortcomings a professional would be ashamed to admit. But CWG embraced the incredibly complicated facets of their forecast and successfully communicated that even as advanced as the science of meteorology is, sometimes mother nature will do what it wants, when it wants, where it wants, and all we can do is sit back and observe in wonder.

It is not a cop-out to give such low probabilities for this particular forecast because last year during the major snows it was amazing how extremely confident and accurate CWG was in predicting large amounts of snow usually days in advance with only slight adjustments as the storms approached.

Posted by: caphillse | December 28, 2010 12:06 PM | Report abuse

This post from Steve Tracton is a bit much to take in view of the snarky and juvenile post he made several weeks ago satirizing a fellow and competing meteorologist from Accuweather who dared to make a snow coverage prediction at the end of November which has since largely verified.

In my view, this current post might have some weight if Tracton's prior post would be acknowledged as ill considered, and worse, wrong.

In the meantime, it is clear no forecast is guaranteed and no forecast service gets it right 100% of the time. I think Jamie had it just about right.

Posted by: ubimea | December 28, 2010 12:11 PM | Report abuse


If you are not totally sure the contract will be coming in, that should be useful information to your boss. Of course you can give him your best guess only (yes or no) and be a hero or scapegoat depending upon the outcome.

But if you are not totally sure, the boss (any boss) should be open to knowing that along with whatever degree of confidence is judged reasonable. If the calibration of uncertainty is reliable, the boss presumably would recognize that the contract comes in much more often when the confidence is high than low.

If the advice is just a simple yes/no, you are effectively making the decision for your boss on whatever action or position he takes with your forecast. Forecasters provide the best information, it's up to users to make the decisions.

The best example is the decision to go ahead with the D-Day invasion on June 6th. There were three independent forecasts (a 3-member ensemble), two saying weather would cooperate and one saying it would be bad. Eisenhower decided to go ahead on the basis of the greater chance of favorable conditions. Fortunately, it worked out; but, if the 3 forecasts were equally likely (as members of any ensemble should be), it could have gone very wrong and history would have turned out much differently. Other commanders might have insisted that the forecasters agree on a single yes/no, in effect having them make what should be his decision.

Posted by: SteveT-CapitalWeatherGang | December 28, 2010 12:11 PM | Report abuse

This particular non-event is a symptom of the overall problem. Time and time again, we get predictions worthy of the Book of Revelations, only to get a dusting - if that. People hear these forecasts and make preparations like they won't see daylight for days on end, only to get a "Hooey, we sure dodged a bullet on that one." We've had the equivalent of blizzard warnings that caused school systems to cancel the next day's classes, only to have 50 degree sunny weather instead. We've had promises of "maybe a dusting" that were spot-on, as long as you consider 10 inches of snow a "dusting." "Weather science" isn't science at all. On its best days, it's a series of quasi-educated guesses based on models that frequently conflict with each other. When in doubt, always invoke something like the Chesapeake Bay effect. Nobody knows what it means, of course, but it sure does sound scientific. There are two professions in this city that are guaranteed employment regardless of how often you get things wrong: Metro escalator repairmen and weather forecasters.

Posted by: hofbrauhausde | December 28, 2010 12:20 PM | Report abuse

They hardly predicted a blizzard, folks. It was a forecast for 3 to 6 inches, not really a lot of snow. It would be one thing if the forecast was for 10 to 20 and we got nothing. But in terms of busts, this really wasn't that bad of one. Why wasn't everyone this upset last year when the forecast heading into Dec. 19 was like 8 to 16, only to be bumped up to 16 to 24 or something after the snow started? That is just as much of a "bust" as this when you factor in the difference of initial predictions vs. reality.

Posted by: realclear | December 28, 2010 12:26 PM | Report abuse

First of all, and with apologies to Mr. Yesnowitz if I missed something in his resume, I wouldn't want a post-mortem analysis on a weather event published on the Washington Post by an attorney with a long standing interest in weather, any more than I would want a legal analysis by a professional meteorologist with a long standing interest in the law. I am sure that next time the WaPo will be able to find an external meteorologist with excellent communication skills, to do these analyses.

Second, I am convinced that "we don't know", or "anywhere from no snow to a foot", should be acceptable answers, when they reflect the best judgment of a trained professional meteorologist. They do convey information. If there is a little food for thought for the CWG it may be whether they succumbed a bit to the prevailing opinion in the comments that "the more snow the better" (in my opinion, only occasionally, and only a little, in an otherwise superb coverage).

A final thought. If you download the description of what is inside the Euro model (go to you'll have a feel for how complex it is, and how many assumptions and approximations go into it. I'm sure that there is similar documentation with the other models, which will show the same thing. So Monday morning quarterbacking in a newspaper blog is great fun, and should be encouraged IMHO. Second-guessing weather calls for something more serious, where maybe even life and property may be at risk, should be done with much greater care.

Posted by: MikeinDC2 | December 28, 2010 12:37 PM | Report abuse

Many of you have been talking about some huge blizzard coming in January... Yes, it would be nice for that to happen, but the likelihood of that storm developing, and dropping boatloads of snow on us is very slim right now - - maybe a 10% chance. Anyway, the moral of this is don't get too excited when you see one computer model that is predicting a blockbuster blizzard if it's still a month away. But who knows, maybe it will happen; maybe it won't. Only mother nature knows for sure...

Posted by: BobMiller2 | December 28, 2010 12:49 PM | Report abuse

There is definitely too much dependence on computerized models in contemporary weather forecasting. I'm like Sheepherder Bob; I'd like to see intuition, a study of radar patterns, and historical knowledge factored into forecasting, rather than today's increasingly heavy dependence on advanced techno-models ("we got this really neat tool...") that don't always factor in nature's quirks.

Looking at some of the great storms since the late 1800s, it's not uncommon for them to bypass Washington and slam areas from Philadelphia northward. The Blizzards of March 11-14, 1899, February 5-7, 1978, and the April Fool's blizzard in 1997 are good examples of how Washington missed out. I of course wasn't around in 1899, but the 1978 storm did bring an inch or two of snow to DC, about 9" to Baltimore and copious snows further NE (including 39" at Providence, RI). The April Fool's storm brought a few changeover flakes here whereas NYC, etc. were hammered.

What WAS unusual about the recent storm was that many parts of the south had snow. But as noted a couple of times before, the way the storm was tracking nearly due eastward across the northern Gulf Saturday evening toward NE Florida, and the direction precip was streaming on Mosaic (Ok, that was on a PC...), were good indications this storm would largely bypass Washington.

One more instance of how nature overtook the models: the fast-moving violent thunderstorm that slammed into Washington on July 25. The NWS was way late in posting alerts, when it was obvious by mid-late a.m. on computerized radar that a mean squall line was headed directly for our region. (CWG, at least, started to flag that storm far enough in advance for people to prepare for the onslaught.)

Models can be very useful (c.f. Katrina and the Gulf Coast) but they shouldn't be the be-all and end-all of weather forecasting, as increasingly seems to be the case.

Posted by: JerryFloyd1 | December 28, 2010 12:51 PM | Report abuse

It's been said that:

"The trouble with weather forecasting is that it's right too often for us to ignore it and wrong too often for us to rely on it".

Posted by: SteveT-CapitalWeatherGang | December 28, 2010 12:53 PM | Report abuse

Not sure about a boss who treats an event with a 55% probability as if it were certain to occur. Or who makes no distinction between an event that is 90% likely and 55% likely. The first one (55%) is highly uncertain and is almost equally likely to fail as it is to succeed. If you want to "make a call", use your gut, and prepare as if it were certainty, that is of course fine, but you can't predict such an event with much more success than whether a coin is going to give heads or tails on the next toss. Some people will be right predicting such a situation and appear skilled and others will be wrong. What is attributed to gut is most of the time pure chance.

I'd much rather know the odds of an event and make my own decision, than have somebody give blanket yes/no information.

Posted by: Finn1917 | December 28, 2010 12:55 PM | Report abuse

We want the head of Wes Junker.
He said he was bullish.

Bring us his head on a platter.....a platter engraved with a thousand little snowflakes.
Only then will we be satisfied.

Posted by: FIREDRAGON47 | December 28, 2010 1:00 PM | Report abuse

This was one of the most difficult forecasts ever. When you have multiple models, all flip-flopping every which way, there is no way to either rule out the storm or say with certainty how much snow there is going to be.

The fact of the matter is that we were dealing with a historic blizzard that set snowfall records from South Carolina to New England. There is no meteorologist in the world who could predict with certainty that the storm would swerve around DC and leave us in a tiny snowless hole.

This has always been an inexact science and even with huge advances in technology, there are still going to be errors. All you can do is give your best shot and hope for the best.

Posted by: frontieradjust | December 28, 2010 1:00 PM | Report abuse

The forecast was for three to six inches, right???

What if D.C. got 16--24 inches? Snow lovers would be very happy, but ThinkSpring and the rest of the anti-snow crowd would be screaming at CWG about another busted forecast!!!

What irks me is how those "lucky" New Englanders get a huge snowstorm time after time while we get stuck with nothing but cold desiccating northwesterly GALES which seem to go on and on for days without end. This storm was highly unusual in that we got nothing while coastal residents got buried in snow. Normally we and the folks on the coast get nothing but cold bone-chilling rain, but people in the Blue Ridge get huge snow accumulations.

Posted by: Bombo47jea | December 28, 2010 1:08 PM | Report abuse

I don't think confidence was ever high on this. I pretty much tracked it for the entire week, and at no point was there certainty anywhere on this thing, so no worries. I think Steve laid it out best: I want probabilities and to be informed; I can then make my own decisions based on that information. The world is not perfectly Newtonian, all variables are not fully understood, and weather is not a binary output. Forecasters work within the bounds of what tools and methods are accepted by the science, none of which are a completely accurate model of truth, and given the turbulent nature of this subject they are likely to remain approximations and statistical in nature for the foreseeable future.

Posted by: miglewis | December 28, 2010 1:10 PM | Report abuse


Do you remember my friend Jerry Grossman? He spoke very highly of you.

Jerry died peacefully in his sleep in August 2007 at age 64. He had heart problems.

Jerry was one of the smartest people that I have known and also one of the nicest. His knowledge of meteorology was exceptional and his forecasting ability sometimes bordered on the supernatural.

I wonder how Jerry would have done trying to forecast this last storm.

Posted by: frontieradjust | December 28, 2010 1:15 PM | Report abuse

Monday morning quarterbacking has taken on a new dimension, thanks to the Internet.

Kudos to CWG for their tireless effort to keep us informed of the latest forecasting developments in the lead-up to the near-miss blizzard.

I'm a snow lover and was rooting for the blizzard enveloping DC, but am feeling a bit less nostalgic for last February's storms after seeing today's news videos of New Yorkers climbing over snow banks. Did everyone see the video of the NYC snowplow crushing a parked SUV? [warning: profanity]

Posted by: Gidgmom | December 28, 2010 1:19 PM | Report abuse

I completely agree with miglewis. I think the only thing CWG - and forecasters in general can do - is give us the knowledge they have and interpret as best they can.

Posted by: kathyb39 | December 28, 2010 1:21 PM | Report abuse

One thing I suggest is to place the confidence indicator at the beginning of the post rather than the end. While ideally everyone reads the entire post, the reality is that most only read the first couple paragraphs (or even just the pictures accompanying the post).

Posted by: nlcaldwell | December 28, 2010 1:23 PM | Report abuse

I've noted in the past (but can't remember when) that weather prediction is most certainly not the only arena where uncertainty rules. A prime example is the medical profession, e.g., the relative likelihood this or that medication might be the one to address some illness, which itself may or may not be the correct diagnosis, with some level of confidence.

Just as we expect science and technology to improve the capability to get the diagnosis and treatment right with a higher level of confidence, it's not unreasonable to hope that future developments in meteorological science and computer models will do the same for weather forecasting. Until then, we do what we can to the maximum extent possible.

To read more about how uncertainty plays into medicine just as weather prediction, see my article "When Docs (Weather Forecasters) Are in Doubt" (PAGE 10) at:

Posted by: SteveT-CapitalWeatherGang | December 28, 2010 1:25 PM | Report abuse

@Gidgmom...OMG! "Tell your supervisor to come with a lawyer!". Thanks for making me feel a little better about not getting hammered.

Posted by: kathyb39 | December 28, 2010 1:29 PM | Report abuse


Yes, I certainly remember Gerry and totally agree with your description.

Who are you? You can contact me offline if you want by email (

Posted by: SteveT-CapitalWeatherGang | December 28, 2010 1:31 PM | Report abuse


I understand your issue about having a non-specialist review our forecast, but that's kind of the point. We want to feature the perspective of an educated but non-expert weather consumer, because that's mainly who our audience is. Jamie provides insight into how someone who's interested in weather - but not an insider - views our product.

Posted by: Jason-CapitalWeatherGang | December 28, 2010 1:42 PM | Report abuse

CWG made absolutely clear that the models were all over the place, and something as small as a 25 mile jog to the west or a quicker deepening of the low pressure system could have given us snow, and a lot of it.

I think Jason said there came a time when you just had to get the nose out of the models and look at the radar. He also underscored the difficulty of predicting snow in our region. He gets props on both.

I don't think that CWG has to defend or rebut. Saying we have a 50% probability of snow, but there are so many unknown factors there's a low confidence on it is perfectly correct. Sometimes the (statistical) error term wins the day.

Posted by: kperl | December 28, 2010 1:54 PM | Report abuse

@Steve, as weather forecasting becomes more technically advanced, I hope the humans who interpret the models continue to realize that computer-based technology cannot completely substitute for historic knowledge and keenly honed intuition, both of which are still permissible in today's society.

As the non-existent snowstorm failed to materialize, I was watching Fritz Lang's "Metropolis", a reminder of what happens when technology becomes societally oppressive and our master, rather than our servant.

Posted by: JerryFloyd1 | December 28, 2010 1:55 PM | Report abuse

Steve - you hint at the issue without acknowledging the fundamental problem with CWG's forecast. "It would be meaningful if and only if you had a large enough sample to say that under similar situations CWG did or did not get it right 90% of the time." Exactly. And if a forecast can never be verified, as with the probabilities in this instance, the forecast, itself, is meaningless. You've tried in the past to explain how you arrived at, say, a 55% chance, and the answer was there is no methodology. There's no basis to claim you are giving useful information because you throw out specific percentages, when you can't support the odds you give with any rigorous basis other than a feeling. Couple that with the frequent issue that the odds you do post are usually spread so thin (50/50, 30/30/30, or "CWG had indicated about equal odds from

I actually think CWG did a nice job, with Wes's valuable input, leading up to this event. In fact, the forecast really only became muddled when you tried to pin percentages to a "most likely" outcome. As noted earlier, closer to onset there was poor communication in terms of decreasing then immediately increasing "the odds" without an overall forecast of the storm progression beyond the latest model. Rather than reacting with a thin-skinned defensive response, look for areas of improvement. Statements like "This suggests to me he might not understand the basic concepts of probability" suggest to me you might not understand how to handle constructive criticism.

Posted by: manatt | December 28, 2010 1:59 PM | Report abuse

Over the past 14 months in Northern Virginia, I have found that I can predict the coming two days' weather at least as accurately as CWG by looking out my back door of a morning. And I can say the same for the forecasts of the Bob Ryans and Doug Hills of the region.

So many instruments and calculations, so much bull s--t.

Posted by: kinkysr | December 28, 2010 2:16 PM | Report abuse


A serious question: How would you recommend communicating uncertain information without probabilities? You've objected in the past to using probabilities, but now you're indicating you don't think we should provide our "best bet" (aka the "most likely" option amongst a set of other options) either. So, if we can't have it either of those ways, what constructive changes should we make? Thanks in advance for your input.

Posted by: Jason-CapitalWeatherGang | December 28, 2010 2:18 PM | Report abuse

'We've had the equivalent of blizzard warnings that caused school systems to cancel the next day's classes, only to have 50 degree sunny weather instead. We've had promises of "maybe a dusting" that we're spot-on, as long as you consider 10 inches of snow a "dusting."' - hofbrauhausde

Neither of those have happened this millennium. There's been a massive and highly unappreciated advancement in forecasting in the last couple of decades, and totally blown forecasts virtually never happen any more.

What amazes me is that forecasting has gone from seeing a storm that's over Illinois and realizing it's going to be here tomorrow, to figuring out from the models that a storm is going to form and even have a pretty good idea of where it's going to go. This recent storm, in particular, was subject to intense discussion on this blog days before it even existed. The only error in the forecast was some hours on the timing and a hundred miles or so on the track.

Posted by: kevinwparker | December 28, 2010 2:32 PM | Report abuse

You wake up in the morning. You see what the weather is, and you deal with it accordingly. The weather is something we cannot control, so there's no need to apologize because it's really nobody's fault.

Posted by: PracticalIndependent | December 28, 2010 2:34 PM | Report abuse


Your expectations for precision are audacious. Have you seen the map of snow cover? All but maybe a 50-mile wide and 100-mile long strip of the East Coast from South Carolina north was buried in snow. CWG repeatedly cautioned that the storm could skirt by us to the east and leave us with nothing to speak of. What is wrong with all of the precautions taken to no avail? What's the ultimate consequence? Does it even matter today, much less next week, next month, a year from now? Perhaps in some cases some people lost a little money or time with their relatives, but I'm pretty confident they'll recover in the long run. What's the obsession with micromanaging life events? Take it as it comes, and be happy for another day.

Posted by: markf40 | December 28, 2010 2:34 PM | Report abuse


No, you can't. I don't know why you would make such a ridiculous claim.

Posted by: mason08 | December 28, 2010 2:38 PM | Report abuse

There is a phrase coined in 1977 by a well respected former NWS meteorologist, Len Snellman: "METEOROLOGICAL CANCER"

He defined this as the increasing tendency of forecasters to abdicate practicing meteorological science and becoming more and more just a conduit of information generated by computers.

I see this thought breaking through some of the comments above. Indeed, I remember the heated discussions, debates and some tirades connected to this issue when it was first raised.

Suffice it to say that, given the improved education and training of forecasters, their direct or indirect connection to the basic science of meteorology, and their motivation to put all the pieces of the complex puzzle together (models and otherwise) in the most informative way possible, the problem is much less relevant today than 30+ years ago.

I'm not saying it doesn't exist, for even some of the best and most experienced forecasters might on occasion have a tendency to fall back into relying too heavily on whatever the computers crank out. But not generally so, and certainly not, except possibly for a rare lapse, the likes of Wes Junker.

If you want more on this subject, you'll find all you want by Googling "meteorological cancer"

Posted by: SteveT-CapitalWeatherGang | December 28, 2010 3:25 PM | Report abuse

@kinkysr -

mason08 is right - - didn't your mother ever teach you not to lie??? What in heaven's name do you mean by saying such a thing? Clearly, you are just trying to get attention and make stuff up to make CWG and Bob Ryan and Doug Hill look bad. I guarantee you that they all know meteorology much better than you.

Posted by: BobMiller2 | December 28, 2010 3:39 PM | Report abuse

My objection isn’t to “probabilities” – it’s to misapplying probabilities to instances in which you have no testable basis to the values assigned. The use of probabilities to communicate uncertain information - see NOAA's HPC output - makes perfect sense when there is a scientific method for making those projections and some means for verification to measure past performance and improvement.
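For what it's worth, a standard "means for verification" of the kind this comment asks for is the Brier score, which measures probability forecasts against 0/1 outcomes over many cases. A minimal sketch (numbers are made up purely for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and
    observed outcomes (1 = event occurred, 0 = it didn't). 0 is perfect."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A perfectly confident, perfectly correct forecaster scores 0.0;
# hedging at 50% every time scores 0.25 no matter what happens.
print(brier_score([1.0, 0.0, 1.0], [1, 0, 1]))  # 0.0
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25
```

Scores like this only become meaningful over a track record of many forecasts, which is exactly the sample-size point at issue in this thread.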

If you can articulate a method for arriving at your forecasted probabilities, then I have no complaints – otherwise, it’s a bit of a gimmick that can be misused to argue every forecast was right because anything was always possible and predicted. I think you already cover uncertainty at great length in the forecast – if anything, so much so that it is difficult to discern what your forecast is. If the answer, as with the last storm, is that the event is beyond your ability to predict, then you need say no more than what the range of possibilities is and whether one outcome appears more or less likely, which seemed reasonable. However, describing it as an equal chance of anything from nothing to over a foot doesn’t clarify the situation or make your forecast somehow more precise. It only reflects your inability to predict the event, not the likelihood of the event.
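The kind of verification I mean can be sketched in a few lines. The standard metric is the Brier score, which grades a set of past probability forecasts against what actually happened (the forecast and outcome numbers below are invented purely for illustration):

```python
# Brier score: mean squared difference between the forecast probability
# and the observed outcome (1 = event occurred, 0 = it did not).
# Lower is better; an uninformative constant 50% forecast scores 0.25.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical sample of past "chance of 1+ inch of snow" forecasts
probs = [0.9, 0.7, 0.4, 0.1, 0.8]
outcomes = [1, 1, 0, 0, 0]

print(round(brier_score(probs, outcomes), 3))  # 0.182
```

With a record like this kept over many storms, stated probabilities become testable claims rather than rhetoric.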

I think you should always seek to set out a "most likely outcome," which can be qualified as “low confidence,” but it shouldn’t be contradicted by the rest of the forecast (since the most likely outcome isn't, by definition, a 30% chance of 3-6"). The most likely outcome should be supported by an overall theory of how the system will progress, and should change as information comes in - but saying the "probability" has quantifiably increased from 40% to 50% is unsupportable by any underlying science - and looks silly when it falls back then jumps up over the course of 3 hours.

There is a balance between reacting to the latest information and keeping consistency to the forecast – sometimes you get it right and sometimes you get it wrong. Hopefully if you get it wrong, the response shouldn’t be “I also told you I might not be right.”

Posted by: manatt | December 28, 2010 3:58 PM | Report abuse

There are multiple operational and ensemble models that do not always provide the same answer. When this is the case, you're going to see a lot of waffling on probabilities. If they're in agreement, you're going to see something like last winter's storms, when the snow klaxons were sounded 120 hours out and only grew in volume.

On a storm like this one, "most likely outcome" predictions are meaningless, because the collection of inputs that would go into making such a determination is imprecise. Anyone who saw and properly interpreted the 25/30/25/20 graph they put up on Christmas day saw it for what it was: surrender. There were reasonably large chances that any outcome could happen, and there was model support for a theory leading to every outcome. Sometimes that's just the way it is.
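Where the waffling comes from is easy to sketch: probabilities like these are often derived as the fraction of ensemble members exceeding a threshold, so one or two members flipping between model cycles moves the stated number by 10-20% on its own (the member values below are made up for illustration):

```python
# Probability of exceeding a snowfall threshold, estimated as the
# fraction of ensemble members forecasting at least that much.
def exceedance_prob(members, threshold):
    hits = sum(1 for amount in members if amount >= threshold)
    return hits / len(members)

# Two hypothetical 10-member ensemble runs, snowfall in inches
run1 = [0.0, 0.5, 1.2, 2.0, 3.5, 4.0, 0.2, 1.8, 2.6, 0.1]
run2 = [0.0, 0.3, 0.8, 2.2, 3.0, 4.5, 0.9, 1.1, 2.4, 0.0]

print(exceedance_prob(run1, 1.0))  # 0.6
print(exceedance_prob(run2, 1.0))  # 0.5
```

A single member drifting across the 1-inch line between cycles is enough to swing the headline probability, which is exactly the run-to-run wobble being complained about.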

Of course, it could be this, too:

Posted by: mason08 | December 28, 2010 4:24 PM | Report abuse


You make a number of interesting/good points. I would remind, though, that weather forecasting is a combination of both art and science. There isn't necessarily a concrete method used to arrive at probabilities. There is, however, logical reasoning - based on model output, radar/satellite, past experience and past storms, etc. - and we do our best to explain that reasoning within the time/space constraints that exist in the busy time leading up to a potential storm.

One other note. You say, "There is a balance between reacting to the latest information and keeping consistency to the forecast." To be clear, when it comes to the forecasts we issue leading up to potential storms (and even during routine weather) we've always made a concerted effort to do just that - to maintain a consistent (but not necessarily unchanging) forecast in the face of new information. At the same time, we can't be so rigid as to ignore the possible implications of new information that may be trending in a different direction from old information. -Dan, CWG

Posted by: CapitalWeatherGang | December 28, 2010 4:30 PM | Report abuse


Good point. In almost all our pre-storm posts (when we do a map, timeline, etc.) we do try to put the confidence indicator and "How confident are you in your forecast?" FAQ before the jump. But even so, you are probably right that some people may make it no further down the page than the map. We'll think about ways to address this in the future, especially in cases when confidence is on the low side. -Dan, CWG

Posted by: CapitalWeatherGang | December 28, 2010 4:34 PM | Report abuse


What it boils down to is that when you predict a lot of snow (say 10 inches) and end up with 20 inches instead, the impact on the average person doesn't change all that much. Either way it's a lot of snow and causes a lot of problems. When you predict a few inches of snow and get next to nothing, the difference in impact is much bigger. A few inches is still enough to cause some problems and inconveniences, whereas a dusting has no real impact at all. It also works the other way. If you only predict a little snow (a few inches) and end up getting a lot (say 8 inches or more), then again you have a reality that is significantly more impactful than the forecast suggested. -Dan, CWG

Posted by: CapitalWeatherGang | December 28, 2010 4:41 PM | Report abuse


Going back to your first comment above, looking back we may have tried to pin down a "most likely" on this storm one model run too early. It's hard because once you get to within 12-24 hours of the start of a storm, you really want to be able to start giving folks an idea of what the most likely result/impact is going to be. In a lot of cases you can do this with some degree of confidence, even though details are still likely to change. In hindsight, that wasn't the case this time. The problem was that by Saturday afternoon we finally had the NAM trending toward the GFS. But we didn't give enough weight to the fact that even though the NAM was finally giving the area some snow, the snow was going to be light and primarily spread out during the daytime hours Sunday. It's hard to get accumulation here during the day from light snow, especially if it doesn't start early enough in the morning to give you a base of accumulation to build on once the sun is up. Of course, it's easy to say and think all this in hindsight. -Dan, CWG

Posted by: CapitalWeatherGang | December 28, 2010 4:48 PM | Report abuse

The problem with most contributors to this blog is that, since a majority of us are snow-lovers, even if we see a 10% chance of 12" and a 90% chance of a dusting, we're going to be disappointed when the 12" fails to materialize. That's when we start looking for scapegoats! Maybe CWG needs to show the confidence level in huge flashing icons above the forecast.
"Sometimes you know what you know, sometimes you know what you don't know and sometimes you just don't know."

Posted by: DOG3521 | December 28, 2010 4:52 PM | Report abuse

It's important to remember that estimates of the probability of snowfall amount (or any weather parameter) are themselves uncertain - sometimes as unpredictable as the basic weather scenario itself. Ensemble systems are guidance to be interpreted by forecasters in light of their known capabilities and limitations - just as the individual models must be appraised.

With that said, I agree that shifts in probabilities of 5, 10 and maybe even 20% in some situations cannot generally be justified given the current state of the art, except perhaps simply to convey a trend without taking the absolute values literally.

Thanks, all, for partaking in the discussion we aimed to promote! Keep it coming.

Posted by: SteveT-CapitalWeatherGang | December 28, 2010 5:49 PM | Report abuse

The comments to this entry are closed.


© 2012 The Washington Post Company