The atmosphere is always moving. It’s infinitely complex. It’s a fascinating fluid that can provide meteorologists with amazement, humility, and a whole lot of fun! To figure out how the atmosphere will evolve over a given time we use computer models. In order to do this we need to take a snapshot of the atmosphere to give that model a set of initial conditions.
Since it is impossible to sample every point (both in the horizontal and the vertical) in the troposphere, our initial conditions will never be perfect. Weather balloons, surface observations, and increasingly remote sensing do provide us with a good first guess! Errors in that first guess amplify with time due to the chaotic nature of the partial differential equations that the model solves. Other issues, such as the imprecise parameterization of sub-grid-scale processes, also degrade forecast accuracy.
Ensemble modeling is a powerful tool that tries to alleviate the issues associated with initializing a model. We do this by “tweaking” the initial conditions for each ensemble member and then running the model once per member, each with its own (reasonable) set of initial conditions. When all the ensemble members are reasonably close with a 48-hour or 72-hour forecast, for instance, predictability is high and there’s low sensitivity to small changes in the initial conditions. When ensemble members are all over the place, however, we can infer there is a high degree of sensitivity to the initial conditions and the forecast uncertainty is high.
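As a toy illustration of that spread-equals-uncertainty idea (not any operational code — the member values here are made up), you can treat each member’s forecast as a number and use the standard deviation across members as a crude confidence gauge:

```python
import statistics

# Hypothetical 48-hour snowfall forecasts (inches) from five ensemble members.
tight_cluster = [6.0, 6.5, 5.8, 6.2, 6.1]   # members agree -> high confidence
big_spread    = [1.0, 12.0, 4.5, 9.0, 0.5]  # members diverge -> low confidence

def ensemble_spread(members):
    """Standard deviation across members: a simple measure of forecast uncertainty."""
    return statistics.stdev(members)

print(ensemble_spread(tight_cluster))  # small spread
print(ensemble_spread(big_spread))     # much larger spread
```

A small spread says the solution is insensitive to the initial-condition tweaks; a large spread says the opposite, exactly the situation described above.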
Every storm is different. Some storms are well modeled and forecast confidence is high. Other times there is a huge spread in our ensembles and forecast confidence is low. When the ensemble members agree with their respective operational run, things are in great shape!
Enter the Monday night/Tuesday storm. If you were to look at the 15z Short Range Ensemble Forecast (SREF) members you’d be ready for a big snowstorm! Here’s a plot of the probability of >4″ of snow (the plot is created by counting how many members produce >4″ of snow at a given grid point and converting that count to a percentage).
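The probability plot described above boils down to a threshold-exceedance count at each grid point. A minimal sketch, assuming made-up member snowfall grids (real SREF output would be dozens of members on a full model grid):

```python
import numpy as np

# Hypothetical snowfall forecasts (inches) from four ensemble members
# on a tiny 2x2 grid: shape is (members, y, x).
members = np.array([
    [[6.0, 2.0], [5.5, 1.0]],
    [[7.2, 5.0], [4.8, 0.5]],
    [[3.0, 4.5], [6.1, 2.2]],
    [[8.0, 1.5], [0.5, 0.0]],
])

# Probability of >4" at each grid point: the fraction of members
# exceeding the threshold, expressed as a percentage.
prob_gt_4in = (members > 4.0).mean(axis=0) * 100.0
print(prob_gt_4in)  # e.g. 75% where 3 of the 4 members exceed 4"
```

Note the probability says nothing about *how much* more than 4″ falls — a 70% contour can coexist with operational runs that print only an inch or two, which is exactly the tension in this forecast.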
So… yeah… about that 70 percent chance! While the vast majority of SREF members are quite amped up and quite wet, most of our other models are quite blah (except, of course, the NAM)! The 12z European, UKMet, GGEM, and GFS produce very little snow over Connecticut. Maybe an inch or two!
So what’s the deal? It’s important to understand the differences between the models and the forcing behind what’s producing the precipitation. Let’s start with the synoptics.
Since the NAM is the most robust, let’s look at why. First, here’s pressure on the 2 PVU surface (the dynamic tropopause). The orange lines are isotachs on the DT. The image below is 500mb absolute vorticity, and the orange shading is upward 700-500mb vertical motion.
So what’s going on? We have a strong PV anomaly that’s heading toward the coast. The PV anomaly is strengthening as it heads east, and the PV forcing is strongest off the east coast of New England. In addition, the low static stability, thanks to the moist/mild air over the ocean, favors upward motion. Using a conceptual jet streak model we can also see forcing for lift. I marked the two jet streaks that have developed downstream of the PV anomaly: one over Nova Scotia and another south of Nantucket. The left exit region of the latter and the right entrance region of the former are right over eastern Massachusetts and the adjacent Atlantic Ocean.
Here’s the 18z NAM QPF and 1000-500mb Thickness / SLP valid 12z Tuesday. Under that area of strong synoptic forcing an inverted trough/NORLUN signal has developed with a localized maximum in QPF.
A combination of strong synoptic-scale forcing and mesoscale forcing along that inverted trough results in heavy QPF. So why don’t the other models have this? While all the models are reasonably close with the synoptics, they have dramatically different QPF forecasts.
Some of our models are referred to as global models and others as mesoscale models. The mesoscale models tend to have finer resolution than their global counterparts, and they’re also non-hydrostatic (i.e., they don’t assume a balance between the vertical pressure gradient force and gravity). The NAM and its ensembles (the SREF) also have different terrain and convective parameterization schemes designed to resolve mesoscale details.
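To make that parenthetical concrete: the hydrostatic approximation replaces the vertical momentum equation with a simple balance between the vertical pressure gradient force and gravity,

```latex
\frac{\partial p}{\partial z} = -\rho g,
```

whereas a non-hydrostatic model retains the vertical acceleration of the air itself,

```latex
\frac{dw}{dt} = -\frac{1}{\rho}\frac{\partial p}{\partial z} - g,
```

which matters when vertical motions are strong on small horizontal scales — precisely the mesoscale banding these inverted-trough setups produce.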
So what are we to do? This event is driven by both synoptic and mesoscale phenomena. While this is true of all events, it’s particularly true of these kinds of setups where mesoscale boundaries are absolutely vital. We’ve seen storms like this where only miles separate 3″ of snow from 15″!
In general, a few things to note about these inverted trough/NORLUN deals:
- I’d like to see more QPF from the global models. Stronger synoptic forcing and better inflow off the ocean would be a start.
- These things always seem to trend north and east with time! When the models have something over central New Jersey 72 hours out, more often than not it ends up over the north shore of Boston by verification time!
- Simple geography tells you that eastern New England is more vulnerable to these NORLUNs since it “sticks” out to the east. Pressures lower and the trough forms as the PV anomaly is able to interact with the low static stability over the ocean.
- Mesoscale models can hint at the potential for super-big totals but are notoriously poor at figuring out exactly where that will be. Global models tend to catch on and show something.
- Where the NORLUN does set up, it can dump! Normally the airmass is quite cold, and many times the strongest omega stretches from near the top of the boundary layer up through 700-ish mb, intersecting the dendritic growth zone.
This is something to watch over the next few model runs. While the NAM and SREFs are honking a big event, I want to see the Euro jump on board. Until it does there’s no reason to play up the snow threat, though these things are always a pain in the butt to forecast.