Comment by binsquare (1 day ago)

I find it interesting that they quantify the improvement in speed and in the number of forecasted scenarios, but give no details on how that translates into improved forecast accuracy, per:

> WeatherNext 2 can generate forecasts 8x faster and with resolution up to 1-hour. This breakthrough is enabled by a new model that can provide hundreds of possible scenarios.

As an end user, all I care about is that there's one accurate forecasted scenario.

This is really important: you're not the end user of this product. These types of models are not built for laypeople to access. You're an end user of a product that may consume and process this data, but the CRPS scorecard, for example, should mean nothing to you. This work specifically addresses an under-dispersion problem in traditional ensemble models, caused by a limited number of members (~50) and a limited set of perturbed initial conditions (perturbations that do a poor job of capturing true uncertainty).

Again, you, as an end user, don't need to know any of that. The CRPS scorecard is a very specific measure of error. I don't expect them to reveal the technical details of the model, but an industry expert instantly knows what WeatherBench[1] is, the code it runs, the data it uses, and how that CRPS scorecard was generated.
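
For what it's worth, the ensemble CRPS has a standard estimator: the mean absolute error of the members against the observation, minus half the mean absolute difference between pairs of members. A minimal sketch of that formula (my own illustration, not WeatherBench's evaluation code):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Ensemble CRPS: E|X - y| - 0.5 * E|X - X'| (lower is better)."""
    members = np.asarray(members, dtype=float)
    accuracy = np.mean(np.abs(members - obs))
    spread = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return accuracy - spread

# An under-dispersed ensemble gets punished when the observation falls
# outside its spread; a well-dispersed one scores better despite the same bias.
rng = np.random.default_rng(0)
obs = 3.0
print(crps_ensemble(rng.normal(0.0, 0.2, 50), obs))  # tight ensemble: ~2.9
print(crps_ensemble(rng.normal(0.0, 2.0, 50), obs))  # dispersed ensemble: ~2.0
```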

By having better dispersed ensemble forecasts, we can more quickly address observation gaps that may be needed to better solidify certain patterns or outcomes, which will lead to more accurate deterministic forecasts (aka the ones you get on your phone). These are a piece of the puzzle, though, and not one that you will ever actually encounter as a layperson.

[1]: https://sites.research.google/gr/weatherbench/

  • Sorry to hijack you: I have some questions regarding current weather models:

    I am personally not interested in predicting the weather as end users expect it; rather, I'm interested in representative evolutions of wind patterns. That is: specify a location (say somewhere in the North Sea, or on mainland Western Europe) and a date (say Nov 12) without specifying a year, and get back the wind at different heights at that location for, say, half an hour. Essentially, running with different seeds, I want representative evolutions of the wind vector field, without specifying starting conditions other than location and date, i.e. NO prior weather.

    Are there any ML models capable of delivering realistic and representative wind-gust simulations?

    (The context is structural stability analysis of hypothetical megastructures)

    • I mean - you don't need any ML for that. Just grab random samples from a ~30-day window centered on your day of interest, over your region of interest, from a reanalysis product like ERA5 (a minimal sketch follows below). If the duration of ERA5 isn't sufficient (e.g. you wouldn't expect, on average, to see events with a >100-year return period given the limited temporal extent of the dataset), you could go one step further and pull from an equilibrium climate model simulation - some of these are published as part of the CMIP inter-comparison, or you could go to purpose-built ensembles like the CESM LENS [1]. You could also use a generative climate downscaling model like NVIDIA's Climate-in-a-bottle, but that's almost certainly overkill for your application.

      [1]: https://www.cesm.ucar.edu/community-projects/lens
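
      A rough sketch of the ERA5 approach, assuming a configured Copernicus CDS API key plus the `cdsapi` and `xarray` packages. Dataset and variable names follow the CDS catalogue; the bounding box, levels, and years are placeholders, and a real request this size would typically be split per year:

      ```python
      import cdsapi
      import numpy as np
      import xarray as xr

      # Pull ERA5 u/v wind on a few pressure levels over a small North Sea box,
      # for Oct-Nov across many years (covers a ~30-day window around Nov 12).
      c = cdsapi.Client()
      c.retrieve(
          "reanalysis-era5-pressure-levels",
          {
              "product_type": "reanalysis",
              "variable": ["u_component_of_wind", "v_component_of_wind"],
              "pressure_level": ["1000", "925", "850"],
              "year": [str(y) for y in range(1990, 2024)],
              "month": ["10", "11"],
              "day": [f"{d:02d}" for d in range(1, 32)],
              "time": [f"{h:02d}:00" for h in range(24)],
              "area": [56, 2, 54, 4],  # N, W, S, E
              "format": "netcdf",
          },
          "era5_winds.nc",
      )

      # Pool all years, keep times within +/-15 days of Nov 12, sample at random.
      ds = xr.open_dataset("era5_winds.nc")
      doy = ds["time"].dt.dayofyear
      keep = np.flatnonzero(np.abs(doy.values - 316) <= 15)  # day 316 ~ Nov 12
      window = ds.isel(time=keep)
      rng = np.random.default_rng(0)
      picks = rng.choice(window.sizes["time"], size=10, replace=False)
      samples = window.isel(time=picks)  # 10 representative wind states
      ```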

  • > By having better dispersed ensemble forecasts, we can more quickly address observation gaps that may be needed to better solidify certain patterns or outcomes, which will lead to more accurate deterministic forecasts.

    Sorry - not sure this is a reasonable take-away. The models here are all still initialized from analyses produced by ECMWF; Google is not running an in-house data assimilation product for this. So there's no feedback mechanism between ensemble spread/uncertainty and the observations themselves in this stack. The output of this system could be interrogated with something like ensemble sensitivity analysis (sketched below), but there's nothing novel about that, and we can already do it with existing ensemble forecast systems.
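
    For reference, ensemble sensitivity analysis is essentially a linear regression of a scalar forecast metric onto the initial-condition ensemble. A toy sketch with made-up fields, not tied to any real forecast system:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_members, n_grid = 50, 1000

    # Fake initial-condition perturbations and a scalar forecast metric J
    # that (by construction) depends mostly on grid point 42.
    x0 = rng.normal(size=(n_members, n_grid))
    J = 2.0 * x0[:, 42] + rng.normal(scale=0.1, size=n_members)

    # ESA: regression slope of J on each initial grid point, cov(J, x) / var(x).
    x_anom = x0 - x0.mean(axis=0)
    J_anom = J - J.mean()
    sensitivity = (J_anom @ x_anom) / (n_members - 1) / x_anom.var(axis=0, ddof=1)
    print(int(np.argmax(np.abs(sensitivity))), sensitivity[42])  # -> 42, ~2.0
    ```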

For lay users, they could have explained that better. Then again, I don't think completely uninformed users are the audience this page has in mind.

Developing an ensemble of possible scenarios has been the central insight of weather forecasting since the 1960s, when Edward Lorenz discovered that tiny differences in initial conditions can grow exponentially (the "butterfly effect"). Since ensembles became computationally feasible in the 1990s, all competitive forecasts have been based on them.

When you hear "a 70% chance of rain," it more or less means "there was rain in 70 of the 100 scenarios we ran."[0] There is no "single accurate forecast scenario."

[0] Acknowledging this dramatically oversimplifies the models and the location where the rain could occur.

  • My understanding is that it's an expected value based on coverage in each of the ensemble scenarios, not quite as simple as "how many scenarios had rain in this forecast cell".

    At least for the US NWS: if 30 of 100 scenarios result in 50% shower coverage and 70 of 100 result in 0%, this is reported as a 15% chance of rain. That's exactly the same as 15 scenarios with 100% coverage and 85 with 0%, or 100 with 15% coverage (the arithmetic is sketched below).

    Understanding this, and digging further into the forecast, gives a better sense of whether you're likely to encounter widespread rainfall or spotty rainfall in your local area.
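
    The arithmetic, spelled out with the toy numbers from above (this is not the NWS's actual pipeline):

    ```python
    # PoP as expected areal coverage, averaged over ensemble members.
    mixed      = [0.50] * 30 + [0.0] * 70   # 30 members: 50% coverage; 70: dry
    widespread = [1.00] * 15 + [0.0] * 85   # 15 members: rain everywhere
    spotty     = [0.15] * 100               # every member: rain over 15% of area

    for name, cov in [("mixed", mixed), ("widespread", widespread), ("spotty", spotty)]:
        print(name, sum(cov) / len(cov))    # all three -> 0.15, i.e. "15% chance of rain"
    ```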

Indeed. The most important benchmark is accuracy and how well it stacks up against existing physics-based models like GFS or ECMWF.

Sure, those big physics-based models are very computationally intensive (national weather bureaus run them on sizeable HPC clusters), but you only need to run them every few hours in a central location and then distribute the outputs online. It's not as if every forecaster in a country needs to run a model; they just need online access to the outputs. Even if they could run the models themselves, they would still need the mountains of raw observation data that feed the models (weather stations, satellite imagery, radars, wind profilers...). And those data are usually distributed by... the national weather bureau of that country. So the weather bureau might as well do the number crunching too and distribute the results.

> I find it interesting that they quantify the improvement on speed and number of forecast-ed scenarios but lack details on how it results in improved accuracy of the forecast per:

Definitely. Training on historical data produces compelling forecasts, but it comes off as a magic box: where did the physics that used to demand a high-performance cluster go?

As others have explained, ensembles are useful.

As a layperson, what _is_ useful is to look at the difference between models. My long-range favourite is to compare ECMWF and GFS27; if the deviation between them is high (the Windy app shows this), you can bet that at least one of them is wrong.

They integrated MetNet-3 into Google products, and my personal perception was that accuracy decreased.