Past predictions vs recorded data

It would be nice to have access to actual recorded measurements (temperature, precipitation, etc.) as well as the last few predictions from each model. Some use cases:

  • determine the local bias of predictions coming from a specific model by comparing them to recorded data over the past 72 hours
  • evaluate the accuracy of different models by comparing their past predictions with the actual measured values
  • understand how a model's accuracy drops with lead time by comparing all of its past predictions covering the last 24h and noticing how the more recent runs become more precise
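To make the first use case concrete, here is a minimal Python sketch of the kind of comparison described: estimating a model's local temperature bias from paired forecast/observation values. All the numbers are invented, and no such export exists in Flowx today; this just shows the arithmetic involved.

```python
# Toy sketch: estimate a model's local temperature bias by comparing
# its past forecasts to recorded observations over the same hours.
# The hourly pairs below are made-up illustrative numbers.

def mean_bias(forecasts, observations):
    """Mean (forecast - observed) over paired hourly values."""
    assert len(forecasts) == len(observations)
    errors = [f - o for f, o in zip(forecasts, observations)]
    return sum(errors) / len(errors)

# Hypothetical hourly temperatures from the last 72h (shortened to 6 samples)
forecast_temps = [12.0, 13.5, 15.0, 14.2, 11.8, 10.5]
observed_temps = [11.2, 12.8, 14.1, 13.6, 11.0, 10.1]

bias = mean_bias(forecast_temps, observed_temps)
print(f"model runs ~{bias:+.1f} degC relative to observations")
```

A positive bias here would mean the model tends to run warm at this location, which a user could mentally subtract from future forecasts.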

Hi @cgratie,

This has been suggested before and I’ve thought about it quite a bit. Essentially it’s a major undertaking: a totally different use case with significant added complexity.

A majority of people use Flowx for forecasting - “What is the weather on the weekend?” Your suggestions are more analysis - “How accurate is the forecast?” This is an entirely different research field, and the results from this analysis are not going to be black and white. They are going to be fuzzy. Which leads to the next issue: people would have to interpret the accuracy results and apply them to their forecasts.

There is a bit of technical work too. We have to collect/save measured data, save past forecast data and then add features to Flowx to compare and analyse this data. This is where I see this feature moving away from the strengths of Flowx, i.e., instead of doing one job very well, Flowx is trying to do two jobs well. And there is still much to do in Flowx for just plain forecasting.

If you’d like an example of how hard, fuzzy and confusing this is going to be, check out this presentation comparing GFS vs GFS FV3:

Have a read of the discussion points on the storm, hurricane and flooding, and also the conclusion. Then imagine doing all this work in Flowx and getting similar results to interpret and similar conclusions. That’s a boatload of work for a load of confusion and mild concussion.

Really, this accuracy analysis should be (and is usually) done offline using scripts (e.g., Python) since it gives more flexibility.
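For example, a minimal offline verification script of the kind described above might compute mean absolute error grouped by lead time, making the loss of skill farther out visible. The record layout and all numbers below are invented for illustration.

```python
# Hedged sketch of offline forecast verification: mean absolute error
# of precipitation forecasts, grouped by lead time, so the drop in
# skill at longer leads is visible. All data here is invented.

from collections import defaultdict

# (lead_hours, forecast_mm, observed_mm) - made-up verification pairs
records = [
    (24, 1.0, 1.2), (24, 0.0, 0.1), (24, 3.5, 3.0),
    (48, 1.0, 1.8), (48, 0.0, 0.6), (48, 3.5, 2.2),
    (72, 1.0, 2.9), (72, 0.0, 1.4), (72, 3.5, 0.9),
]

def mae_by_lead(records):
    """Mean absolute error per lead time, sorted by lead."""
    errs = defaultdict(list)
    for lead, fc, obs in records:
        errs[lead].append(abs(fc - obs))
    return {lead: sum(v) / len(v) for lead, v in sorted(errs.items())}

for lead, mae in mae_by_lead(records).items():
    print(f"{lead:3d}h lead: MAE = {mae:.2f} mm")
```

With real archives this is exactly the sort of flexible slicing (by lead time, by variable, by season) that a script gives you and a mobile UI struggles to.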

Ultimately you want more accurate predictions and there are other features that are more aligned with Flowx that can provide this. For example:

  • more models
  • ensemble models (check out hurricane tracks for this)
  • and custom MOS.
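For readers unfamiliar with MOS (Model Output Statistics), the core idea can be shown with a toy single-predictor sketch: fit a linear correction mapping raw model output to local observations. Real MOS uses many predictors and long training archives; everything below is invented.

```python
# Toy illustration of the MOS idea: least-squares fit of a linear
# correction y = a*x + b from raw model output x to observations y.
# All data is invented; real MOS is far richer than this.

def fit_linear(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

raw = [10.0, 12.0, 14.0, 16.0]   # raw model temperatures (invented)
obs = [9.0, 10.6, 12.2, 13.8]    # matching station observations (invented)

a, b = fit_linear(raw, obs)
corrected = a * 15.0 + b         # MOS-corrected forecast for a raw 15.0
print(f"y = {a:.2f}x + {b:.2f}; corrected 15.0 -> {corrected:.1f}")
```

The appeal is that the correction is learned once per location and then applied automatically, so the user still just reads a forecast.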

Cheers, Duane.


Thanks for your answer, Duane. You are totally right about the complexity of adding such accuracy analysis to Flowx. My idea was a lot lighter, though: it was mainly about making the data available and letting the user browse a side-by-side view of predicted vs actual weather data. In my mind it would be somewhat similar to how I’m trying to use radar reflectivity on top of precipitation predictions to get a feel for how accurate the models are in my location.

Of course, I am aware that this use case of mine might be quite specific. It’s just that, as a newcomer to Flowx, the main strength I see in your app is how effectively and intuitively it displays a lot of data in one place, which just makes me wish for even more.



Yeah, this is much simpler and this will come with time. I plan to quickly prototype two maps side-by-side in landscape mode. This will allow you to show anything you want on each. I’ll also consider keeping one, maybe two, prior forecasts for comparison - disk space is the main limitation here.