Compare current model run to previous

I live in the Pacific NW, where model runs vary drastically from run to run. One run shows precip on all the models; the next run, no precip; the next, back to rain; the next, no rain…

Is there a way, or could you add one, to save the data from the last X model runs and lay them out on a screen similar to the ‘compare’ view you currently have? If you want to involve maths… it would be great to show a ‘percent of reliability’ by comparing the last X runs for a location. If 7 out of 8 runs are not at all similar, give it a low percentage, vs. places with more stable forecasting, where you could easily see a higher reliability percentage.

This would help a lot. Especially when exploring or visiting areas you don’t know well enough to realize the forecasting there is unpredictable.

Even if you leave out the extra work of computational comparisons and percentages… just being able to select a location and have the last 5… 10… 40 model runs saved, so you can visually skim the graphs and see, “this location sucks for predictions… bring extra layers and a raincoat and expect the worst”, would be valuable. A lot of apps have historical data… but I don’t know of any that has all the models Flowx has, and you could easily present this by saving such a tiny bit of data and showing it with the graphing code that is already well written. Seems like it would take very little work and be quite rewarding.

Regards,

-c

I have considered this before but it is damn difficult, probably 9/10 difficult.

First, you’d have to store all that data on the phone. It would be massive, so I’d have to add compression and on-the-fly decompression, which would slow down loading.
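As a rough sketch of what compress-on-store / decompress-on-load could look like, here is a zlib round trip. zlib is purely an example choice here; the actual tile format and compression scheme are not specified in the thread, and the stand-in data is invented:

```python
import zlib

# Stand-in bytes for one model-run dataset (repetitive, so it compresses well).
raw = b"precip=0.0;" * 1000

# Compress once when saving to the phone's storage...
stored = zlib.compress(raw, level=6)

# ...and decompress on the fly when the data is loaded for rendering.
loaded = zlib.decompress(stored)

print(len(raw), "->", len(stored), "bytes")
assert loaded == raw  # round trip is lossless
```

The decompress step is the "slow down loading" cost mentioned above: it runs every time a saved run is opened, not just once.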

Given you have multiple places, multiple models, multiple zoom levels, and multiple saved model runs, the data space required grows as the product of all four dimensions. This would be crazy. So you’d need a toggle to limit it to certain places and models, which means more code and more surface area for bugs.
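To make that multiplication concrete, here is a back-of-the-envelope sketch. Every count and the per-dataset size below are invented illustration numbers, not actual Flowx figures:

```python
# Hypothetical counts for the four dimensions mentioned above.
places = 5          # saved locations
models = 8          # weather models
zoom_levels = 4     # map tile zoom levels
saved_runs = 10     # past model runs kept per model

mb_per_dataset = 2  # assumed size of one run, one zoom level, one place

datasets = places * models * zoom_levels * saved_runs
total_mb = datasets * mb_per_dataset
print(f"{datasets} datasets, roughly {total_mb} MB")  # 1600 datasets, ~3.2 GB
```

Even with these modest made-up numbers the total lands in the gigabytes, which is why a toggle to restrict places and models would be needed.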

Second problem: if you don’t download a certain model run, you’d have to fetch it from the server; therefore, I’d have to increase my server storage by the number of past model runs.

“Percentage reliability” is not a simple calculation, because older model runs are less reliable, so how do you weight them? This is a whole area of meteorology; check out Ch 20.7:
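For illustration only, here is one naive weighting scheme, far simpler than what the meteorology literature does and not anything Flowx implements: score each past run’s agreement with the current run, weighting newer runs more heavily. The data, the weights, and the agreement tolerance are all made up:

```python
def reliability(runs, tolerance=0.5):
    """runs: forecast values for one hour at one place, oldest first;
    the last item is the current run. Returns a 0-100 agreement score."""
    current = runs[-1]
    past = runs[:-1]
    score = 0.0
    total_weight = 0.0
    for i, value in enumerate(past):
        weight = i + 1  # newer runs get larger weights
        agrees = abs(value - current) <= tolerance
        score += weight * (1.0 if agrees else 0.0)
        total_weight += weight
    return 100.0 * score / total_weight

# Precip (mm/h) forecast for the same hour across five successive runs:
print(reliability([0.0, 2.1, 0.0, 1.8, 2.0]))  # -> 60.0
```

Even this toy version shows the weighting question: changing the weights from linear to, say, exponential decay changes the score, and there is no single obviously correct choice.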

Consider the current model run shows rain at 3pm, but the previous two model runs predicted rain at 5pm and 4pm. Neither previous run shows rain at 3pm, so you’d say they were wrong, when in fact they both predicted the same rain, just delayed, with one more correct than the other.

The problem arises in how to associate the correct “rain cloud” in one model run with others.
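As a toy sketch of that association problem, one crude approach is to search a time window around the current forecast for the nearest predicted rain, instead of comparing the same timestamp. The data and the window size here are made up, and real cloud association is much harder than this:

```python
def nearest_rain(current_hour, old_run, window=3):
    """old_run: dict of hour -> precip (mm/h). Returns (offset_hours, precip)
    of the closest predicted rain within the window, or None if there is none."""
    candidates = [
        (abs(h - current_hour), p)
        for h, p in old_run.items()
        if p > 0 and abs(h - current_hour) <= window
    ]
    return min(candidates) if candidates else None

# Current run: rain at 15h. Two older runs put the same rain at 17h and 16h.
run_a = {15: 0.0, 16: 0.0, 17: 2.0}
run_b = {15: 0.0, 16: 2.0, 17: 0.0}
print(nearest_rain(15, run_a))  # off by 2 hours
print(nearest_rain(15, run_b))  # off by 1 hour, so "more correct"
```

A naive same-timestamp comparison would score both old runs as simply wrong; the offset at least captures that they predicted the same event, delayed.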

And since I don’t know how valuable this will be or how much it will be used, it would suck to put in so much work only to have it not pay off.

I spent one month adding code for eclipse data (twice) only to have the feature not be used much. It was not worth it. I would hope I’ve learnt my lesson.

That said, I do keep the current and previous model runs on the server. In the future, you might be able to compare the two by switching between the models.


What about taking out the whole computational part of it?

Just give a one-time toggle to save the last X model runs for a selected (or the currently selected) location? It would still take a small hit on the server, but it would be something run once. The only purpose would be to investigate an area a little better, not a toggled-on-forever option that people would forget about and just waste data with. Check it once; it saves a half dozen to a dozen runs one single time. I think that would accomplish most of it.

Working through your citation. Thanks.

The server will have the last two runs on it so it’s possible to toggle between the two runs.

Downloading and storing the last X runs on the phone will be an issue, because if you miss a run or only download some tiles, for whatever reason, then the rendered map will have missing tiles, which will degrade the experience.

I can hear you say, “That’s fine, I’m happy with that. It’s better than nothing,” but other users won’t see it that way. I can hear them now: “It’s terrible! What’s the point of this comparison if there’s missing data!!”

I have been burnt too many times: users ask for an inch, I give an inch, and others complain it’s not a mile.

Also, there is still no evidence this will be used by many people. It’s a lot of work for something that might not be used much, compared to other features I could work on.
