Discussion in 'Alpine & Southern' started by Jellybeans, Aug 4, 2017.
Latest table from the last system completed.
Looks like the usual suspect is smoking weed again.
That would be correct.
I stopped doing the other one, because of my sickness last week.
This will be the final table for the season.
Love your work.
Fixed the post
This may be the wrong spot to ask, but do we have any idea or record of the amount of snow that has fallen on the hills this season? Not what we record at Spencers Creek, but what has actually fallen each and every week, added up for a cumulative end-of-season amount...
So we can say "this year we had xxx amount of snowfall" etc...
Thredbo has a published cumulative season total (for what it's worth) of 473cm... from memory that mustn't be far off the total for the 2000 season.
I vaguely recall it around 480-490cms.
Hotham publishes one too. 466cm.
Here is the final table completed (sorry it took so long).
As stated, finished for this season.
Data is available for anybody to use.
I am going to make some sort of table, with the average accuracy of the forecasters at some point. But feel free to do what you want with it.
Nearly finished the rigorous* analysis of the data.
* not rigorous at all
Looking forward to this. Is it my imagination, or has WZ been quite on the money most of the time?
Before I start this, I should mention the data errors.
For starters, some of my estimations may be incorrect. I also haven't got the predictions from the exact same time for each system, due to the fluidity of weather forecasts.
Errors on part of the forecasters featured include 'worded predictions', which I have had to estimate or not include. Grasshopper what the hell does 'the snow keeps coming down' mean??????
Okay so this is the table based on 4 and 7 day predictions. I based my point system off how far the prediction was from the actual storm total at Perisher. For predictions that were a range (eg 10-20cm), I based my calculation on the median figure. I then created a points system, based on the bracketed figures.
If you want to declare winners, for the 7 day category the winners are Jane and EC. And for the 4 day category, the winners are the BOM and EC.
The point system isn't great, as I am comparing dusting systems and massive dumps. But this is the best I have. Cheers
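The scoring described above can be sketched in a few lines. To be clear, this is a hypothetical reconstruction: the bracket boundaries and point values below are illustrative assumptions, since the thread doesn't spell out the actual figures used.

```python
# Hypothetical sketch of the scoring method described above.
# BRACKETS values are assumptions, not the thread's real figures.

def predicted_value(prediction):
    """Reduce a prediction to one number: the median of a range
    (e.g. (10, 20) -> 15.0), or the single figure itself."""
    if isinstance(prediction, tuple):
        low, high = prediction
        return (low + high) / 2
    return float(prediction)

# (max error in cm, points awarded) -- assumed values
BRACKETS = [(10, 5), (20, 4), (30, 3), (50, 2), (float("inf"), 1)]

def score(prediction, actual_total):
    """Points for one system: the smaller the error versus the
    measured storm total at Perisher, the higher the score."""
    error = abs(predicted_value(prediction) - actual_total)
    for max_error, points in BRACKETS:
        if error <= max_error:
            return points

# A forecaster's season total is just the sum over systems,
# e.g. three systems: a 10-20cm range vs 18cm actual, 40cm vs
# 12cm actual, and 5cm vs 6cm actual.
season = sum(score(p, a) for p, a in [((10, 20), 18), (40, 12), (5, 6)])
```

Because every prediction is collapsed to one number before scoring, a wide range and a precise figure are treated identically, which is the limitation raised later in the thread.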
Edit: Also thanks @The Plowking for the support throughout this process.
ok so we watch for Jane or yr.no to get excited and then wait for the BoM 4 day rule to apply
love your work!
Yeah....half arsed support at that...
Next year I'm going to post synoptics to match dates for you......assuming you have time to do it again.
Yeah that would be appreciated.
According to the stats, no.
It's back: the snow prediction accuracy thread, Round 2! Starting with the first June system. Still working out the details; I'll probably keep the same format as last year. Can Jane, the BOM and EC keep their spots? Tune in to find out. Suggestions welcome if there are any.
You assume correctly, I might not be doing every system though (because of other commitments, hopefully chiefly skiing, but also my overseas Wx project). Wonder how Synoptics could work out?
Will think on it.
Get back to me sometime before the end of next month in that case.
It would have been interesting to do the big storm, but laziness and busyness got the best of me.
Here is the first one of 2018, looking like a dusting ATM.
This season I am basing the forecast on Perisher (as I did last year).
edit: two dustings in fact.
Finally the update for the above system... that was not to be.
And the current one
I assume mountainwatch gets their info from Grasshopper
Does anyone know where/how Snow-forecast.com gets their snowfall predictions (esp given they're worldwide)?
And snow-forecast.com gets their predictions from GFS (already in the table).
This was for the last system. Was that figure correct? I'm getting conflicting numbers; I got that one from a snow depth checking app.
Frog overstated a dump signal which flopped into a 10cm topup. What's new?
Forecasts: about to experience.
Great work, @Jellybeans1000.
Looks like Jane nailed it, even if it was via a very broad range.
Another year, another comparative predictions table....
Firstly the disclaimer from last year.
This year I examined 8 systems (6 for the 4 day prediction range):
29 Aug-2 Sept
They are roughly based around a 3-4 day and 6-7 day range. These are all based upon how far away the various predictions are from the storm total measured at Perisher. The median figure was used for ranges of figures. Points are based upon the bracketed numbers, with totals on the right. Here they are....
Based on those figures, the 7 day category goes to EC/yr.no; the runner-up is its loyal sidekick GFS.
The 4 day category goes to ACCESS-G (somewhat surprisingly), with joint runners up GFS, EC and Jane.
This wouldn't be complete if we didn't combine them with last year's and see what we get! Gaps in data are not noted.
7 day category still has EC being consistently good, followed up by Jane. 4 day category has joint winners in EC and ACCESS-G, followed up again by Jane.
There you have it, another great season wrapping up. Thanks again to all
Nice analysis, thanks for taking the trouble
Love a statistical analysis! Just wondering if you could explain the raw data in the cells a little more, @Jellybeans1000 . For instance, does the figure in the cell represent the number of times a forecaster was (eg) 0-10cm off the total?
Yep, that would be it. The counts are then multiplied by the value in the brackets at the top and added up to make a total. Higher values equal higher accuracy.
You see a lot of 0-10cms, which are a mix of good guesses at dumps and well-forecast dustings. Then there are a few 50cm+ ones, which represent the dumps that fizzled or the systems that sprang up suddenly, with the former more likely. The in-between ranges cover all the other scenarios: plenty 10-20cm off, then decreasing as the distance from the actual total increases, which I think is a good result. And of course you see lower ranges in the 4 day category compared to the 7 day, so you'd expect higher scores there, but as mentioned, there is less data in that category.
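The count-to-total step described above can be sketched like this. The bracket labels and point values are assumed for illustration; only the mechanics (count per bracket, multiplied by that bracket's points, summed into the row total) come from the explanation in the thread.

```python
# Illustrative sketch of turning per-bracket counts into a
# forecaster's row total. Labels and point values are assumed.

# counts: how many times a forecaster landed in each error bracket
counts = {"0-10cm": 6, "10-20cm": 3, "20-30cm": 1, "50cm+": 1}

# points per bracket (the "value in the brackets at the top")
points = {"0-10cm": 5, "10-20cm": 4, "20-30cm": 3, "50cm+": 1}

# the total on the right of the row: higher value = higher accuracy
total = sum(n * points[bracket] for bracket, n in counts.items())
```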
Does this help at all? More than happy to take any other questions
This project has run for the past two years, so it's time to re-evaluate.
So do people think that it is useful?
Any suggestions? Constructive criticism?
Thanks in advance.
It is useful. Thanks for your work here, @Jellybeans
Hope you are doing better, after all that happened to you
I reckon it's great, but have near zero spare time to assist
I'm a fan .
Appreciate your efforts.
sorry I didn't give any feedback,
got lost in the black hole
will have a good look later when I get a chance, but yeah good to see it confirms what we pretty much know.
That’s all good, no worries.
First time seeing this thread. I like it! My favorite course as a statistics grad student was called Descriptive Statistics. This is a great example of where applying common sense to numerical data can yield conclusions that are relevant.
Seeing the trend over time, meaning multiple years, is always more useful than just one data point.
I have a question - if a forecaster gives a 40cm range in their prediction how do you determine whether they have been more accurate than a forecaster who gives a precise number or a smaller range? (Apologies if this question has already been asked).
ETA - to clarify, I see you use a median number, but do you apply weightings of any kind to indicate accuracy in the range?
The way I run it is to purposefully not discriminate between ranges and single numbers. As you have stated, I use the median number for ranges. I don't weight these, because the idea is that the forecaster is most confident in the region around the median for the final total.
It's not an exact science, as opposed to the Bureau's ranges, which run from a 75% chance down to a 25% chance.
If you have any ideas to improve, feel free to suggest.
I'll have a think.
It appears from the two years of data that yr.no is hands down the most accurate and they quote a single number which I find more useful in a practical sense compared with a 40cm range.
@Jellybeans - would you mind PMing me the rest of the data tables from last season? I can only see the first four in the thread. I'm having a play with adding a 'discernment' factor based on the size of the range forecast.
The GFS upgrade has gone live; it will be interesting to see if it makes a difference.