Describe the workflow you want to enable
I would like to be able to compare whether one forecast is statistically better than another.
Describe your proposed solution
Under certain conditions, the Diebold-Mariano test achieves this. There's an example in Python here.
Describe alternatives you've considered, if relevant
I'm not sure there are alternatives to this.
Additional context
In time series forecasting, we often want to know which forecast performs better. This test puts the difference in performance on a firm statistical footing.
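For context, the statistic itself is not hard to compute. Below is a minimal sketch of my own (not taken from any package mentioned in this thread), assuming squared-error loss, one-step-ahead forecasts, and the asymptotic normal approximation without the Harvey small-sample correction:

```python
import numpy as np
from scipy import stats

def dm_test(actual, pred1, pred2, h=1, loss=lambda e: e**2):
    """Diebold-Mariano test for equal predictive accuracy of two forecasts.

    Returns (dm_stat, p_value). A negative statistic favours pred1,
    a positive one favours pred2. `h` is the forecast horizon.
    """
    actual = np.asarray(actual, dtype=float)
    # Loss differential between the two forecasts' errors
    d = loss(actual - np.asarray(pred1)) - loss(actual - np.asarray(pred2))
    n = len(d)
    d_bar = d.mean()
    # Long-run variance estimate: autocovariances up to lag h-1
    gamma = [np.sum((d[k:] - d_bar) * (d[: n - k] - d_bar)) / n
             for k in range(h)]
    var_d = (gamma[0] + 2.0 * sum(gamma[1:])) / n
    dm_stat = d_bar / np.sqrt(var_d)
    # Two-sided p-value under the standard normal approximation
    p_value = 2.0 * stats.norm.sf(abs(dm_stat))
    return dm_stat, p_value
```

With two forecasts of the same series where one has clearly smaller errors, the statistic should be large in magnitude and the p-value small.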
My personal take is that this is unlikely to happen in scikit-learn, so I am going to close this. More than happy to hear other opinions on this!
A few reasons to explain why I think this should be closed:
full disclosure: I had never heard of it before, but it seems specific to time-series forecasting, which I would say is a bit outside scikit-learn's core focus
the fact that you asked pingouin to consider adding this feature, and that the response from someone who seems to know this kind of test much better than I do was that "The DM test is also very specific in a way", suggests that scikit-learn is not a good place for this either; see Diebold-Mariano test / time series and forecasting tests raphaelvallat/pingouin#434 (comment)
people tend to underestimate how much work is needed to add something new to scikit-learn (or similarly sized projects). scikit-learn currently has 480+ open issues with the label "New feature", and I would say most of them have a very small chance of ever being worked on, let alone being merged one day ...
I would encourage you to see whether there is a better-suited package in the ecosystem to host this kind of functionality. Not sure whether it fits your use case, but I found one project on PyPI, https://pypi.org/project/dieboldmariano, that may be worth looking at. The repo you point to has some Python 2-specific code in its README, which is generally not a great sign.