E3F9 Look ahead bias in performance measure? · Issue #25 · PYFTS/pyFTS · GitHub

Look ahead bias in performance measure? #25

@wangtieqiao

Description


I ran the notebook Chen - ConventionalFTS.ipynb and saw the good results, but I am a bit skeptical that they are really that good. Following the logic of the bchmk.sliding_window_benchmarks code, as well as the plots you show, I suspect you may be comparing the predicted T+1 series against the current T series.

To be more specific: given test data of length L, T[0], T[1], ..., T[L-1], model.predict produces a T+1 value for each value supplied, so the prediction has the same length as the given test data.

However, when you plot or check the performance, you cannot directly compare the test data against the predictions. For example, your code at Measures.py line 396 computes rmse(datum, forecast).

The correct measure should probably be rmse(datum[1:], forecast[:-1]).
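A minimal sketch of the alignment issue (this is my own toy RMSE, not the actual pyFTS code; the array values are made up for illustration). When forecast[t] is the prediction *for* time t+1, scoring it against data[t] compares each forecast with the very input it was made from, which makes the error look deceptively small:

```python
import numpy as np

def rmse(targets, forecasts):
    """Plain RMSE between two equal-length arrays."""
    targets = np.asarray(targets, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    return np.sqrt(np.mean((targets - forecasts) ** 2))

# test_data[t] is the observed value at time t;
# forecasts[t] is the model's prediction FOR time t+1,
# produced from the information available at time t.
test_data = np.array([10.0, 12.0, 11.0, 13.0, 14.0])
forecasts = np.array([10.1, 11.9, 11.2, 12.8, 13.9])

# Misaligned: the forecast of t+1 is scored against the value at t.
biased = rmse(test_data, forecasts)

# Aligned: the forecast made at time t is scored against the value at t+1.
aligned = rmse(test_data[1:], forecasts[:-1])
```

Here biased comes out far smaller than aligned, even though the model has effectively just echoed its input.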

Also, in your notebook plot, if you shift the prediction by -1 steps you will see a different picture. It will look like most time series models, where prediction = lag(1) + noise, which is exactly what we hope to overcome.
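The lag(1) + noise point can be demonstrated with a naive persistence baseline (my own sketch, not pyFTS code): on a random walk, a model that simply repeats the last observation scores a perfect 0 under the misaligned comparison, while its true one-step-ahead error is the full step noise.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random-walk series: the textbook case where lag-1 persistence looks strong.
series = np.cumsum(rng.normal(size=200))

# Naive persistence "model": the forecast for t+1 is just the value at t.
persistence = series.copy()

def rmse(a, b):
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

# Misaligned scoring: persistence appears to be a perfect model.
misaligned = rmse(series, persistence)        # exactly 0.0

# Aligned scoring: the real one-step error is the step-noise magnitude.
aligned = rmse(series[1:], persistence[:-1])  # close to the noise std
```

Any model whose misaligned score is much better than its aligned score is likely just reproducing the previous observation.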

Let me know if I have misunderstood the code/logic...


BTW, nice work. I am still trying it out and hope to confirm that I can use it in my project.
