Why Target the First Difference Instead of the Actual Variable in Time Series Machine Learning Models?
I’m working on a time series prediction project using LSTM and ANN models, and I’ve come across advice suggesting that I should target the first difference of my series instead of the actual variable. Why is predicting the difference recommended over predicting the variable itself?
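For concreteness, a minimal sketch of what targeting the first difference means in practice; the toy series, column name, and predicted value below are assumptions for illustration, not part of the project:

```python
# Sketch: train on the first difference, then invert the transform.
import pandas as pd

# Toy series standing in for the actual variable
y = pd.Series([100.0, 102.0, 101.0, 105.0, 110.0], name="y")

# Target the first difference: d_t = y_t - y_{t-1}
dy = y.diff().dropna()
print(dy.tolist())  # [2.0, -1.0, 4.0, 5.0]

# Suppose the model outputs a predicted next difference (hypothetical value)
predicted_diff = 2.5

# Invert the differencing to recover a forecast on the original scale
predicted_level = y.iloc[-1] + predicted_diff
print(predicted_level)  # 112.5
```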
Not getting any predictions with a larger window size in a trained LSTM model
I want to perform time series analysis on the given data, where the model predicts the next 24 timesteps by looking at the last n data points (the window size). I wrote the code and ran it without any problems; however, when I increase the window size from 12 to 120, I no longer get any predictions and only receive NaN values from my loss function. I would appreciate help with this issue.
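For reference, a minimal sketch of the sliding-window setup described above (predicting the next 24 steps from the last n points); the toy series, function name, and shapes are assumptions rather than the original code:

```python
# Sketch: build (X, y) pairs for a window-based multi-step forecaster.
import numpy as np

def make_windows(series, window_size, horizon=24):
    """X holds the last `window_size` points, y the next `horizon` points."""
    X, y = [], []
    for i in range(len(series) - window_size - horizon + 1):
        X.append(series[i : i + window_size])
        y.append(series[i + window_size : i + window_size + horizon])
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 50, 1000))  # toy data
X, y = make_windows(series, window_size=120, horizon=24)
print(X.shape, y.shape)  # (857, 120) (857, 24)
```

Note that growing the window from 12 to 120 both shrinks the number of training samples and lengthens each input sequence, which is worth keeping in mind when diagnosing the NaN losses.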