
Deep learning: Traffic speed analysis using recurrent neural networks (LSTMs)

  • Writer: Karthik Jamalpur
  • Nov 28, 2021
  • 2 min read

Updated: Dec 6, 2021






This project aims to use long short-term memory (LSTM) networks for short-term traffic speed prediction. I built four LSTM models in Keras (regression with time steps, memory between batches, the window method, and stacked LSTMs with memory between batches) and evaluated them in Python, using root mean squared error (RMSE) as the error indicator to pick the best model.
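To make the evaluation step concrete, here is a minimal sketch of the RMSE indicator used to compare the models; the function name and the sample values are illustrative, not taken from the project:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean squared error between two equal-length sequences."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Toy speeds vs. toy predictions: errors are -1, 0, 1,
# so RMSE = sqrt((1 + 0 + 1) / 3) = sqrt(2/3)
error = rmse([50.0, 52.0, 55.0], [51.0, 52.0, 54.0])
```

Lower RMSE means the model's predicted speeds sit closer to the observed ones, which is why it is a natural indicator for comparing the four models.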


Let's begin the story.


Neural networks:

Recurrent neural networks handle sequence dependency (the correlation between past and present values) in an elegant way. The Long Short-Term Memory (LSTM) network is a recurrent architecture widely used in deep learning because it can be trained successfully on large amounts of data. LSTMs are trained with backpropagation through time, and they are built from memory blocks connected through layers. This makes it possible to create large recurrent networks that address difficult sequence problems, and training over many epochs improves the predictions. I used the Keras package to do the forecast.
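A minimal sketch of the kind of Keras LSTM regression model described here; the layer size, optimizer, and toy data are assumptions for illustration, not the project's exact configuration:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# One LSTM memory block feeding a single-unit output layer:
# input is 3 time steps of 1 feature (the traffic speed).
model = Sequential([
    LSTM(4, input_shape=(3, 1)),
    Dense(1),  # predict the next speed value
])
model.compile(loss="mean_squared_error", optimizer="adam")

# Ten toy samples, just to show the fit/predict cycle.
X = np.random.rand(10, 3, 1)
y = np.random.rand(10, 1)
model.fit(X, y, epochs=1, batch_size=1, verbose=0)
```

The same `fit`/`predict` cycle applies to all four variants below; only the data shaping and the state handling change.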




LSTM methods




LSTM with memory between batches: -


Long sequences can be recognized by the LSTM's internal memory. To take advantage of that internal state, we reset it explicitly when fitting the model rather than letting it clear after every batch. The network can then build up state over a long sequence and maintain it, if needed, when making predictions. This requires that the training data not be shuffled.
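The memory-between-batches idea can be sketched in Keras with a stateful LSTM: shuffling is disabled so state carries across batches within an epoch, and the state is reset manually between epochs. The layer size, batch size, and toy data below are assumptions for illustration:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

batch_size = 1  # stateful LSTMs need a fixed batch size
model = Sequential([
    LSTM(4, batch_input_shape=(batch_size, 3, 1), stateful=True),
    Dense(1),
])
model.compile(loss="mean_squared_error", optimizer="adam")

X = np.random.rand(8, 3, 1)  # toy data in original (unshuffled) order
y = np.random.rand(8, 1)

for _ in range(2):  # epochs, looped manually so we control the reset
    model.fit(X, y, epochs=1, batch_size=batch_size,
              shuffle=False, verbose=0)
    model.reset_states()  # clear the carried state between epochs
```

Keeping `shuffle=False` preserves the temporal order, so the state accumulated on one batch is meaningful for the next.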



LSTM for regression with time steps: -


In this method, data preparation for the LSTM network frames past observations as time steps, and the same sequence problem may have a varied number of time steps per sample. In this project, say a specific speed value (the maximum or minimum speed in the dataset) is reached in a specific time period: that value would be the sample, the observations leading up to it would be the time steps, and the variables observed would be the features. Instead of using past observations as separate input features, we use them as prior time steps of a single input feature, which is a more accurate framing of the problem.
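The difference between framing past values as separate features versus as time steps comes down to how the input array is shaped for the LSTM. A small sketch with illustrative numbers:

```python
import numpy as np

look_back = 3
# Four windowed samples of three past speed values each.
X = np.arange(12, dtype=float).reshape(4, look_back)

# Window method: each past value is a separate input feature,
# presented to the LSTM as a single time step with 3 features.
X_window = X.reshape(X.shape[0], 1, look_back)   # (4, 1, 3)

# Time-step method: each past value is one time step of a
# single feature, the framing this section describes.
X_steps = X.reshape(X.shape[0], look_back, 1)    # (4, 3, 1)
```

Both arrays hold the same numbers; only the `(samples, time_steps, features)` layout changes, and the LSTM treats the two layouts very differently.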


LSTM for regression using the window method: -


The window comprises the most recent multiple time steps used as input at each step, and the window size is a parameter that can be tuned for each problem.

For instance, given the current time (t), to predict the value at the next time in the sequence (t+1), we can use the current time (t) as well as the two prior times (the look-back) as input variables. Phrased as a regression problem, the input variables are t-2, t-1, and t, and the output variable is t+1.
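That framing can be sketched as a small data-preparation function; the name `create_dataset` and the sample speed values are illustrative, not from the project:

```python
import numpy as np

def create_dataset(series, look_back=3):
    """Slide a window of `look_back` past values over the series.

    X[i] holds the values at times t-look_back .. t-1 and y[i] is
    the value at time t, turning the sequence into a supervised
    regression problem.
    """
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    return np.array(X), np.array(y)

speeds = np.array([50.0, 52.0, 55.0, 53.0, 48.0, 47.0, 51.0])
X, y = create_dataset(speeds, look_back=3)
# First pair: X[0] = [50, 52, 55] (t-2, t-1, t) -> y[0] = 53 (t+1)
```

Changing `look_back` is exactly the window-size tuning the paragraph above describes.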


Stacked LSTM with memory between batches: -


LSTMs can be trained successfully when stacked into deep network architectures, and they can be stacked in Keras in the same way as other layer types. The one required addition to the configuration is that every LSTM layer preceding another LSTM layer must return the full sequence of outputs rather than only the final one.
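A minimal sketch of such a stacked, stateful configuration in Keras (layer sizes and batch size are assumptions); note `return_sequences=True` on the first layer so the second LSTM receives one output per time step:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

batch_size = 1
model = Sequential([
    # The first LSTM must return the whole sequence, otherwise the
    # next LSTM layer would only see the final time step's output.
    LSTM(4, batch_input_shape=(batch_size, 3, 1),
         stateful=True, return_sequences=True),
    LSTM(4, stateful=True),  # last LSTM returns only the final output
    Dense(1),
])
model.compile(loss="mean_squared_error", optimizer="adam")
```

Training then follows the same unshuffled fit-and-reset loop as the single-layer stateful model.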




 
 
 


©2021 by KARTHIK JAMALPUR. Proudly created with Wix.com
