NUST Institutional Repository

An Efficient Approach Towards Predicting Residential Energy Consumption Via Encoder-Decoder CNN-LSTM

Show simple item record

dc.contributor.author Iqbal, Shuaib
dc.date.accessioned 2023-08-07T10:26:44Z
dc.date.available 2023-08-07T10:26:44Z
dc.date.issued 2022
dc.identifier.other 276612
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/35743
dc.description Supervisor: Dr. Farhan Hussain en_US
dc.description.abstract In today's world, rapid growth in the human population and advances in technology have drastically increased electricity consumption. Electricity consumption prediction has therefore become important for improving power management and for coordinating the power consumed in a building with the electricity grid. Existing models are limited in their ability to forecast power usage efficiently because of constraints such as climate change and the dynamic behaviour of residents. In this work we target an electricity consumption prediction model that is more robust and performs better than existing models. We propose a deep learning model based on an encoder-decoder CNN-LSTM. A convolutional neural network acts as the encoder; it comprises two convolutional layers, each followed by a max-pooling layer. The first convolutional layer reads the input sequence and projects it onto feature maps, using a kernel size of two time steps, 64 feature maps per convolutional layer, and ReLU as the activation function. The max-pooling layer after the first convolutional layer reduces the feature maps, preserving one-fourth of the values with the maximum signal. The second convolutional layer repeats this procedure to amplify any salient features, and the second max-pooling layer again reduces the feature maps it produces. The pooled feature maps are then flattened into one long vector that serves as the input to the decoding phase. The encoder's fixed-length output is repeated once for each time step in the output sequence and fed into an LSTM decoder, designed as a hidden layer of 100 units with tanh as the activation function. The decoder returns the entire sequence, with each of the 100 units providing a value for each of the 60 minutes being forecast, so the output sequence is predicted at every minute; this full decoder output is passed to a dense layer for the final prediction (a code sketch of this architecture follows the item record below). Finally, MSE (Mean Square Error), RMSE (Root Mean Square Error), MAE (Mean Absolute Error), and MAPE (Mean Absolute Percentage Error) are used to assess the model's performance, and the results show that the proposed method gives improved predictions compared with other traditional prediction models. en_US
dc.language.iso en en_US
dc.publisher College of Electrical & Mechanical Engineering (CEME), NUST en_US
dc.subject Convolutional Neural Networks, Artificial Intelligence, Long Short-Term Memory, Household Power Consumption, Encoder-Decoder Model, Deep Learning en_US
dc.title An Efficient Approach Towards Predicting Residential Energy Consumption Via Encoder-Decoder CNN-LSTM en_US
dc.type Thesis en_US
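
The following is a minimal Keras sketch of the encoder-decoder CNN-LSTM described in the abstract, not the author's exact implementation. The abstract only fixes the kernel size (two time steps), 64 feature maps per convolutional layer, ReLU in the encoder, a 100-unit tanh LSTM decoder, a dense output layer, and a 60-minute forecast horizon; the input window length, number of input features, pooling size, optimizer, and training settings below are assumptions added for illustration.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, MaxPooling1D, Flatten,
                                     RepeatVector, LSTM, TimeDistributed, Dense)

n_in = 120        # assumed: minutes of history fed to the encoder
n_out = 60        # 60-minute forecast, one value per minute (from the abstract)
n_features = 1    # assumed: univariate household power series

model = Sequential([
    # Encoder: two Conv1D layers (kernel size = 2 time steps, 64 feature maps,
    # ReLU), each followed by max pooling to shrink the feature maps.
    Conv1D(64, kernel_size=2, activation='relu', input_shape=(n_in, n_features)),
    MaxPooling1D(pool_size=2),
    Conv1D(64, kernel_size=2, activation='relu'),
    MaxPooling1D(pool_size=2),
    # Flatten the pooled feature maps into one long vector, then repeat that
    # fixed-length encoding once for each output time step.
    Flatten(),
    RepeatVector(n_out),
    # Decoder: 100-unit LSTM with tanh, returning the full output sequence,
    # with a dense layer applied at every time step for the final prediction.
    LSTM(100, activation='tanh', return_sequences=True),
    TimeDistributed(Dense(1)),
])
model.compile(optimizer='adam', loss='mse')   # MSE matches the reported metrics
model.summary()

# Illustrative fit on dummy windows shaped like minute-level consumption data;
# real training would use windows cut from the household power dataset.
X = np.random.rand(32, n_in, n_features).astype('float32')
y = np.random.rand(32, n_out, 1).astype('float32')
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```

The RepeatVector/TimeDistributed pairing is one common way to realise the "repeat the encoder's fixed-length output for each output time step, decode with an LSTM, then apply a dense layer per step" description; evaluation against MSE, RMSE, MAE, and MAPE would be computed on the model's per-minute predictions.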



This item appears in the following Collection(s)

  • MS [329]

