
Simulating a Deep Learning Network on a Graphical Processing Unit to Predict Water Level
Neeru Singh1, Supriya P. Panda2

1Neeru Singh*, Computer Science & Engineering Department, Manav Rachna International Institute of Research and Studies, Faridabad, India.
2Dr. Supriya P. Panda, Computer Science & Engineering Department, Manav Rachna International Institute of Research and Studies, Faridabad, India.

Manuscript received on March 30, 2020. | Revised Manuscript received on April 05, 2020. | Manuscript published on April 30, 2020. | PP: 1222-1229 | Volume-9 Issue-4, April 2020. | Retrieval Number: D8452049420/2020©BEIESP | DOI: 10.35940/ijeat.D8452.049420
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Deep learning is widespread across fields such as the health industry, voice recognition, image and video classification, real-time rendering, face recognition, and many other domains. Fundamentally, deep learning is favoured for three reasons: its ability to perform better with a huge amount of training data, its high computational speed, and its ability to learn at multiple levels of abstraction and representation. Accelerating deep machine learning requires a high-performance platform, which means accelerated hardware for training convoluted deep learning problems. When training a deep learning network on a large dataset takes hours, days, or weeks, accelerated hardware that reduces the computational overhead can be used. The main aim of most research studies is to optimize prediction results in terms of accuracy, error rate, and execution time. The Graphical Processing Unit (GPU) is one such accelerator that currently prevails in reducing training time owing to its parallel architecture. In this paper, a multi-level (deep) learning approach is simulated on both the Central Processing Unit (CPU) and the GPU. Several studies claim that GPUs deliver accurate results at maximum speed. MATLAB is the framework used in this work to train the deep learning network for predicting the ground water level from a dataset of three parameters: temperature, rainfall, and water requirement. Thirteen years of data for the Faridabad district of Haryana, from 2006 to 2018, are used to train, validate, test, and analyze the network on the CPU and the GPU. The training function used was trainlm for training the network on the CPU and trainscg for the GPU, since the GPU does not support Jacobian training. From our results, it is concluded that for a large dataset the training accuracy increases and the training time decreases on the GPU compared with the CPU. Overall performance improves when the network is trained on the GPU, making it the better method for predicting the water level. The proficiency estimation of the network shows the maximum regression value, the least Mean Square Error (MSE), and the highest performance value for the GPU during training.
Keywords: Deep Learning, Graphical Processing Unit (GPU), Central Processing Unit (CPU), Prediction, Artificial Neural Network.
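
The abstract names the MATLAB training functions used on each device (trainlm on the CPU, trainscg on the GPU, since Jacobian training is not supported on the GPU). The short sketch below is not the authors' code; it only illustrates, under stated assumptions, how such a CPU/GPU comparison is typically expressed with MATLAB's neural network functions. The layer sizes, data-split ratios, file name, and variable names (X, T) are illustrative assumptions.

% Minimal sketch (assumptions noted above), not the authors' implementation.
% X: 3-by-N matrix of inputs (temperature, rainfall, water requirement)
% T: 1-by-N vector of target ground water levels
% load('faridabad_2006_2018.mat', 'X', 'T');   % hypothetical data file

hiddenLayers = [10 10];                      % assumed hidden-layer sizes

% --- CPU training with Levenberg-Marquardt (Jacobian-based) ---
netCPU = fitnet(hiddenLayers, 'trainlm');
netCPU.divideParam.trainRatio = 0.70;        % assumed train/validation/test split
netCPU.divideParam.valRatio   = 0.15;
netCPU.divideParam.testRatio  = 0.15;
[netCPU, trCPU] = train(netCPU, X, T);       % runs on the CPU by default

% --- GPU training with scaled conjugate gradient ---
% trainlm (Jacobian training) is not supported on the GPU, so trainscg is used.
netGPU = fitnet(hiddenLayers, 'trainscg');
[netGPU, trGPU] = train(netGPU, X, T, 'useGPU', 'yes');

% Compare the mean squared error of the two trained networks
mseCPU = perform(netCPU, T, netCPU(X));
mseGPU = perform(netGPU, T, netGPU(X));

The design point the sketch illustrates is the one made in the abstract: the GPU path must swap the Jacobian-based trainlm for a gradient-based function such as trainscg, and the 'useGPU' flag is what moves training onto the accelerator.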