Research Article

An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence

by  Sudarshan Nandy, Achintya Das, Partha Pratim Sarkar
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 39 - Issue 8
Published: February 2012
DOI: 10.5120/4837-7097

Sudarshan Nandy, Achintya Das, Partha Pratim Sarkar. An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence. International Journal of Computer Applications, 39(8), February 2012, 1-7. DOI=10.5120/4837-7097

                        @article{ 10.5120/4837-7097,
                        author  = { Sudarshan Nandy and Achintya Das and Partha Pratim Sarkar },
                        title   = { An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence },
                        journal = { International Journal of Computer Applications },
                        year    = { 2012 },
                        volume  = { 39 },
                        number  = { 8 },
                        pages   = { 1-7 },
                        doi     = { 10.5120/4837-7097 },
                        publisher = { Foundation of Computer Science (FCS), NY, USA }
                        }
                        %0 Journal Article
                        %D 2012
                        %A Sudarshan Nandy
                        %A Achintya Das
                        %A Partha Pratim Sarkar
                        %T An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence
                        %J International Journal of Computer Applications
                        %V 39
                        %N 8
                        %P 1-7
                        %R 10.5120/4837-7097
                        %I Foundation of Computer Science (FCS), NY, USA
Abstract

The present work deals with an improved back-propagation algorithm based on the Gauss-Newton numerical optimization method for fast convergence. Conventional back-propagation relies on the steepest-descent method. The proposed algorithm is tested on various datasets and compared with the steepest-descent back-propagation algorithm; in both cases the optimization is carried out on a multilayer neural network. The efficacy of the proposed method is observed during training, as it converges quickly on the datasets used in the tests. The memory required to compute the steps of the algorithm is also analyzed.
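The Gauss-Newton update the abstract refers to replaces the plain gradient step of steepest descent with a step scaled by the (approximate) curvature of the squared-error surface, Δw = (JᵀJ)⁻¹Jᵀr, where J is the Jacobian of the model outputs and r the residual vector. A minimal, illustrative sketch of this update — on a one-parameter least-squares fit rather than the paper's full multilayer network, so the matrix inverse reduces to a scalar division — might look like:

```python
# Illustrative Gauss-Newton sketch (NOT the paper's exact algorithm):
# fit y = exp(a*x) by least squares using the update
#   a <- a + (J^T J)^{-1} J^T r,
# where r_i = y_i - f(x_i) and J_i = df/da evaluated at x_i.
import math

def gauss_newton_fit(xs, ys, a=0.0, iters=20):
    for _ in range(iters):
        # residuals and Jacobian of the model f(x) = exp(a*x)
        r = [y - math.exp(a * x) for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        # scalar case of (J^T J)^{-1} J^T r
        den = sum(j * j for j in J)
        if den == 0.0:
            break
        a += sum(j * ri for j, ri in zip(J, r)) / den
    return a

# Noise-free data generated with a_true = 0.5; Gauss-Newton recovers it
# in a handful of iterations, far faster than a fixed-rate gradient step.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]
a_hat = gauss_newton_fit(xs, ys)
```

The curvature scaling is what gives Gauss-Newton its fast (near-quadratic) convergence close to a minimum, at the cost of forming and inverting JᵀJ — the memory trade-off the abstract mentions analyzing.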

References
  • D. E. Rumelhart, G. E. Hinton and R. J. Williams, 1986, “Learning Representations by Back-propagating Errors”, Nature, vol. 323, pp. 533-536.
  • R. A. Jacobs, 1988, “Increased Rates of Convergence Through Learning Rate Adaptation”, Neural Networks, vol. 1, no. 4, pp. 295-308.
  • M. G. Bello, 1994, “Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks”, IEEE Trans. Neural Netw., vol. 5, no. 6, pp. 989-993.
  • T. Samad, 1990, “Back-propagation improvements based on heuristic arguments”, Proceedings of International Joint Conference on Neural Networks, Washington, vol. 1, pp. 565-568.
  • A. Sperduti and A. Starita, 1993, “Speed up learning and network optimization with Extended Back-propagation”, Neural Networks, vol. 6, pp. 365-383.
  • A. van Ooyen and B. Nienhuis, 1992, “Improving the convergence of the back-propagation algorithm”, Neural Networks, vol. 5, pp. 465-471.
  • C. Charalambous, 1992, “Conjugate gradient algorithm for efficient training of artificial neural networks”, IEE Proceedings-G, vol. 139, no. 3.
  • K. Levenberg, 1944, “A method for the solution of certain problems in least squares”, Quart. Appl. Math., vol. 2, pp. 164-168.
  • D. Marquardt, 1963, “An algorithm for least-squares estimation of nonlinear parameters”, SIAM J. Appl. Math., vol. 11, pp. 431-441.
  • M. T. Hagan and M. B. Menhaj, 1994, “Training feedforward networks with the Marquardt algorithm”, IEEE Trans. Neural Netw., vol. 5, no. 6, pp. 989-993.
  • G. Lera and M. Pinzolas, Sep. 2002, “Neighborhood based Levenberg-Marquardt algorithm for neural network training”, IEEE Trans. Neural Netw., vol. 13, no. 5, pp. 1200-1203.
  • R. Saman and A. T. Bryan, 2011, “A new formulation for feedforward Neural Networks”, IEEE Trans. Neural Netw., vol. 22, no. 10, pp. 1588-1598.
  • B. M. Wilamowski, S. Iplikci, O. Kaynak and M. O. Efe, 2001, “An algorithm for fast convergence in training a Neural Network”, IEEE Proceedings, pp. 1778-1782.
  • K. Balakrishnan and V. Honavar, 1992, “Improving convergence of back propagation by handling flat-spots in the output layer”, Proceedings of Second International Conference on Artificial Neural Networks, pp. 1-20.
  • S. M. A. Burney, T. A. Jilani and C. Ardil, 2003, “Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rate Forecasting”, World Academy of Science, Engineering & Technology, vol. 3, no. 41, pp. 171-177.
  • UCI Machine Learning Repository: Iris Data Set - http://archive.ics.uci.edu/ml/datasets/Iris
  • UCI Machine Learning Repository: Wine Data Set - http://archive.ics.uci.edu/ml/datasets/Wine
Index Terms
Computer Science
Information Sciences
Keywords

Back-propagation, Neural Network, Numerical optimization, Fast convergence algorithm
