
Learning optimization of neural networks used for MIMO applications based on multivariate functions decomposition. (English) Zbl 1237.93185

Summary: An approach based on multivariate function decomposition is presented for optimizing the learning of multi-input multi-output (MIMO) feed-forward neural networks (NNs). The approach is mainly intended for cases in which the training set for a MIMO NN is so large that learning time and computational cost become prohibitive for effective use of NNs. The basic idea is that a multivariate function can be approximated by a series containing only univariate functions. On the other hand, converting a MIMO NN into several multi-input single-output (MISO) NNs is already a common and well-studied practice in state-of-the-art NNs. The proposed method introduces a further transformation: the decomposition of each MISO NN into a collection of single-input single-output (SISO) NNs. This MISO-to-SISO decomposition is performed using the above-mentioned series from the technique of multivariate function decomposition. In this way, each SISO NN can be trained on one of the one-dimensional functions returned by the decomposition, i.e. on limited data. Moreover, the approach is easy to implement on a parallel architecture. In conclusion, the presented approach allows a MIMO NN to be treated as a collection of SISO NNs. Experimental results are shown to demonstrate that the proposed method strongly reduces learning time while preserving accuracy.
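The flavor of the idea can be illustrated with a minimal sketch. The example below uses a first-order cut-HDMR-style decomposition around an anchor point, f(x1, x2) ≈ f(c) + [f(x1, c2) − f(c)] + [f(c1, x2) − f(c)], and trains one tiny SISO network per univariate component; the target function, anchor point, and network sizes are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical MISO target function (2 inputs -> 1 output); purely
# illustrative -- it is additive, so the first-order decomposition is exact.
def f(x1, x2):
    return np.sin(x1) + 0.5 * x2**2

# First-order cut-HDMR decomposition around an anchor c = (c1, c2):
#   f(x1, x2) ~ f(c) + [f(x1, c2) - f(c)] + [f(c1, x2) - f(c)]
c1, c2 = 0.0, 0.0
f0 = f(c1, c2)
g1 = lambda x: f(x, c2) - f0   # univariate component in x1
g2 = lambda x: f(c1, x) - f0   # univariate component in x2

# Tiny SISO network (1-H-1 tanh MLP) trained by full-batch gradient descent
# on one-dimensional data -- a stand-in for each SISO NN of the approach.
def train_siso(xs, ys, hidden=16, lr=0.05, epochs=3000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (hidden, 1)); b1 = np.zeros((hidden, 1))
    W2 = rng.normal(0.0, 0.1, (1, hidden)); b2 = np.zeros((1, 1))
    X = xs.reshape(1, -1); Y = ys.reshape(1, -1); n = X.shape[1]
    for _ in range(epochs):
        H = np.tanh(W1 @ X + b1)            # forward pass
        P = W2 @ H + b2
        dP = 2.0 * (P - Y) / n              # gradient of mean squared error
        dW2 = dP @ H.T; db2 = dP.sum(axis=1, keepdims=True)
        dH = (W2.T @ dP) * (1.0 - H**2)     # backprop through tanh
        dW1 = dH @ X.T; db1 = dH.sum(axis=1, keepdims=True)
        W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1
    return lambda x: (W2 @ np.tanh(W1 @ np.atleast_2d(x) + b1) + b2).ravel()

# Each SISO net is trained on limited one-dimensional data.
xs = np.linspace(-2.0, 2.0, 101)
net1 = train_siso(xs, g1(xs))
net2 = train_siso(xs, g2(xs))

# Reassemble the MISO prediction from the anchor value and the SISO parts.
def f_hat(x1, x2):
    return f0 + net1(x1) + net2(x2)

err = np.max(np.abs(f_hat(xs, xs) - f(xs, xs)))
print(f"max abs error on the diagonal: {err:.3f}")
```

Since the two SISO nets see disjoint one-dimensional problems, they could be trained in parallel, which mirrors the parallel-architecture remark in the summary; for non-additive targets the first-order series is only an approximation.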

MSC:

93E35 Stochastic learning and adaptive control
92B20 Neural networks for/in biological studies, artificial life and related topics
93B11 System structure simplification
