References

[Aar89] Aarts, E. - Korst, J. “Simulated Annealing and Boltzmann Machines”, John Wiley, New York, 1989.

[Abu90] Abu-Mostafa, Y. S. “Learning from Hints in Neural Networks”, Journal of Complexity, Vol. 6. pp. 192-198. 1990.

[Abu95] Abu-Mostafa, Y. S. “Hints”, Neural Computation, Vol. 7. pp. 639-671. 1995.

[Aga01] Agarwal, R. P. - Meehan, M. - O’Regan, D. “Fixed Point Theory and Applications”, Cambridge University Press, 2001.

[Aka69] Akaike, H. “Fitting Autoregressive Models for Prediction”, Ann. Inst. Stat. Math. Vol. 21. pp. 243-247. 1969.

[Aka74] Akaike, H. "A New Look at the Statistical Model Identification", IEEE Trans. on Automatic Control, AC-19. No. 6. pp. 716-723. 1974.

[Alb75] Albus, J. S. "A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)", Transactions of the ASME, Journal of Dynamic Systems, Measurement, and Control, pp. 220-227. Sep. 1975.

[Ama95] Amari, S. - Cichocki, A. - Yang, H. H. "Recurrent Networks for Blind Separation of Sources", Proc. of the International Symposium on Nonlinear Theory and its Applications, NOLTA-95, Las Vegas, USA, pp. 37-42. 1995.

[Ama97] Amari, S. - Chen, T. - Cichocki, A. "Stability Analysis of Learning Algorithms for Blind Source Separation", Neural Networks, Vol. 10. No. 8. pp. 1345-1351. 1997.

[Ama99] Amari, S. - Wu, S. ”Improving support vector machine classifiers by modifying kernel functions”, Neural Networks, Vol. 12. pp. 783-789. 1999.

[Bab89] Baba, N. "A New Approach for Finding the Global Minimum of Error Function of Neural Networks", Neural Networks, Vol. 2. pp. 367-373. 1989.

[Bac91] Back, A. D. - Tsoi, A. C. "FIR and IIR Synapses, a New Neural Network Architecture for Time Series Modeling", Neural Computation, Vol. 3. pp. 375-385. 1991.

[Bal89] Baldi, P. - Hornik, K. "Neural Networks and Principal Component Analysis: Learning from Examples Without Local Minima", Neural Networks, Vol. 2. No. 1. pp. 53-58. 1989.

[Bal00] Balestrino, A. - Caiti, A. "Approximation of Hammerstein/Wiener Dynamic Models", IEEE International Joint Conference on Neural Networks IJCNN'00, Vol. 1. pp. 1070-1074. 2000.

[Bar93] Barron, A. R. "Universal Approximation Bounds for Superposition of Sigmoidal Functions", IEEE Trans. on Information Theory, Vol. 39. No. 3. pp. 930-945. 1993.

[Bat92] Battiti, R. "First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method", Neural Computation, Vol. 4. pp. 141-166. 1992.

[Bau89] Baum, E. B. - Haussler, D. "What Size Net Gives Valid Generalization?", Neural Computation, Vol. 1. No. 1. pp. 151-160. 1989.

[Bau01] Baudat, G. - Anouar, F. “Kernel-based methods and function approximation”, International Joint Conference on Neural Networks, pp. 1244-1249. Washington, DC, 2001.

[Bea94] Beaufays, F. - Wan, E. "Relating Real-Time Backpropagation and Backpropagation-Through-Time: An Application of Flow Graph Interreciprocity", Neural Computation, Vol. 6. pp. 296-306. 1994.

[Bek93] Bekey, G. - Goldberg, K. Y. “Neural Networks in Robotics”, Kluwer Academic Publishers, Boston, 1993.

[Bel95] Bell, A. - Sejnowski, T. "Blind Separation and Blind Deconvolution: an Information-Theoretic Approach", Proc. of the 1995 IEEE International Conference on Acoustics, Speech and Signal Processing, Detroit, USA, pp. 3415-3418. 1995.

[Ber01] Berényi, P. - Horváth, G. - Pataki, B. - Strausz, Gy. "Hybrid-Neural Modeling of a Complex Industrial Process", Proc. of the IEEE Instrumentation and Measurement Technology Conference, IMTC'2001. Budapest, Vol. III. pp. 1424-1429. 2001.

[Bis95] Bishop, C. M.: "Neural Networks for Pattern Recognition", Clarendon Press, Oxford, 1995.

[Bis06] Bishop, C. M.: "Pattern Recognition and Machine Learning", Springer, 2006.

[Bos92] Boser, B. - Guyon, I. - Vapnik, V. ”A Training Algorithm for Optimal Margin Classifiers”, Proc. of the 5th Annual ACM Workshop on Computational Learning Theory, pp. 144-152. 1992.

[Bos97] Bossley, K. M. "Neurofuzzy Modelling Approaches in System Identification", Ph.D. Thesis, University of Southampton, 1997.

[Bou95] Boutayeb, M. - Darouach, M. "Recursive Identification Method for MISO Wiener-Hammerstein Model", IEEE Trans. on Automatic Control, Vol. 40. No. 2. pp. 287-291. 1995.

[Boy04] Boyd, S. - Vandenberghe, L. “Convex Optimization”, Cambridge University Press, 2004.

[Bre84] Breiman, L. - Friedman, J. H. - Olshen, R. A. - Stone, C. J. ”Classification and Regression Trees”, Wadsworth International Group, Belmont, CA. 1984.

[Bre96] Breiman, L. “Bagging Predictors”, Machine Learning, Vol. 24. No. 2, pp. 123-140, 1996.

[Bro93] Brown, M. - Harris, C. J. - Parks, P. "The Interpolation Capability of the Binary CMAC", Neural Networks, Vol. 6. No. 3. pp. 429-440. 1993.

[Bro94] Brown, M. - Harris, C. "Neurofuzzy Adaptive Modelling and Control", Prentice Hall, New York, 1994.

[Bru90] Bruck, J. "On the Convergence Properties of the Hopfield Model", Proceedings of the IEEE, Vol. 78. No. 10. pp. 1579-1585. 1990.

[Bry69] Bryson, A. E. - Ho, Y. C. „Applied Optimal Control”, Blaisdell, New York, 1969.

[Car89] Carroll, S. M. - Dickinson, B. W. „Construction of Neural Nets Using the Radon Transform” Proc. of the Int. Conference on Neural Networks, Vol. I, pp. 607-611. New York, 1989.

[Car94] Cardoso, J. F. - Laheld, B. "Equivariant Adaptive Source Separation", IEEE Trans. on Signal Processing, 1994.

[Caw02] Cawley, G. C. - Talbot, N. L. C. ”Reduced Rank Kernel Ridge Regression” Neural Processing Letters, Vol. 16. No. 3. pp. 293-302, 2002.

[Cha87] Chan, L. W. - Fallside, F. "An Adaptive Training Algorithm for Backpropagation Networks", Computer Speech and Language, Vol. 2. pp. 205-218. 1987.

[Cha02] Chapelle, O. - Vapnik, V. - Bousquet, O. - Mukherjee, S. ”Choosing Multiple Parameters for Support Vector Machines”, Machine Learning, Vol. 46. No. 1. pp. 131-159. 2002.

[Cha06] Chapelle, O. - Schölkopf, B. - Zien, A. ”Semi-Supervised Learning”, The MIT Press, Cambridge, MA. 2006.

[Che89] Chen, S. - Billings, S. A. - Luo, W. „Orthogonal Least Squares Methods and their Application to Non-linear System Identification”, International Journal of Control, Vol. 50. pp. 1873-1896, 1989.

[Che91] Chen, S. - Cowan, C. - Grant, P. „Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks”, IEEE Trans. on Neural Networks, Vol. 2. pp. 302-309, 1991.

[Che04] Chen, B., Chang, M., Lin, C., „Load Forecasting Using Support Vector Machines: A Study on EUNITE Competition 2001”, IEEE Transactions on Power Systems, Vol. 19, No. 4, pp. 1821-1830. 2004.

[Cho97] Cho, S. - Cho, Y. - Yoo, S. "Reliable Roll Force Prediction in Cold Mill Using Multiple Neural Networks", IEEE Trans. on Neural Networks, Vol. 8. No. 4. pp. 874-882. 1997.

[Cho98] Choi, S. - Cichocki, A. - Amari, S. "Flexible Independent Component Analysis," Proc. of the IEEE Workshop on Neural Networks for Signal Processing, Cambridge, England, 31. August - 3. September, 1998.

[Cic93] Cichocki, A. - Unbehauen, R. "Neural Networks for Optimization and Signal Processing", John Wiley & Sons, New York, 1993.

[Ciz04] Cizek, P., "Asymptotics of least trimmed squares regression", Discussion Paper 72, Tilburg University, Center for Economic Research. 2004.

[Com94] Comon, P. "Independent Component Analysis, A New Concept?", Signal Processing, Vol. 36. No. 2. pp. 287-314. 1994.

[Cov65] Cover, T. M. "Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition", IEEE Trans. on Electronic Computers, EC-14. pp. 326-334. 1965.

[Cra96] Craven, M. W. “Extracting Comprehensible Models from Trained Neural Networks” Ph.D. thesis, University of Wisconsin, Madison, USA, 1996.

[Cri99] Cristianini, N. - Campbell, C. - Shawe-Taylor, J. ”Dynamically Adapting Kernels in Support Vector Machines”, in: Kearns, M., Solla, S., Cohn, D. (eds.) Advances in Neural Information Processing Systems (NIPS), Vol. 11. MIT Press, 1999.

[Cri00] Cristianini, N. - Shawe-Taylor, J. “Support Vector Machines and Other Kernel-based Learning Methods”, Cambridge University Press, 2000.

[Cyb89] Cybenko, G. “Approximation by Superpositions of a Sigmoidal Function”, Mathematical Control Signals Systems, Vol. 2. pp. 303-314, 1989.

[DeC02] DeCoste, D. - Schölkopf, B. “Training Invariant Support Vector Machines” Machine Learning, Vol. 46. pp. 161-190. 2002.

[Dei86] Deistler, M. “Linear Dynamic Errors-in-Variables Models”, Journal of applied Probability, Vol. 23. pp. 23-39. 1986.

[Dem77] Dempster, A. P. - Laird, N. M. - Rubin, D. B. “Maximum Likelihood from Incomplete Data via the EM Algorithm (with discussion)”, Journal of the Royal Statistical Society (Series B), Vol. 39. No. 1. pp. 1-38, 1977.

[Dia96] Diamantaras, K. I. - Kung, S. Y. "Principal Component Neural Networks: Theory and Applications", John Wiley and Sons, New York, 1996.

[Dre05] Dreyfus, G. “Neural Networks. Methodology and Applications”, Springer, 2005.

[Dru93] Drucker, H. - Schapire, R. E. - Simard, P. ”Boosting Performance in Neural Networks”, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 7. No. 4, pp. 705-719. 1993.

[Dru99] Drucker, H. - Wu, D. - Vapnik, V. „Support Vector Machines for Spam Categorization”, IEEE Transactions on Neural Networks, Vol. 10. No. 5. pp. 1048-1054. 1999.

[Dun93a] Dunay, R. - Horváth, G. "Modified CMAC Neural Network Architectures for Nonlinear Dynamic System Modelling", Proceedings of the International Conference on Artificial Neural Networks (ICANN 93), Amsterdam, Holland, p. 473. 1993.

[Dun93b] Dunay, R. - Pataki, B. - Horváth, G. "Some Further Possibilities of the Application of Neural Networks for Nonlinear Dynamic System Modelling", Proceedings of the First European Congress on Fuzzy and Intelligent Technologies, EUFIT 93, Aachen, Germany, Vol. 2. pp. 930-936. 1993.

[Egm02] Egmont-Petersen, M. - de Ridder, D. - Handels, H. "Image Processing With Neural Networks – A Review", Pattern Recognition, Vol. 35. No. 10. pp. 2279-2301. 2002.

[Eld97] Eldracher, M. - Staller, A. - Pompl, R. „Adaptive Encoding Strongly Improves Function Approximation with CMAC”, Neural Computation, Vol. 9. pp. 403-417. 1997.

[Ell91] Ellison, D.: "On the Convergence of the Multidimensional Albus Perceptron", The International Journal of Robotics Research, Vol. 10. No. 4. pp. 338-357. 1991.

[Far90] Farrell, J. A. - Michel, A. N. "A Synthesis Procedure for Hopfield's Continuous-Time Associative Memory", IEEE Trans. on Circuits and Systems, Vol. 37. No. 7. pp. 877-884. 1990.

[Fle88] Fletcher, R. “Practical Methods of Optimization”, John Wiley, 1988.

[Föl89] Földiák, P. "Adaptive Network for Optimal Linear Feature Extraction" Proc. of the 1989 Intnt. Joint Conference on Neural Networks, Vol. 1. pp. 401-405. 1989.

[Fre95] Freund, Y. ”Boosting a Weak Learning Algorithm by Majority” Information and Computation, Vol. 121. No. 2. pp. 256-285, 1995.

[Fre97] Freund, Y. - Schapire, R. E. „A Decision-Theoretic Generalization of On-line Learning and its Application to Boosting”, Journal of Computer and System Sciences, Vol. 55. No. 1. pp. 119-139. 1997.

[Fuk88] Fukushima, K. "Neocognitron: A Hierarchical Neural Network Capable of Pattern Recognition", Neural Networks, Vol. 1. No. 2. pp. 119-130. 1988.

[Fun89] Funahashi, K. I. "On the Approximate Realization of Continuous Mappings by Neural Networks", Neural Networks, Vol. 2. No. 3. pp. 183-192. 1989.

[Fun03] Fung, G. - Mangasarian, O. L. - Shavlik, J. W. “Knowledge-Based Support Vector Machine Classifiers”, in: S. Becker, S. Thrun, K. Obermayer (eds.) Advances in Neural Information Processing Systems, Vol. 15. pp. 521-528. MIT Press, Cambridge, MA. USA. 2003.

[Gal89] Galar, R. "Evolutionary search with soft selection", Biological Cybernetics, Vol. 60. pp. 357-364. 1989.

[Gir96] Girolami, M. - Fyfe, C. "Higher Order Cumulant Maximisation Using Nonlinear Hebbian and Anti-Hebbian Learning for Adaptive Blind Separation of Source Signals", Proc. of the IEEE/IEE International Workshop on Signal and Image Processing, IWSIP-96, Advances in Computational Intelligence, Elsevier publishing, pp. 141-144. 1996.

[Gol89a] Goldberg, D. E. "Genetic Algorithms in Search, Optimization and Machine Learning", Addison-Wesley, 1989.

[Gol89b] Golub, G. H. - Van Loan, C. F. “Matrix Computations”, Third Edition, Johns Hopkins University Press, Baltimore and London, 1996.

[Gol96] Golden, R. M. "Mathematical Methods for Neural Network Analysis and Design", MIT Press, Cambridge, MA. 1996.

[Gon98] Gonzalez-Serrano, F. - Figueiras-Vidal, A. R. - Artés-Rodríguez, A. „Generalizing CMAC Architecture and Training”, IEEE Trans. on Neural Networks, Vol. 9. pp. 1509-1514. 1998.

[Goo95] Goonatilake, S. - Khebbal, S. (Editors) "Intelligent Hybrid Systems", John Wiley and Sons, London, 1995.

[Gra03] Granas, A. - Dugundji, J. “Fixed Point Theory” Springer-Verlag, New York, 2003.

[Gre87] Grefenstette, J. "Genesis 4.5." GA software from gref@aic.nrl.navy.mil 1987.

[Gup03] Gupta, M. M. - Jin, L. - Homma, N. ”Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory”, Wiley-IEEE Press, 2003.

[Guy96] Guyon, I. „A Scaling Law for the Validation-set Training-set Ratio”, Technical Report, AT&T Bell Laboratories. 1996.

[Had1902] Hadamard, J. “Sur les problèmes aux dérivées partielles et leur signification physique”, Princeton University Bulletin, Vol. 13. pp. 49-52. 1902.

[Ham90] Hambaba, M. L. "Nonlinear Principal Component Using Feedforward Neural Networks", Proc. of the Joint Conference on Neural Networks, pp. 3246-3248. 1990.

[Han89] Han, J. Y. - Sayeh, M. R. - Zhang, J. "Convergence and Limit Points of Neural Network and Its Application to Pattern Recognition", IEEE Trans. on Systems, Man, and Cybernetics, Vol. 19. No. 5. pp. 1217-1222. 1989.

[Hao04] Hao, X. - Zheng, P. - Xie, Zh. - Du, G. - Shen, F. ”A predictive modeling for blast furnace by integrating neural network with partial least squares regression”, Proc. of the IEEE International Conference on Industrial Technology CIT '04. Vol. 3. pp. 1162-1167. 2004.

[Har93] Harrer, H. - Galias, Z. - Nossek, J. A. "On the Convergence of Discrete Time Neural Networks", International Journal of Circuit Theory and Applications, Vol. 21. pp. 191-195. 1993.

[Has92] Hassibi, B. - Stork, D. G. "Second Order Derivatives for Network Pruning: Optimal Brain Surgeon", in: Giles, C. L., Hanson, S. J., Cowan, J. D. (Editors) Advances in Neural Information Processing Systems, Vol. 5. San Mateo, CA. Morgan Kaufman, pp. 164-171. 1993.

[Has95] Hassoun, M. H.: "Fundamentals of Artificial Neural Networks", MIT Press, Cambridge, MA. 1995.

[Has97] Hashem, S. "Optimal Linear Combinations of Neural Networks", Neural Networks, Vol. 10. No. 4. pp. 599-614. 1997.

[Has09] Hastie, T. - Tibshirani, R. - Friedman, J. ”The Elements of Statistical Learning: Data Mining, Inference, and Prediction”, Springer, 2009.

[Hay99] Haykin, S. “Neural Networks. A Comprehensive Foundation”, Second Edition, Prentice Hall, 1999.

[He93] He, X. - Asada, H. “A New Method for Identifying Orders of Input-Output Models for Nonlinear Dynamic Systems”, Proc. of the American Control Conference, San Francisco, CA. USA. pp. 2520-2523. 1993.

[Hea02] Heath, M. T. “Scientific Computing, An Introductory Survey”, McGraw-Hill, New York. 2002.

[Heb49] Hebb, D. O. "The Organization of Behaviour", John Wiley and Sons, New York, 1949.

[Hec87] Hecht-Nielsen, R. „Kolmogorov’s Mapping Neural Network Existence Theorem” Proc. of the Int. Conference on Neural Networks, Vol. III. pp. 11-13. IEEE Press, N.Y. 1987.

[Hec89] Hecht-Nielsen, R. "Neurocomputing", Addison-Wesley Publishing Co. 1989.

[Her91] Hertz, J. - Krogh, A. - Palmer, R. G. "Introduction to the Theory of Neural Computation", Addison-Wesley Publishing Co. New York. 1991.

[Hin89] Hinton, G. E. “Connectionist Learning Procedures”, Artificial Intelligence, Vol. 41. pp. 185-234, 1989.

[Hir91] Hirose, Y. - Yamashita, K. - Hijiya, S. "Back-Propagation Algorithm Which Varies the Number of Hidden Units", Neural Networks, Vol. 4. No. 1. pp. 61-66. 1991.

[Hoe70] Hoerl, A. E. - Kennard, R. W. “Ridge regression: Biased estimation for nonorthogonal problems”, Technometrics, Vol. 12. No. 3. pp. 55-67. 1970.

[Hol75] Holland, J. H. "Adaptation in Natural and Artificial Systems", Ann Arbor, MI: University of Michigan Press, 1975.

[Hol77] Holland, P. W. - Welsch, R. E. "Robust Regression Using Iteratively Reweighted Least-Squares", Communications in Statistics: Theory and Methods, A6, pp. 813-827. 1977.

[Hol92] Holland, J. H. "Genetikai algoritmusok" (Genetic Algorithms, in Hungarian), Tudomány, Sept. 1992. pp. 28-34.

[Hop82] Hopfield, J. J. "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proc. Natl. Acad. Sci. USA, Vol. 79. pp. 2554-2558. 1982.

[Hor98] Horváth, G. (ed.) “Neurális hálózatok és műszaki alkalmazásaik” (Neural Networks and their Engineering Applications, in Hungarian), university textbook, Műegyetemi Kiadó, Budapest, 1998.

[Hor99] Horváth, G. - Pataki, B. - Strausz, Gy. "Black box modeling of a complex industrial process", Proc. of the 1999 IEEE Conference and Workshop on Engineering of Computer Based Systems, Nashville, TN, USA. pp. 60-66. 1999.

[Hor07] Horváth, G. - Szabó, T. „Kernel CMAC with Improved Capability”, IEEE Trans. on Systems, Man, and Cybernetics, Part B. Vol. 37. pp. 124-138. 2007.

[Hu03] Hu, S. - Liao, X. - Mao, X. "Stochastic Hopfield Neural Networks", Journal of Physics A: Mathematical and General. Vol. 35. 2003. pp. 2235-2249.

[Hua88] Huang, W. Y. - Lippmann, R. P. “Neural Nets and Traditional Classifiers”, in: D. Z. Anderson (ed), Neural Information Processing Systems, (Denver, 1987) American Institute of Physics, pp. 387-396. New York, 1988.

[Hub81] Huber, P. J. ”Robust Statistics”, Wiley, 1981.

[Hun99] Hung, S. L. - Jan, J. C. “MS_CMAC Neural Network Learning Model in Structural Engineering”, Journal of Computing in Civil Engineering, pp. 1-11. 1999.

[Hyv97] Hyvärinen, A. - Oja, E. ”One-unit Learning Rules for Independent Component Analysis”, Advances in Neural Information Processing Systems, Vol. 9. pp. 480-486. Cambridge, MA. The MIT Press, 1997.

[Hyv00] Hyvärinen, A. - Oja, E. ”Independent Component Analysis: Algorithms and Applications”, Neural Networks, Vol. 13. No. 4-5. pp. 411-430. 2000.

[Hyv01] Hyvärinen, A. - Karhunen, J. - Oja, E. ”Independent Component Analysis”, John Wiley & Sons, New York, 2001.

[Hyv06] Hyvärinen A. – Shimizu, S. “A Quasi-stochastic Gradient Algorithm for Variance-dependent Component Analysis”, Proc. International Conference on Artificial Neural Networks (ICANN2006), Athens, Greece, pp. 211-220, 2006.

[Iig92] Iiguni, H. - Sakai, Y. "A Real-Time Learning Algorithm for a Multilayered Neural Network Based on the Extended Kalman Filter", IEEE Trans. on Signal Processing, Vol. 40. No. 4. pp. 959-966. 1992.

[Ish96] Ishikawa, M. "Structural Learning with Forgetting", Neural Networks, Vol. 9. No. 3. pp. 509-521. 1996.

[Jac91] Jacobs, R.A. - Jordan, M. I. - Nowlan, S. J. - Hinton, G. E. "Adaptive Mixture of Local Experts", Neural Computation, Vol. 3. pp. 79-89. 1991.

[Jan01] Jan, J. C. - Hung, S. L. “High-Order MS_CMAC Neural Network”, IEEE Trans. on Neural Networks, Vol. 12. pp. 598-603. May 2001.

[Jen93] Jenkins, R. E. - Yuhas, B. P. „A Simplified Neural Network Solution Through Problem Decomposition: The Case of the Truck Backer-Upper”, IEEE Trans. on Neural Networks, Vol. 4. No. 4. pp. 718-720. 1993.

[Joh92] Johansson, E. M. - Dowla, F. U. - Goodman, D. M. "Backpropagation Learning for Multilayer Feed-Forward Neural Networks Using the Conjugate Gradient Method", International Journal of Neural Systems, Vol. 2, No. 4. pp. 291-301. 1992.

[Jor94] Jordan, M. I. - Jacobs, R.A. "Hierarchical Mixture of Experts and the EM Algorithm", Neural Computation, Vol. 6. pp. 181-214. 1994.

[Jut88] Jutten, C. - Herault, J. "Independent component analysis versus PCA", Proceedings of EUSIPCO, pp. 643 - 646, 1988.

[Kar94] Karhunen, J. - Joutsensalo, J. "Representation and Separation of Signals using Nonlinear PCA Type Learning", Neural Networks, Vol. 7. No. 1. pp. 113-127. 1994.

[Kar97] Karhunen, J. - Hyvärinen, A. - Vigario, R. - Hurri, J. - and Oja, E. "Applications of Neural Blind Source Separation to Signal and Image Processing", Proc. of the IEEE 1997 International Conference on Acoustics, Speech, and Signal Processing (ICASSP'97), April 21 - 24. Munich, Germany, pp. 131-134. 1997.

[Kar98] Karhunen, J. - Pajunen, P., - Oja, E. "The Nonlinear PCA Criterion in Blind Source Separation: Relations with Other Approaches", Neurocomputing, Vol. 22. No. 1. pp. 5-20. 1998.

[Kea96] Kearns, M. „A Bound on the Error of Cross Validation Using the Approximation and Estimation Rates, with Consequences for the Training-Testing Split”, in: D. S. Touretzky, M. C. Mozer, M. E. Hasselmo (eds.) Advances in Neural Information Processing Systems (NIPS'95), MIT Press, Cambridge, MA. USA. 1996.

[Ker95] Ker, J. S. - Kuo, Y. H. - Liu, B. D. "Hardware Realization of Higher-order CMAC Model for Color Calibration", Proceedings of the IEEE International Conference on Neural Networks, Perth, Vol. 4. pp. 1656-1661. 1995.

[Kim90] Kim, S. P. - Bose, N. K. "Reconstruction of 2-D Bandlimited Discrete Signals from Nonuniform Samples", IEE Proceedings, Part F. Vol. 137. No. 3. pp. 197-204. 1990.

[Kne92] Knerr, S. - Personnaz, L. - Dreyfus, G. "Handwritten Digit Recognition by Neural Networks with Single-Layer Training", IEEE Trans. on Neural Networks, Vol. 3. 1992. pp. 962-969.

[Knu88] Knuth, D. E. "A számítógép programozás művészete 3. Keresés és rendezés" (Hungarian edition of The Art of Computer Programming, Vol. 3: Sorting and Searching), Műszaki Könyvkiadó, Budapest, 1988.

[Koh82] Kohonen, T. "Self-organized Formation of Topologically Correct Feature Maps", Biological Cybernetics, Vol. 43. pp. 59-69. 1982.

[Koh86] Kohonen, T. "Learning Vector Quantization for Pattern Recognition", Technical Report TKK-F-A601. Helsinki University of Technology. 1986.

[Koh88] Kohonen, T. "The 'Neural' Phonetic Typewriter", Computer. Vol. 21. 1988. pp. 11-22.

[Koh89] Kohonen, T. „Self-Organization and Associative Memory”, Third edition, Springer, New York, 1989.

[Koh90] Kohonen, T. "Improved Versions of Learning Vector Quantization", International Joint Conference on Neural Networks. Vol. 1. pp. 545-550. 1990.

[Koh00] Kohonen, T. - Kaski, S. - Lagus, K. - Salojärvi, J. - Honkela, J. - Paatero, V. - Saarela, A. „Self Organization of a Massive Document Collection”. IEEE Transactions on Neural Networks, Special Issue on Neural Networks for Data Mining and Knowledge Discovery, Vol. 11, No. 3. pp. 574-585. 2000.

[Kol57] Kolmogorov, A. N. "On the Representation of Continuous Functions of Many Variables by Superposition of Continuous Functions of One Variable and Addition" (in Russian), Dokl. Akad. Nauk USSR, Vol. 114. pp. 953-956. 1957.

[Kro95] Krogh, A. - Vedelsby, J. “Neural Network Ensembles, Cross Validation, and Active Learning”, in: G. Tesauro, D. S. Touretzky, T. K. Leen (eds.) Advances in Neural Information Processing Systems 7. pp. 231-238. MIT Press, Cambridge, MA. 1995.

[Kun90] Kung, S. Y. - Diamantaras, K. I. "A Neural Network Learning Algorithm for Adaptive Principal Component Extraction (APEX)", Proc. of the International Conference on Acoustics, Speech and Signal Processing, Vol. 2. pp. 861-864. 1990.

[Kůr92] Kůrková, V. „Kolmogorov Theorem and Multilayer Neural Networks”, Neural Networks, Vol. 5. pp. 501-506. 1992.

[Kůr97] Kůrková, V. - Kainen, P. C. - Kreinovich, V. "Estimates of the Number of Hidden Units and Variations with Respect to Half-Spaces", Neural Networks, Vol. 10. No. 6. pp. 1061-1068. 1997.

[Kwo03] Kwok, J. T. - Tsang, I. W. ”Linear Dependency Between ε and The Input Noise in ε-Support Vector Regression” IEEE Trans. on Neural Networks, Vol. 14. No.3. pp. 544-553. 2003.

[Lag02] Lagus, K. „Text retrieval using self-organized document maps”, Neural Processing Letters. Vol. 15, No. 1. pp. 21-29. 2002.

[Lag04] Lagus, K. - Kaski, S. - Kohonen, T. „Mining massive document collections by the WEBSOM method” Information Sciences, Vol 163. No. 1-3. pp. 135-156. 2004.

[Lan84] Landau, I. D. "A Feedback System Approach to Adaptive Filtering", IEEE Trans. on Information Theory, Vol. IT-30. No. 2. pp. 251-267. 1984.

[Lan88] Lang, K. J. - Hinton, G. E. “The development of the time-delay neural network architecture for speech recognition”, Technical Report CMU-CS-88-152, Carnegie-Mellon University, Pittsburgh, PA. 1988.

[Lan92] Lane, S. H. - Handelman, D. A. - Gelfand, J. J. "Theory and Development of Higher-Order CMAC Neural Networks", IEEE Control Systems Magazine, Vol. 12. No. 2. pp. 23-30. 1992.

[Lau06] Lauer, F. - Bloch, G. ”Incorporating Prior Knowledge in Support Vector Machines for Classification: a Review”, HAL-CCSd-CNRS, 2006. http://hal.ccsd.cnrs.fr/docs/00/06/35/21/PDF/LauerBlochNeurocomp06.pdf

[Le06] Le, Q. V. - Smola, A. J. - Gärtner, T. “Simpler Knowledge-based Support Vector Machines”, Proc. of the 23rd International Conference on Machine Learning, Pittsburgh, PA. pp. 521-528. 2006.

[LeC89] LeCun, Y. - Boser, B. - Denker, J. S. "Backpropagation Applied to Handwritten Zip Code Recognition", Neural Computation, Vol. 1. pp. 541-551. 1989.

[LeC90] Le Cun, Y. - Denker, J. S. - Solla, S. "Optimal Brain Damage", in: Touretzky, D. (Ed.) Advances in Neural Information Processing Systems, Vol. 2. San Mateo, CA. Morgan Kaufman, pp. 598-605. 1990.

[Lee01a] Lee, Y-J. - Mangasarian, O. L.”SSVM: A Smooth Support Vector Machine”, Computational Optimization and Applications, Vol. 20. No. 1. pp. 5-22. 2001.

[Lee01b] Lee, Y-J. - Mangasarian, O. L., “RSVM: Reduced Support Vector Machines”, Proc. of the First SIAM International Conference on Data Mining, Chicago, 2001.

[Lee03] Lee, H. M. - Chen, C. M. - Lu, Y. F. “A Self-Organizing HCMAC Neural-Network Classifier”, IEEE Trans. on Neural Networks, Vol. 14. pp. 15-27. Jan. 2003.

[Lei91] Leighton, R. - Conrath, B. "The Autoregressive Backpropagation Algorithm", Proc. of the 1991 International Joint Conference on Neural Networks, IJCNN’91. Vol. 2. pp. 369-377. 1991.

[Les93] Leshno, M. - Lin, V. Y. - Pinkus, A. - Schocken, S. "Multilayer Feedforward Networks With a Nonpolynomial Activation Function Can Approximate Any Function", Neural Networks, Vol. 6. pp. 861-867. 1993.

[Leu01] Leung, C. S. - Tsoi, A. C. „Two Regularizers for Recursive Least Squared Algorithms in Feedforward Multilayered Neural Networks”, IEEE Trans. on Neural Networks, Vol. 12. pp. 1314-1332. 2001.

[Lew96] Lewis, F. - Yesildirek, A. - Liu, K. “Multilayer neural-net robot controller with guaranteed tracking performance”, IEEE Trans. on Neural Networks, Vol. 7. No. 2. pp. 388-399. 1996.

[Li04] Li C. K. - Chiang, C. T. “Neural Networks Composed of Single-variable CMACs,” Proc. of the 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, pp. 3482-3487, 2004.

[Li05] Li, H., Shi, K., McLaren, P., „Neural-Network-Based Sensorless Maximum Wind Energy Capture with Compensated Power Coefficient”, IEEE Trans. on Industry Applications, Vol. 41, No. 6. pp. 1548-1556. 2005.

[Lin96] Lin, C. S. - Li, C. K. “A Low-Dimensional-CMAC-Based Neural Network”, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vol. 2. pp. 1297-1302. 1996.

[Lin98] Lindsey, C. S. - Lindblad, T. "Review of Hardware Neural Networks: A User’s Perspective", http://msia02.msi.se/~lindsey/ Oct. 1998.

[Lip92] Lipshitz, S. P. - Wannamaker, R. A. "Quantization and Dither: A Theoretical Survey", Journal of the Audio Engineering Society, Vol. 40. No. 5. pp. 355-375. 1992.

[Lju99] Ljung, L.: “System Identification, Theory for the User”, Second edition, Prentice Hall, Upper Saddle River, N. J. 1999.

[Lor76] Lorentz, G. G. "The 13th Problem of Hilbert", Proc. of Symposia in Pure Mathematics, Vol. 28. pp. 419-430. 1976.

[Lu99] Lu, B. L. - Ito, M. „Task Decomposition and Module Combination Based on Class Relations: A Modular Neural Network for Pattern Classification”, IEEE Trans. on Neural Networks, Vol. 10. No. 5. pp. 1244-1256. 1999.

[Mac95] Maclin, R. F. "Learning from Instruction and Experience: Incorporating Procedural Domain Theories into Knowledge-Based Neural Networks", Ph.D. Thesis, University of Wisconsin, Madison, USA, 1995.

[Mai99] Maiorov, V. - Pinkus, A. ”Lower Bounds for Approximation by MLP Neural Networks”, Neurocomputing, Vol. 25. pp. 81-91. 1999.

[Mal96] Malthouse, E. C. "Some Theoretical Results on Nonlinear Principal Components Analysis", ftp from http://skew2.kellogg.nwu.edu/~ecm, 24 p. 1996.

[Man04] Mangasarian, O. L. - Shavlik, J. - Wild, E. W. “Knowledge-Based Kernel Approximation”, Journal of Machine Learning Research, Vol. 5. pp. 1127-1141. 2004.

[Mát65] Mátyás, J. "Random Optimization", Automation and Remote Control, Vol. 26. pp. 246-253. 1965.

[McC43] McCulloch, W. S. - Pitts, W. "A Logical Calculus of the Ideas Immanent in Nervous Activity", Bulletin of Mathematical Biophysics, Vol. 5. pp. 115-133. 1943.

[Mer06] Merler, S. - Jurman, G. “Terminated Ramp-Support Vector Machines: A nonparametric data dependent kernel”, Neural Networks, Vol. 19. No. 10. pp. 1597-1611. 2006.

[Mha94] Mhaskar, H. N. „Approximation of Real Functions using Neural Networks”, in: H. P. Dikshit, C. A. Micchelli (eds.) Advances in Computational Mathematics, New Delhi, India. World Scientific, Singapore, pp. 267-278. 1994.

[Mik99] Mika, S. - Schölkopf, B. - Smola, A. - Müller, K-R. - Schultz, M. - Rätsch, G. “Kernel PCA and De-Noising in Feature Spaces”, in: M. S. Kearns, S. A. Solla, D. A. Kohn (eds.) Advances in Neural Information Processing Systems, Vol. 11. pp. 536-542. Cambridge, MA. The MIT Press, 1999.

[Mil91] Miller, W. T. - Box, B. A. - Whitney E. C. "Design and Implementation of a High Speed CMAC Neural Network Using Programmable CMOS Logic Cell Arrays", Advances in Neural Information Processing Systems 3. pp. 1022-1027. 1991.

[Min69] Minsky, M. - Papert, S. "Perceptrons", MIT Press, Cambridge MA. 1969.

[Mit97] Mitchell, T. ”Machine Learning”, McGraw-Hill, 1997.

[Mon89] Montana, D. L. - Davis, L. "Training Feedforward Neural Networks Using Genetic Algorithms", Proc. of the Eleventh IJCAI, pp. 762-767. 1989.

[Moo88] Moody, J. - Darken, C. "Learning with Localized Receptive Fields", Proceedings of the 1988 Connectionist Models Summer School, Pittsburgh, D. Touretzky, G. Hinton, T. Sejnowski (eds.) pp. 174-185. San Mateo, Morgan Kaufmann, 1988.

[Moo89] Moody, J. "Fast Learning in Multi-Resolution Hierarchies", Advances in Neural Information Processing, 1. Touretzky (Ed.) Los Altos, CA. Morgan Kaufmann, pp. 474-481. 1989.

[Mor77] Moré, J. J. ”The Levenberg-Marquardt Algorithm, Implementation and Theory”, in: G. A. Watson (ed.) Numerical Analysis, Lecture Notes in Mathematics, Vol. 630. pp. 105-116. Springer, 1977.

[Muk97] Mukherjee, S. - Osuna, E. - Girosi, F. “Nonlinear Prediction of Chaotic Time Series using a Support Vector Machine”, IEEE Workshop on Neural Networks for Signal Processing, NNSP'97, pp. 511-520. 1997.

[Mur94] Murata, N. - Yoshizawa, S. - Amari, S. "Network Information Criterion – Determining the Number of Hidden Units for an Artificial Neural Network Model", IEEE Trans. on Neural Networks, Vol. 5. No. 6. pp. 865-871. 1994.

[Mül97] Müller, K. R. - Smola, A. - Schölkopf, B. - Rätsch, G. - Kohlmorgen, J. - Vapnik, V. “Predicting Time Series with Support Vector Machines”, Proc. of the International Conference on Artificial Neural Networks, ICANN'97, pp. 999-1004. Springer, 1997.

[Nak97] Nakanishi, K. - Takayama, H. "Mean-field theory for a spin-glass model of neural networks: TAP free energy and the paramagnetic to spin-glass transition", Journal of Physics A: Mathematical and General, Vol. 30. pp. 8085-8094. 1997.

[Nar83] Narayan, S. S. - Peterson, A. M. - Narasimha, M. J. „Transform domain LMS algorithm”, IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. 31. pp. 631-639. 1983.

[Nar89b] Narendra, K. S. - Thathachar, M. A. L. "Learning Automata: An Introduction", Prentice Hall, Englewood Cliffs, N. J. 1989.

[Nar90] Narendra, K. S. - Parthasarathy, K. "Identification and Control of Dynamical Systems Using Neural Networks", IEEE Trans. on Neural Networks, Vol. 1. pp. 4-27. 1990.

[Nar91] Narendra, K. S. - Parthasarathy, K. "Identification and Control of Dynamic Systems Using Neural Networks", IEEE Trans. on Neural Networks, Vol. 2. pp. 252-262. 1991.

[Nar05] Narendra, K. S. - Annaswamy, A. M. ”Stable Adaptive Systems”, Dover Publications, 2005.

[Ner93] Nerrand, O. - Roussel-Ragot, P. - Personnaz, L. - Dreyfus, G. "Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms", Neural Computation, Vol. 5. pp. 165-199. 1993.

[Ngu89] Nguyen, D. - Widrow, B. “The Truck Backer-Upper: An Example of Self-Learning” Proc. of the IEEE International Joint Conference on Neural Networks, IJCNN'89, Vol. II, pp. II-357-II-364, IEEE, 1989.

[Ngu90] Nguyen, D. - Widrow, B. „Improving the Learning Speed of 2-Layer Neural Networks by Choosing Initial Values of the Adaptive Weights”, Proc. of the Intnl. Joint Conference on Neural Networks, Vol. III. pp. 21-26. 1990.

[Nil65] Nilsson, N. J. "Learning Machines", McGraw-Hill, New York, 1965.

[Niy98] Niyogi, P. - Girosi, F. - Poggio, T. “Incorporating Prior Information in Machine Learning by Creating Virtual Examples” Proc. of the IEEE. Vol. 86. No. 11. pp. 2196-2209. 1998.

[Nør00] Nørgaard, M. - Ravn, O. - Poulsen, N. K. - Hansen, L. K. “Neural Networks for Modelling and Control of Dynamic Systems”, Springer-Verlag, London, 2000.

[Oja82] Oja, E. "A Simplified Neuron Model as a Principal Component Analyzer", Journal of Mathematical Biology, Vol. 15. pp. 267-273. 1982.

[Oja83] Oja, E. “Subspace Methods for Pattern Recognition”, Research Studies Press, Letchworth, Hertfordshire, England, 1983.

[Opi95] Opitz, D. W. "An Anytime Approach to Connectionist Theory Refinement: Refining the Topologies of Knowledge-Based Neural Networks", Ph.D. Thesis, University of Wisconsin, Madison, USA, 1995.

[Osu99] Osuna, E. - Girosi, F. “Reducing run-time complexity in SVMs”, in: B. Schölkopf, C. Burges, A. J. Smola (eds.) Advances in Kernel Methods − Support Vector Learning. MIT Press, Cambridge, MA. pp. 271-284. 1999.

[Par91] Park, J. - Sandberg, I. W. „Universal Approximation using Radial Basis Function Networks” Neural Computation, Vol. 3. pp. 246-257, 1991.

[Par92] Parks, P. C. - Militzer, J. "A Comparison of Five Algorithms for the Training of CMAC Memories for Learning Control Systems", Automatica, Vol. 28. No. 5. pp. 1027-1035. 1992.

[Par93] Park, J. - Sandberg, I. W. "Approximation and Radial-Basis-Function Networks", Neural Computation, Vol 5. No. 2. pp. 305-316. 1993.

[Pat98] Pataki, B. - Horváth, G. - Strausz, Gy. "Effects of Database Characteristics on the Neural Modeling of an Industrial Process", Proc. of the International ISCS/IFAC Symposium on Neural Computation (NC'98). Vienna. pp. 834-840. 1998.

[Pat00] Pataki, B. - Horváth, G. - Strausz, Gy. - Talata, Zs. "Inverse Neural Modeling of a Linz-Donawitz Steel Converter" e & i Elektrotechnik und Informationstechnik, Vol. 117. No. 1. pp. 13-17. 2000.

[Pav05] Pavlovic, V. - Schonfeld, D. - Friedman, G. "Stochastic Noise Process Enhancement of Hopfield Neural Networks", IEEE Trans. on Circuits and Systems, Vol. 52. No. 4. pp. 213-217. 2005.

[Pel04] Pelckmans, K. - Espinoza, M. - De Brabanter, J. - Suykens, J. A. K. - De Moor, B. ”Primal-Dual Monotone Kernel Regression” Neural Processing Letters, Vol. 4. No. 6. pp. 17-31. 2004.

[Pen55] Penrose, R. "A Generalized Inverse for Matrices" Proc. of the Cambridge Philosophical Society, Vol. 51. pp. 406-413. 1955.

[Pet72] Peterson, W. W. – Weldon, E. J. “Error-correcting Codes”, second edition, MIT Press, Cambridge, MA. 1972.

[Pie99] Pietruschka, U. - Brause, R. „Using Growing RBF-Nets in Rubber Industry Process Control”, Neural Computing & Applications, No. 8, pp. 95-105. 1999.

[Pin01] Pintelon, R. - Schoukens, J. “System Identification, The Frequency Domain Approach”, MIT Press. 2001.

[Pla86] Plaut, D. C. - Nowlan, S. J. - Hinton, G. E. "Experiments on Learning with Back-propagation", Technical Report, CMU-CS-86-126. Carnegie-Mellon University, 1986.

[Pla99] Platt, J. C. “Fast Training of Support Vector Machines using Sequential Minimal Optimization”, in: B. Schölkopf, C. Burges, A. J. Smola (eds.) Advances in Kernel Methods − Support Vector Learning. MIT Press, Cambridge, MA. pp. 185-208. 1999.

[Plu93] Plumbley, M. “A Hebbian/anti-Hebbian Network which Optimizes Information Capacity by Orthonormalizing the Principal Subspace” Proc. IEE Conference on Artificial Neural Networks, Brighton, UK. pp. 86-90. 1993.

[Pog90] Poggio, T. - Girosi, F. „Networks for approximation and Learning”, Proc. of the IEEE, Vol. 78. pp. 1481-1497. 1990.

[Pom91] Pomerleau, D. A. "Efficient Training of Artificial Neural Networks for Autonomous Navigation", Neural Computation, Vol. 3. No. 1. pp. 88-97. 1991.

[Pra96] Prabhu, S. M. - Garg, D. P. ”Artificial Neural Network Based Robot Control: An Overview” Journal of Intelligent and Robotic Systems, Vol. 15, No. 4, pp. 333-365. 1996.

[Pre02] Press, W. H. - Teukolsky, S. A. - Vetterling, W. T. - Flannery, B. P. “Numerical Recipes in C”, Cambridge University Press, Books On-Line, http://www.nr.com, 2002.

[Rad93] Radcliffe, N. "Genetic Set Recombination and its Application to Neural Network Topology Optimisation", Neural Computing & Applications, Vol. 1. No. 1. pp. 67-90. 1993.

[Ral04] Ralaivola, L. - d’Alché-Buc, F. ”Dynamical Modeling with Kernels for Nonlinear Time Series Prediction” Advances in Neural Information Processing Systems, S. Thrun, L. Saul, B. Schölkopf (Eds.) MIT Press, MA, 2004.

[Ris78] Rissanen, J. “Modelling by Shortest Data Description”, Automatica, Vol. 14. pp. 465-471, 1978.

[Riv94] Rivals, I. - Cannas, D. - Personnaz, L. - Dreyfus, G. “Modelling and Control of Mobile Robots and Intelligent Vehicles by Neural Networks”, Proc. of the IEEE Conference on Intelligent Vehicles, pp. 137-142. 1994.

[Rob91] Robinson, T. - Fallside, F. "A recurrent error propagation network speech recognition system," Computer Speech & Language, Vol. 5. No. 3. pp. 259-274. 1991.

[Ros58] Rosenblatt, F. "The Perceptron: A Probabilistic Model for Information Storage and Organization of the Brain", Psycol. Rev., Vol. 65. pp. 386-408. 1958.

[Róz91] Rózsa P. „Lineáris algebra és alkalmazásai”, 3. átdolgozott kiadás, Tankönyvkiadó, Budapest, 1991.

[Rum86] Rumelhart, D. E. - Hinton, G. E. - Williams, R. J. "Learning Internal Representations by Error Propagation", in Rumelhart, D. E. - McClelland, J. L. (Eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1. MIT Press. pp. 318-362. 1986.

[Rus05] Russell, S. - Norvig, P. ”Mesterséges intelligencia modern megközelítésben”, Hungarian translation, Panem, 2005.

[San89] Sanger, T. "Optimal Unsupervised Learning in a Single-layer Linear Feedforward Neural Network", Neural Networks, Vol. 2. No. 6. pp. 459-473. 1989.

[San93] Sanger, T. "Relation Between the APEX Algorithm and GHA", unpublished manuscript, distributed via FTP, 1993.

[Sca92] Scalero, R. S. - Tepedelenlioglu, N. "A Fast New Algorithm for Training Feedforward Neural Networks", IEEE Trans. on Signal Processing, Vol. 40. No. 1. pp. 202-210. 1992.

[Sca98] Scarselli, F. - Tsoi, A. H. "Universal Approximation Using Feedforward Neural Networks: A Survey of Some Existing Methods and Some New Results", Neural Networks, Vol. 11. No. 1. pp.15-37. 1998.

[Sch96a] Schölkopf, B. - Sung, K. - Burges, C. - Girosi, F. - Niyogi, P. - Poggio, T. - Vapnik, V. ”Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers”, AIM-1599. MIT. 1996. ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-1599.pdf

[Sch96b] Schölkopf, B. - Burges, C. - Vapnik, V. “Incorporating Invariances in Support Vector Learning Machines” in: C. von der Malsburg, W. von Seelen, J. C. Vorbrüggen, B. Sendhoff (eds): Artificial Neural Networks - ICANN’96, Springer Lecture Notes in Computer Science, Vol. 1112, Berlin, pp. 47-52. 1996.

[Sch96c] Schölkopf, B. - Smola, A. - Müller, K.-R. “Nonlinear Component Analysis as a Kernel Eigenvalue Problem”, Technical Report No. 44. Max-Planck-Institut für biologische Kybernetik. 1996.

[Sch97] Schölkopf, B. “Support Vector Learning”, R. Oldenburg Verlag, Munich, 1997.

[Sch99] Schölkopf, B. - Burges, C. - Smola, A. “Advances in Kernel Methods − Support Vector Learning”, MIT Press, MA. USA. 1999.

[Sch02] Schölkopf, B. - Smola, A. “Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond”, MIT Press, MA. USA. 2002.

[Scha90] Schapire, R. E. „The Strength of Weak Learnability”, Machine Learning, Vol. 5. pp. 197-227. 1990.

[Scha99] Schapire, R. E., „A Brief Introduction to Boosting”, Proc. of the Sixteenth International Joint Conference on Artificial Intelligence, (IJCAI-99) Vol. 2. pp. 1401-1406, 1999.

[Scha02] Schapire, R. E., ”The Boosting Approach to Machine Learning: An Overview”, MSRI Workshop on Nonlinear Estimation and Classification, Berkeley, CA, 2002.

[Sej87] Sejnowski, T. J. - Rosenberg, C. R. "Parallel Networks that Learn to Pronounce English Text", Complex Systems, Vol. 1. pp. 145-168. 1987.

[Šer96] Šerbedžija, N. B. "Simulating Artificial Neural Networks on Parallel Architectures", IEEE Computer, March, pp. 56-63. 1996.

[Sha92] Shah, S. - Palmieri, F. - Datum, M. "Optimal Filtering Algorithms for Fast Learning in Feedforward Neural Networks", Neural Networks, Vol. 5. pp. 779-787. 1992.

[Sha02] Shawe-Taylor, J. - Cristianini, N. “On the Generalization of Soft Margin Algorithms”, IEEE Trans. on Information Theory, Vol. 48. pp. 2721-2735. 2002.

[Sim00] Simoes, M. - Furukawa, C. - Mafra, A. - Adamowski, J. C. „A Novel Competitive Learning Neural Network Based Acoustic Transmission System for Oil-Well Monitoring”, IEEE Transactions on Industry Applications, Vol. 36, No. 2, pp. 484-491. 2000.

[Sjö95] Sjöberg, J. - Zhang, Q. - Ljung, L. - Benveniste, A. - Deylon, B. - Glorennec, P.-Y. - Hjalmarsson, H. - Juditsky, A. ”Nonlinear Black-Box Modeling in System Identification: a Unified Overview” Technical Report, Linköping University, No: LiTH-ISY-R-1742, 1995.

[Sma06] Van der Smagt, P. - Omidvar, O. M. (eds), ”Neural Systems for Robotics” Academic Press, 2006.

[Sol81] Solis, F. J. - Wets, R. J-B. "Minimization by Random Search Techniques", Mathematics of Operations Research, Vol. 6. pp. 19-30. 1981.

[Sol96] Sollich, P. - Krogh, A. ”Learning with ensembles: How overfitting can be useful”, Advances in Neural Information Processing Systems, Vol. 8, pp. 190-196, The MIT Press, 1996.

[Spr65] Sprecher, D. A. „On the Structure of Continuous Functions of Several Variables”, Transactions of the American Mathematical Society, Vol. 115, pp. 340-355. 1965.

[Spe93] Sperduti, A. - Starita, A. "Speed Up Learning and Network Optimization With Extended Back Propagation", Neural Networks, Vol. 6. pp. 365-383. 1993.

[Str98] Strausz, Gy. - Horváth, G. - Pataki, B. „Experiences from the results of neural modelling of an industrial process” Proc. of Engineering Application of Neural Networks, EANN'98, Gibraltar, pp. 213-220. 1998.

[Sto78] Stone, M. "Cross-Validation: A Review", Mathematische Operationsforschung und Statistik, Vol. 9. pp. 127-140. 1978.

[Su03] Su, S. F. - Tao, T. - Hung, T. H. “Credit Assigned CMAC and Its Application to Online Learning Robot Controllers,” IEEE Trans. on Systems, Man, and Cybernetics - Part B. Vol. 33. pp. 202-213. 2003.

[Sum99] Sum, J. - Leung, C. - Young, G. - Kay, W. „On the Kalman-filtering Method in Neural Network Training and Pruning” IEEE Trans. on Neural Networks, Vol. 10, pp. 161-166. 1999.

[Sun00] Sun, Y. "Hopfield Neural Network Based Algorithms for Image Restoration and Reconstruction", IEEE Trans. on Signal Processing, Vol. 48, No. 7, pp. 2105-2131. July 2000.

[Sun05] Sun, Q. - DeJong, G. “Explanation-Augmented SVM: an Approach to Incorporating Knowledge into SVM Learning”, Proc. of the 22nd International Conference on Machine Learning, Bonn, Germany. pp. 864-871. 2005.

[Sut98] Sutton, R. S., - Barto, A. G. "Reinforcement Learning: An Introduction", Bradford Book, MIT Press, Cambridge, 1998.

[Suy99] Suykens, J. - De Moor, B. - Vandewalle, G. E. “The development of the time-delay neural network architecture for speech recognition” Technical Report, CMU-CS-88-152, Carnegie-Mellon University, Pittsburgh, PA. 1999.

[Suy00] Suykens, J. A. K. - Lukas, L. - Vandewalle, J. “Sparse least squares support vector machine classifiers”, ESANN'2000 European Symposium on Artificial Neural Networks, pp. 37-42. 2000.

[Suy02a] Suykens, J. A. K. - Van Gestel, T. - De Brabanter, J. - De Moor, B. - Vandewalle, J. “Least Squares Support Vector Machines”, World Scientific, 2002.

[Suy02b] Suykens, J. A. K. - De Brabanter, J. - Lukas, L. - Vandewalle, J. “Weighted least squares support vector machines: robustness and sparse approximation”, Neurocomputing, Vol. 48. No. 1-4. pp. 85-105. 2002.

[Szők68] Szőkefalvi-Nagy Béla: “Valós függvények és függvénysorok”, Tankönyvkiadó, Budapest, 1968.

[Tak93] Takahashi, Y. „Generalization and Approximation Capabilities of Multilayer Networks”, Neural Computation, Vol. 5. pp. 132-139. 1993.

[Tak03] Takács G. ”Irányítószám-felismerő rendszer”, TDK dolgozat, BME VIK. 2003.

[Tik77] Tikhonov, A. N. - Arsenin, V. Y. "Solutions of Ill-posed Problems", Washington, DC: W. H. Winston, 1977.

[Tow91] Towell, G. G. "Symbolic Knowledge and Neural Networks: Insertion, Refinement and Extraction", Ph.D. Thesis, University of Wisconsin, Madison, USA, 1991.

[Val84] Valiant, L. G. „A theory of the learnable” Communications of the ACM, Vol. 27. pp. 1134-1142, 1984.

[Val03] Valyon, J. - Horváth, G. „A generalized LS-SVM”, SYSID'2003, Rotterdam, pp. 827-832. 2003.

[Val04] Valyon, J. - Horváth, G., „A Sparse Least Squares Support Vector Machine Classifier”, Proc. of the International Joint Conference on Neural Networks, IJCNN 2004, pp. 543-548. 2004.

[Van92] Van Ooyen, A. "Improving the Convergence of the Back Propagation Algorithm", Neural Networks, Vol. 5. pp. 465-471. 1992.

[Van00] Van Gorp, J. - Schoukens, J. - Pintelon, R. „Learning Neural Networks with Noisy Inputs Using the Errors-in-Variables Approach”, IEEE Trans. on Neural Networks, Vol. 11. No. 2. pp. 402-414. 2000.

[Vap79] Vapnik, W. N. - Tschervonenkis, A. Ja. ”Theorie der Zeichenerkennung”, Akademie-Verlag, Berlin, 1979.

[Vap91] Vapnik, V. - Chervonenkis, A. Ya. „The Necessary and Sufficient Conditions for Consistency of the Method of Empirical Risk Minimization”, Pattern Recognition and Image Analysis, Vol. 1. pp. 284-305, 1991.

[Vap95] Vapnik, V. N. “The Nature of Statistical Learning Theory”, Springer, New York, 1995.

[Vap98] Vapnik, V. N. “Statistical Learning Theory”, Wiley-Interscience, 1998.

[Vap99] Vapnik, V. N. "An overview of statistical learning theory" IEEE Trans. on Neural Networks, Vol. 10. No. 5. pp. 988-1000. 1999.

[Vee95] Veelenturf, L. P. J. "Analysis and Application of Artificial Neural Networks", Prentice Hall, London, 1995.

[Vid93] Vidyasagar, M. "Convergence of Higher-Order Neural Networks With Modified Updating", Proc. of the Intntl. Conference on Neural Networks, Vol.3. pp. 1379-1384. 1993.

[Vin00] Vincent, P. - Bengio, Y. ”A Neural Support Vector Network Architecture with Adaptive Kernels”, Proc. of the Intnl. Joint Conference on Neural Networks, IJCNN 2000. pp. 187-192. 2000.

[Vog88] Vogl, T. P. - Mangis, J. K. - Rigler, A. K. - Zink, W. T. - Alkon, D. L. "Accelerating the Convergence of the Back-propagation Method", Biological Cybernetics, Vol. 59. pp. 257-263. 1988.

[Wai89] Waibel, A. "Modular construction of time-delay neural networks for speech recognition", Neural Computation, Vol. 1. No. 1. pp. 39-46. 1989.

[Wal91] Wallace, G. K. “The JPEG Still Picture Compression Standard”, Communications of the ACM, Vol. 34, No. 4, pp. 30-44. 1991.

[Wan90] Wan, E. A. "Temporal Backpropagation for FIR Neural Networks", Proc. of the Intnl. Joint Conference on Neural Networks, IJCNN 1990, Vol. I. pp. 575-580. 1990.

[Wan93] Wang, Y. "Analog CMOS Implementation of Backward Error Propagation", Proc. of the Intnl. Joint Conference on Neural Networks, IJCNN 1993, pp. 701-706. 1993.

[Wan94] Wan, E. A. "Time Series Prediction by Using a Connectionist Network with Internal Delay Lines", Time Series Prediction: Forecasting the Future and Understanding the Past, (Eds: Weigend, A. S. - Gershenfeld, N. A.) pp. 195-217. Addison Wesley, 1994.

[Wan96a] Wang, Z. Q. - Schiano J. L. - Ginsberg, M. “Hash Coding in CMAC Neural Networks,” Proc. of the IEEE International Conference on Neural Networks, Washington, USA. vol. 3. pp. 1698-1703, 1996.

[Wan96b] Wang, L-Y. - Karhunen, J. “A Unified Neural Bigradient Algorithm for Robust PCA and MCA”, International Journal of Neural Systems, Vol. 7. No. 1. pp. 53-67. 1996.

[Wei94] Weigend, A. S. - Gershenfeld, N. A. "Forecasting the Future and Understanding the Past" Vol.15. Santa Fe Institute Studies in the Science of Complexity, Reading, MA. Addison-Wesley, 1994.

[Wer74] Werbos, P. J. "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences", Ph.D. Thesis, Harvard University, Cambridge, MA. 1974.

[Wer93] Werntges, H. W. “Partitions of Unity Improve Neural Function Approximations”, Proc. of the IEEE International Conference on Neural Networks, San Francisco, USA. Vol. II. pp. 914-918, 1993.

[Wer00] Wermter, S. - Sun, R. (eds.), “Hybrid Neural Systems”, Lecture Notes in Artificial Intelligence, LNAI 1778. Springer, 2000.

[Wid60] Widrow, B. - Hoff, M. E. "Adaptive Switching Circuits" IRE WESCON Convention Record, pp. 96-104. 1960.

[Wid85] Widrow, B. - Stearns, S. D. "Adaptive Signal Processing", Prentice-Hall, Englewood Cliffs, N. J. 1985.

[Wid90b] Widrow, B. – Gluck, M. A. “Adaptive Neural Networks”, Annual International Course on Neural Networks, London, 1990.

[Wil89] Williams, R. J. - Zipser, D. "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks", Neural Computation, Vol. 1. pp. 270-280. 1989.

[Won93] Wong, Yiu-fai "CMAC Learning is Governed by a Single Parameter", Proc. of the IEEE International Conference on Neural Networks, IJCNN 1993. pp. 1439-1443. 1993.

[Wu04] Wu, X. - Srihari, R. “Incorporating Prior Knowledge with Weighted Margin Support Vector Machines”, Proc. of the Tenth International Conference on Knowledge Discovery and Data Mining, KDD’04, Seattle, Washington, USA, pp. 326-333. 2004.

[Zar00] Zarate, L. E. "A Model for the Simulation of a Cold Rolling Mill, Using Neural Networks and Sensitivity Factors", VI. Brazilian Symposium on Neural Networks (SBRN'00), pp. 185-190. 2000.

[Zee97] Zeevi, A. - Meir, R. - Adler, R.”Time series prediction using mixtures of experts”, Advances in Neural Information Processing Systems 9, pp. 309-315, MIT Press, 1997.

[Zha92] Zhang, Q. - Benveniste, A. „Wavelet Networks”, IEEE Trans. on Neural Networks, Vol. 3. pp. 889-898, 1992.

[Zha97] Zhang, Q. „Using Wavelet Networks in Nonparametric Estimation”, IEEE Trans. on Neural Networks, Vol. 8. pp. 227-236, 1997.

[Zha05] Zhang, H. – Wu, Y. – Peng, Q. "Image Restoration Using Hopfield Neural Network Based on Total Variational Model", Advances in Neural Networks – ISNN 2005, Lecture Notes in Computer Science, Springer-Verlag, Berlin, pp. 735-740, 2005.

[Zho97] Zhong, L. - Zhongming Z. - Chongguang, Z. “The Unfavorable Effects of Hash Coding on CMAC Convergence and Compensatory Measure,” IEEE International Conference on Intelligent Processing Systems, Beijing, China, pp. 419-422, 1997.

[Xio04] Xiong, H. - Swamy, M. N. S. - Ahmad, M. O. ”Learning with the Optimized Data-Dependent Kernel” Proc. of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’04), pp. 95-98. 2004.

[Xio05] Xiong, H. - Swamy, M. N. S. - Ahmad, M. O. ”Optimizing the Kernel in the Empirical Feature Space”, IEEE Trans. on Neural Networks, Vol. 16. No. 2. pp. 460-474, 2005.

[Yan00] Yang, S. X. - Meng, M. Q.-H. "An Efficient Neural Network Approach to Dynamic Robot Motion Planning", Neural Networks, Vol. 13. No. 2. pp. 143-148. 2000.

[Yu95] Yu, X. H. - Chen, G. A. - Cheng, S. X. "Dynamic Learning Rate Optimization of the Backpropagation Algorithm", IEEE Trans. on Neural Networks, Vol. 6. No. 3. pp. 669-677. 1995.

[Yu97] Yu, X. H. - Chen, G. A. "Efficient Backpropagation Learning Using Optimal Learning Rate and Momentum", Neural Networks, Vol. 10. No. 3. pp. 517-527. 1997.

[Yu04] Yu, T. - Jan, T. - Debenham, J. - Simoff, S. “Incorporating Prior Domain Knowledge in Machine Learning: A Review”, Proc. of the IEEE Intntl. Conference on Advances in Intelligent Systems Theory and Applications, Luxembourg, 2004.