Conrprop: an algorithm for the optimization of nonlinear functions with constraints
DOI: https://doi.org/10.17533/udea.redin.14944
Keywords: nonlinear optimization, constraints, backpropagation, rprop
Abstract
Resilient backpropagation (RPROP) is a powerful gradient-based optimization technique that has been widely used for training artificial neural networks; it maintains an individual step size (velocity) for each parameter of the model. Although the technique can solve unconstrained multivariate optimization problems, there are no references to its use in the operations research literature. This paper proposes a modification of resilient backpropagation that solves nonlinear optimization problems subject to general nonlinear constraints. The proposed algorithm was tested on six common benchmark problems; in all cases, the constrained resilient backpropagation algorithm found the optimal solution, and in some cases it found a better optimum than the one reported in the literature.
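The abstract does not reproduce the Conrprop update rule or its constraint-handling scheme, which are given in the full paper. As a rough illustration only, the sketch below combines the standard RPROP- per-parameter step-size adaptation with a generic quadratic exterior penalty, which is one common way to turn a constrained problem into an unconstrained one that a gradient-sign method can handle. The function names, the penalty weight mu, the hyperparameter values, and the example constraint are illustrative assumptions, not the authors' method or code.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def rprop_penalty(f, constraints, x0, mu=1e3, iters=500,
                  eta_plus=1.2, eta_minus=0.5,
                  delta0=0.1, delta_min=1e-6, delta_max=1.0):
    """RPROP- applied to a quadratic exterior-penalty objective (illustrative sketch).

    `constraints` is a list of functions g(x) interpreted as g(x) <= 0.
    """
    def penalized(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + mu * violation

    x = np.asarray(x0, dtype=float)
    delta = np.full_like(x, delta0)        # one step size per parameter
    prev_grad = np.zeros_like(x)

    for _ in range(iters):
        grad = numerical_gradient(penalized, x)
        sign_change = grad * prev_grad
        # Same gradient sign: grow the step. Sign flip: shrink it and skip that component.
        delta = np.where(sign_change > 0, np.minimum(delta * eta_plus, delta_max), delta)
        delta = np.where(sign_change < 0, np.maximum(delta * eta_minus, delta_min), delta)
        grad = np.where(sign_change < 0, 0.0, grad)   # RPROP- variant: no weight backtracking
        x = x - np.sign(grad) * delta
        prev_grad = grad
    return x

if __name__ == "__main__":
    # Rosenbrock objective with a single illustrative inequality constraint x0 + x1 <= 1.5.
    rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    g1 = lambda x: x[0] + x[1] - 1.5
    x_star = rprop_penalty(rosenbrock, [g1], x0=[-1.0, 2.0])
    print(x_star, rosenbrock(x_star))
```

The RPROP- variant is used here only because it is the simplest to write down; since the update depends only on the sign of each partial derivative, the penalty weight changes where the iterates settle but not the per-parameter step-size logic itself.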
License
Copyright (c) 2018 Revista Facultad de Ingeniería

Revista Facultad de Ingeniería, Universidad de Antioquia is licensed under the Creative Commons Attribution BY-NC-SA 4.0 license. https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en
The material published in the journal can be distributed, copied and exhibited by third parties if the respective credits are given to the journal. No commercial benefit can be obtained and derivative works must be under the same license terms as the original work.