Emotion learning: Solving a shortest path problem in an arbitrary deterministic environment in linear time with an emotional agent
International Journal of Applied Mathematics and Computer Science (2008)
- Volume: 18, Issue: 3, page 409-421
- ISSN: 1641-876X
Abstract
The paper presents an algorithm which solves the shortest path problem in an arbitrary deterministic environment with n states with an emotional agent in linear time. The algorithm originates from an algorithm which solves the same problem in exponential time; the agent architecture used for solving the problem is the NN-CAA architecture (neural network crossbar adaptive array). By implementing emotion learning, the linear-time algorithm is obtained and the agent architecture is modified. Excluding the initialization operations, the complexity of the algorithm in general does not depend on the number of states n, but only on the length of the shortest path; depending on the position of the goal state, it can be at most O(n). It can be concluded that the choice of the function which evaluates the emotional state of the agent plays a decisive role in solving the problem efficiently: that function should give as detailed information as possible about the consequences of the agent's actions, starting even from the initial state. In this way the function implements properties of human emotions.
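The full text is not reproduced in this record, so the sketch below is not the paper's linear-time algorithm. It is only a minimal, illustrative Python rendering of the crossbar adaptive array (CAA) idea the abstract builds on, in which an action's desirability in a state is learned from the emotional evaluation of its consequence state. The toy graph, the discount factor 0.9, and every name (TRANSITIONS, innate_emotion, run_episode) are assumptions made for this demonstration, and the backup is written as a value-iteration-style assignment rather than the CAA's original cumulative rule, to keep the demo stable.

# Illustrative CAA-style sketch, NOT Petruseva's linear-time algorithm:
# actions are reinforced by the emotional evaluation of their consequence
# states; all names and values here are assumptions for the demo.
import random

random.seed(0)

# Deterministic environment: (state, action) -> next state; state 5 is the goal.
TRANSITIONS = {
    (0, 'a'): 1, (0, 'b'): 2,
    (1, 'a'): 3, (1, 'b'): 0,
    (2, 'a'): 4, (2, 'b'): 0,
    (3, 'a'): 5, (3, 'b'): 1,
    (4, 'a'): 5, (4, 'b'): 2,
}
ACTIONS = ['a', 'b']
N_STATES, GOAL, START = 6, 5, 0

def innate_emotion(state):
    """Predefined feeling: pleasant at the goal, neutral elsewhere."""
    return 1.0 if state == GOAL else 0.0

# Crossbar memory w[a][s]; the agent evaluates a state by the most pleasant
# feeling available in it: v(s) = max_a w[a][s].
w = {a: {s: 0.0 for s in range(N_STATES)} for a in ACTIONS}

def v(state):
    return max(w[a][state] for a in ACTIONS)

def run_episode(limit=50):
    state = START
    for _ in range(limit):
        if state == GOAL:
            return
        # Greedy action choice with random tie-breaking (exploration).
        action = max(ACTIONS, key=lambda a: (w[a][state], random.random()))
        nxt = TRANSITIONS[(state, action)]
        # Backup: desirability of `action` in `state` follows the emotional
        # evaluation of the consequence state `nxt` (discounted by 0.9).
        w[action][state] = innate_emotion(nxt) + 0.9 * v(nxt)
        state = nxt

for _ in range(100):
    run_episode()

# Read off the greedy path learned so far (bounded in case learning is incomplete).
state, path = START, [START]
while state != GOAL and len(path) < N_STATES:
    state = TRANSITIONS[(state, max(ACTIONS, key=lambda a: w[a][state]))]
    path.append(state)
print(path)  # e.g. [0, 1, 3, 5], a shortest path in this toy graph

On this six-state toy graph, the greedy read-out after training recovers a shortest path from the start state to the goal; the abstract's point is that a suitably informative emotion-evaluation function lets such an agent do this with complexity bounded by the path length, at most O(n).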
How to cite
Silvana Petruseva. "Emotion learning: Solving a shortest path problem in an arbitrary deterministic environment in linear time with an emotional agent." International Journal of Applied Mathematics and Computer Science 18.3 (2008): 409-421. <http://eudml.org/doc/207896>.
@article{SilvanaPetruseva2008,
abstract = {The paper presents an algorithm which solves the shortest path problem in an arbitrary deterministic environment with n states with an emotional agent in linear time. The algorithm originates from an algorithm which solves the same problem in exponential time; the agent architecture used for solving the problem is the NN-CAA architecture (neural network crossbar adaptive array). By implementing emotion learning, the linear-time algorithm is obtained and the agent architecture is modified. Excluding the initialization operations, the complexity of the algorithm in general does not depend on the number of states n, but only on the length of the shortest path; depending on the position of the goal state, it can be at most O(n). It can be concluded that the choice of the function which evaluates the emotional state of the agent plays a decisive role in solving the problem efficiently: that function should give as detailed information as possible about the consequences of the agent's actions, starting even from the initial state. In this way the function implements properties of human emotions.},
author = {Silvana Petruseva},
journal = {International Journal of Applied Mathematics and Computer Science},
keywords = {emotional agent; complexity; consequence programming; CAA neural network; planning},
language = {eng},
number = {3},
pages = {409-421},
title = {Emotion learning: Solving a shortest path problem in an arbitrary deterministic environment in linear time with an emotional agent},
url = {http://eudml.org/doc/207896},
volume = {18},
year = {2008},
}
TY - JOUR
AU - Silvana Petruseva
TI - Emotion learning: Solving a shortest path problem in an arbitrary deterministic environment in linear time with an emotional agent
JO - International Journal of Applied Mathematics and Computer Science
PY - 2008
VL - 18
IS - 3
SP - 409
EP - 421
AB - The paper presents an algorithm which solves the shortest path problem in an arbitrary deterministic environment with n states with an emotional agent in linear time. The algorithm originates from an algorithm which solves the same problem in exponential time; the agent architecture used for solving the problem is the NN-CAA architecture (neural network crossbar adaptive array). By implementing emotion learning, the linear-time algorithm is obtained and the agent architecture is modified. Excluding the initialization operations, the complexity of the algorithm in general does not depend on the number of states n, but only on the length of the shortest path; depending on the position of the goal state, it can be at most O(n). It can be concluded that the choice of the function which evaluates the emotional state of the agent plays a decisive role in solving the problem efficiently: that function should give as detailed information as possible about the consequences of the agent's actions, starting even from the initial state. In this way the function implements properties of human emotions.
LA - eng
KW - emotional agent; complexity; consequence programming; CAA neural network; planning
UR - http://eudml.org/doc/207896
ER -
References
- Botelho L.M. and Coelho H. (1998). Adaptive agents: Emotion learning, Association for the Advancement of Artificial Intelligence, pp. 19-24.
- Bozinovski S. (1995). Consequence Driven Systems, Teaching, Learning and Self-Learning Agent, Gocmar Press, Bitola.
- Bozinovski S. (1982). A self-learning system using secondary reinforcement, in: R. Trappl (Ed.), Cybernetics and Systems Research, North-Holland Publishing Company, pp. 397-402.
- Bozinovski S. and Schoell P. (1999a). Emotions and hormones in learning. GMD Forschungszentrum Informationstechnik GmbH, Sankt Augustin.
- Bozinovski S. (1999b). Crossbar Adaptive Array: The first connectionist network that solved the delayed reinforcement learning problem, in: A. Dobnikar, N. Steele, D. Pearson, R. Albrecht (Eds.), Artificial Neural Nets and Genetic Algorithms, Springer, pp. 320-325.
- Bozinovski S. (2002). Motivation and emotion in anticipatory behaviour of consequence driven systems, Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems, Edinburgh, Scotland, pp. 100-119.
- Bozinovski S. (2003). Anticipation driven artificial personality: Building on Lewin and Loehlin, in: M. Butz, O. Sigaud, P. Gerard (Eds.), Anticipatory Behaviour in Adaptive Learning Systems, LNAI 2684, Springer-Verlag, Berlin/Heidelberg, pp. 133-150.
- Glaser J. (1963). General Psychopathology, Narodne Novine, Zagreb, (in Croatian).
- Bang-Jensen J. and Gutin G. (2001). Digraphs: Theory, Algorithms and Applications, Springer-Verlag, London. Zbl0958.05002
- Koenig S. and Simmons R. (1992). Complexity Analysis of Real-Time Reinforcement Learning Applied to Finding Shortest Paths in Deterministic Domains, Carnegie Mellon University, Pittsburgh.
- Peng J. and Williams R. (1993). Efficient learning and planning within the Dyna framework, Proceedings of the 2nd International Conference on Simulation of Adaptive Behaviour: From Animals to Animats, Hawaii, pp. 437-454.
- Petruseva S. and Bozinovski S. (2000). Consequence programming: Algorithm “at subgoal go back”. Mathematics Bulletin, Book 24 (L), pp. 141-152. Zbl1249.68320
- Petruseva S. (2006a). Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent, Yugoslav Journal of Operations Research, 16(2): 211-226. Zbl1265.68296
- Petruseva S. (2006b). Consequence programming: Solving a shortest path problem in polynomial time using emotional learning, International Journal of Pure and Applied Mathematics, 29(4): 491-520. Zbl1110.68140
- Sutton R. and Barto A. (1998). Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA.
- Whitehead S. (1991). A complexity analysis of cooperative mechanisms in reinforcement learning, Proceedings of AAAI, pp. 607-613.
- Whitehead S. (1992). Reinforcement learning for the adaptive control of perception and action, Ph.D. thesis, University of Rochester.
- Wittek G. (1995). Me, Me, Me, the Spider in the Web. The Law of Correspondence, and the Law of Projection, Verlag DAS WORT, GmbH, Marktheidenfeld-Altfeld.