# Modeling shortest path games with Petri nets: a Lyapunov based theory

International Journal of Applied Mathematics and Computer Science (2006)

- Volume: 16, Issue: 3, page 387-397
- ISSN: 1641-876X

## Abstract

In this paper we introduce a new modeling paradigm for shortest path games representation with Petri nets. Whereas previous works have restricted attention to tracking the net using Bellman's equation as a utility function, this work uses a Lyapunov-like function. In this sense, we change the traditional cost function by a trajectory-tracking function which is also an optimal cost-to-target function. This makes a significant difference in the conceptualization of the problem domain, allowing the replacement of the Nash equilibrium point by the Lyapunov equilibrium point in game theory. We show that the Lyapunov equilibrium point coincides with the Nash equilibrium point. As a consequence, all properties of equilibrium and stability are preserved in game theory. This is the most important contribution of this work. The potential of this approach remains in its formal proof simplicity for the existence of an equilibrium point.

## How to cite

Clempner, Julio. "Modeling shortest path games with Petri nets: a Lyapunov based theory." International Journal of Applied Mathematics and Computer Science 16.3 (2006): 387-397. <http://eudml.org/doc/207801>.

```bibtex
@article{Clempner2006,
  author   = {Clempner, Julio},
  title    = {Modeling shortest path games with Petri nets: a Lyapunov based theory},
  journal  = {International Journal of Applied Mathematics and Computer Science},
  volume   = {16},
  number   = {3},
  pages    = {387-397},
  year     = {2006},
  language = {eng},
  keywords = {Lyapunov equilibrium point; shortest path game; game theory; Bellman's equation; Nash equilibrium point; Lyapunov-like function; stability},
  url      = {http://eudml.org/doc/207801},
  abstract = {In this paper we introduce a new modeling paradigm for shortest path games representation with Petri nets. Whereas previous works have restricted attention to tracking the net using Bellman's equation as a utility function, this work uses a Lyapunov-like function. In this sense, we change the traditional cost function by a trajectory-tracking function which is also an optimal cost-to-target function. This makes a significant difference in the conceptualization of the problem domain, allowing the replacement of the Nash equilibrium point by the Lyapunov equilibrium point in game theory. We show that the Lyapunov equilibrium point coincides with the Nash equilibrium point. As a consequence, all properties of equilibrium and stability are preserved in game theory. This is the most important contribution of this work. The potential of this approach remains in its formal proof simplicity for the existence of an equilibrium point.},
}
```

```
TY  - JOUR
AU  - Clempner, Julio
TI  - Modeling shortest path games with Petri nets: a Lyapunov based theory
JO  - International Journal of Applied Mathematics and Computer Science
PY  - 2006
VL  - 16
IS  - 3
SP  - 387
EP  - 397
AB  - In this paper we introduce a new modeling paradigm for shortest path games representation with Petri nets. Whereas previous works have restricted attention to tracking the net using Bellman's equation as a utility function, this work uses a Lyapunov-like function. In this sense, we change the traditional cost function by a trajectory-tracking function which is also an optimal cost-to-target function. This makes a significant difference in the conceptualization of the problem domain, allowing the replacement of the Nash equilibrium point by the Lyapunov equilibrium point in game theory. We show that the Lyapunov equilibrium point coincides with the Nash equilibrium point. As a consequence, all properties of equilibrium and stability are preserved in game theory. This is the most important contribution of this work. The potential of this approach remains in its formal proof simplicity for the existence of an equilibrium point.
LA  - eng
KW  - Lyapunov equilibrium point; shortest path game; game theory; Bellman's equation; Nash equilibrium point; Lyapunov-like function; stability
UR  - http://eudml.org/doc/207801
ER  -
```

## References

- Axelrod R. (1984): The Evolution of Cooperation. - New York: Basic Books.
- Bellman R.E. (1957): Dynamic Programming. - Princeton: Princeton University Press. Zbl0077.13605
- Bertsekas D.P. and Shreve S.E. (1978): Stochastic Optimal Control: The Discrete Time Case. - New York: Academic Press. Zbl0471.93002
- Bertsekas D.P. (1987): Dynamic Programming: Deterministic and Stochastic Models. - Englewood Cliffs: Prentice-Hall. Zbl0649.93001
- Bertsekas D.P. and Tsitsiklis J.N. (1989): Parallel and Distributed Computation: Numerical Methods. - Englewood Cliffs: Prentice-Hall. Zbl0743.65107
- Bertsekas D.P. and Tsitsiklis J.N. (1991): An analysis of stochastic shortest path problems. - Math. Oper. Res., Vol. 16, No. 3, pp. 580-595. Zbl0751.90077
- Blackwell D. (1967): Positive dynamic programming. - Proc. 5th Berkeley Symp. Math. Statist. and Probability, Berkeley, California, Vol. 1, pp. 415-418.
- Clempner J. (2005): Colored decision process Petri nets: modeling, analysis and stability. - Int. J. Appl. Math. Comput. Sci., Vol. 15, No. 3, pp. 405-420. Zbl1169.93371
- Clempner J. (2006): Towards modeling the shortest path problem and games with Petri nets. - Proc. Doctoral Consortium at 27th Int. Conf. Application and Theory of Petri Nets and Other Models of Concurrency, Turku, Finland, pp. 1-12.
- Derman C. (1970): Finite State Markovian Decision Processes. - New York: Academic Press. Zbl0262.90001
- Dynkin E.B. (1963): The optimum choice of the instant for stopping a Markov process. - Soviet Math. Doklady, Vol. 150, pp. 238-240.
- Eaton J.H. and Zadeh L.A. (1962): Optimal pursuit strategies in discrete state probabilistic systems. - Trans. ASME Ser. D, J. Basic Eng., Vol. 84, pp. 23-29.
- Grigelionis R.I. and Shiryaev A.N. (1966): On Stefan's problem and optimal stopping rules for Markov processes. - Theory Prob. Applic., Vol. 11, pp. 541-558. Zbl0178.53303
- Hernandez-Lerma O. and Lasserre J.B. (1996): Discrete-Time Markov Control Processes: Basic Optimality Criteria. - Berlin: Springer. Zbl0840.93001
- Hernandez-Lerma O., Carrasco G. and Perez-Hernandez R. (1999): Markov control processes with the expected total cost criterion: optimality, stability and transient models. - Acta Applicandae Mathematicae, Vol. 59, No. 3, pp. 229-269. Zbl0964.93086
- Hernandez-Lerma O. and Lasserre J.B. (1999): Further Topics on Discrete-Time Markov Control Processes. - Berlin: Springer.
- Hinderer K. and Waldmann K.H. (2003): The critical discount factor for finite Markovian decision processes with an absorbing set. - Math. Meth. Oper. Res., Vol. 57, pp. 1-19. Zbl1023.90077
- Hinderer K. and Waldmann K.H. (2005): Algorithms for countable state Markov decision models with an absorbing set. - SIAM J. Contr. Optim., Vol. 43, pp. 2109-2131. Zbl1097.90067
- Howard R.A. (1960): Dynamic Programming and Markov Processes. - Cambridge, MA: MIT Press. Zbl0091.16001
- Kalman R.E. and Bertram J.E. (1960): Control system analysis and design via the 'Second Method' of Lyapunov. - J. Basic Eng., ASME, Vol. 82(D), pp. 371-393.
- Kuhn H.W. and Nasar S. (Eds.) (2002): The Essential John Nash. - Princeton: Princeton University Press.
- Kumar P.R. and Shiau T.H. (1981): Zero sum dynamic games, In: Control and Dynamic Games (C.T. Leondes, Ed.). - Academic Press, pp. 1345-1378. Zbl0549.90104
- Kushner H.J. and Chamberlain S.G. (1969): Finite state stochastic games: Existence theorems and computational procedures. - IEEE Trans. Automat. Contr., Vol. 14, No. 3. Zbl0182.20802
- Kushner H. (1971): Introduction to Stochastic Control. - New York: Holt, Rinehart and Winston. Zbl0293.93018
- Lakshmikantham V., Leela S. and Martynyuk A.A. (1990): Practical Stability of Nonlinear Systems. - Singapore: World Scientific. Zbl0753.34037
- Lakshmikantham V., Matrosov V.M. and Sivasundaram S. (1991): Vector Lyapunov Functions and Stability Analysis of Nonlinear Systems. - Dordrecht: Kluwer. Zbl0721.34054
- Mandl P. and Seneta E. (1969): The theory of non-negative matrices in a dynamic programming problem. - Austral. J. Stat., Vol. 11, pp. 85-96. Zbl0185.08003
- Nash J. (1951): Non-cooperative games. - Ann. Math., Vol. 54, pp. 287-295. Zbl0045.08202
- Nash J. (1996): Essays on Game Theory. - Cheltenham: Elgar.
- Pallu de la Barriere R. (1967): Optimal Control Theory. - Philadelphia: Saunders. Zbl0155.15203
- Patek S.D. (1997): Stochastic Shortest Path Games: Theory and Algorithms. - Ph.D. thesis, Dep. Electr. Eng. Comput. Sci., MIT, Cambridge, MA.
- Patek S.D. and Bertsekas D.P. (1999): Stochastic shortest path games. - SIAM J. Contr. Optim., Vol. 37, No. 3, pp. 804-824. Zbl0918.90148
- Patek S.D. (2001): On terminating Markov decision processes with a risk averse objective function. - Automatica, Vol. 37, No. 9, pp. 1379-1386. Zbl0995.93075
- Pliska S.R. (1978): On the transient case for Markov decision chains with general state space, In: Dynamic Programming and Its Applications (M.L. Puterman, Ed.). - New York: Springer, pp. 335-349.
- Puterman M.L. (1994): Markov Decision Processes: Discrete Stochastic Dynamic Programming. - New York: Wiley. Zbl0829.90134
- Rieder U. (1975): Bayesian dynamic programming. - Adv. Appl. Prob., Vol. 7, pp. 330-348. Zbl0316.90081
- Shapley L.S. (1953): Stochastic games. - Proc. National Acad. Sci., Mathematics, Vol. 39, pp. 1095-1100. Zbl0051.35805
- Shiryaev A.N. (1978): Optimal Stopping Problems. - New York: Springer. Zbl0391.60002
- Strauch R. (1966): Negative dynamic programming. - Ann. Math. Stat., Vol. 37, pp. 871-890. Zbl0144.43201
- Van der Wal J. (1981): Stochastic dynamic programming. - Math. Centre Tracts 139, Mathematisch Centrum, Amsterdam. Zbl0462.90055
- Veinott A.F., Jr. (1969): Discrete dynamic programming with sensitive discount optimality criteria. - Ann. Math. Stat., Vol. 40, No. 5, pp. 1635-1660. Zbl0183.49102
- Whittle P. (1983): Optimization over Time. - New York: Wiley, Vol. 2. Zbl0577.90046
