Deterministic Markov Nash equilibria for potential discrete-time stochastic games

Alejandra Fonseca-Morales (2022)

Kybernetika

In this paper, we study the problem of finding deterministic (also known as feedback or closed-loop) Markov Nash equilibria for a class of discrete-time stochastic games. In order to establish our results, we develop a potential game approach based on the dynamic programming technique. The identified potential stochastic games have Borel state and action spaces and possibly unbounded nondifferentiable cost-per-stage functions. In particular, the team (or coordination) stochastic games and the stochastic...
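For context on the potential game approach mentioned in this abstract, here is a minimal sketch of the exact-potential condition in the sense of Monderer and Shapley, written for stage costs; the symbols c_i (cost of player i), P (the potential), x (state), and the action profiles (a_i, a_{-i}) are illustrative notation, not taken from the paper:

\[
c_i(x, a_i, a_{-i}) - c_i(x, b_i, a_{-i}) = P(x, a_i, a_{-i}) - P(x, b_i, a_{-i})
\quad \text{for all players } i, \text{ states } x, \text{ and actions } a_i,\, b_i,\, a_{-i}.
\]

When such a potential exists, minimizing P reduces the search for a Nash equilibrium to a single optimal control problem, which is what makes a dynamic programming treatment natural.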

Dynamic Programming Principle for tug-of-war games with noise

Juan J. Manfredi, Mikko Parviainen, Julio D. Rossi (2012)

ESAIM: Control, Optimisation and Calculus of Variations

We consider a two-player zero-sum game in a bounded open domain Ω described as follows: at a point x ∈ Ω, Players I and II play an ε-step tug-of-war game with probability α, and with probability β (α + β = 1), a random point in the ball of radius ε centered at x is chosen. Once the game position reaches the boundary, Player II pays Player I the amount given by a fixed payoff function F. We give a detailed proof of the fact that the value functions of this game satisfy the Dynamic Programming Principle...
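For orientation, the Dynamic Programming Principle referred to above can be expected to take roughly the following form, where u_ε denotes a value function of the ε-step game and B_ε(x) is the ball of radius ε centered at x; the precise statement (including how the boundary payoff F is imposed outside Ω) is as in the paper:

\[
u_\varepsilon(x) = \frac{\alpha}{2}\left( \sup_{y \in B_\varepsilon(x)} u_\varepsilon(y) + \inf_{y \in B_\varepsilon(x)} u_\varepsilon(y) \right)
+ \frac{\beta}{|B_\varepsilon(x)|} \int_{B_\varepsilon(x)} u_\varepsilon(y)\, dy, \qquad x \in \Omega.
\]

The three terms mirror the moves available at x: Player I pulling (sup) and Player II pulling (inf), each chosen with probability α/2, and the noise step, which replaces x by a uniformly random point of B_ε(x) with probability β.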
