This paper studies Markov stopping games with two players on a denumerable state space. At each decision time, player II has two actions: stop the game, paying a terminal reward to player I, or let the system continue its evolution. In the latter case, player I selects an action affecting the transitions and charges a running reward to player II. The performance of each pair of strategies is measured by the risk-sensitive total expected reward of player I. Under mild continuity and compactness...
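The abstract's risk-sensitive criterion is a standard object in risk-sensitive control: instead of the plain expectation of the total reward, one evaluates an exponential certainty equivalent. As a minimal sketch (not the paper's own code; the function name and the Monte Carlo framing are assumptions), such a criterion can be estimated from sampled total rewards like this:

```python
import math

def risk_sensitive_value(sample_rewards, lam):
    """Estimate the risk-sensitive certainty equivalent
    (1/lam) * log E[exp(lam * R)] from sampled total rewards R.

    lam != 0 is the risk-sensitivity parameter; as lam -> 0 the
    criterion recovers the ordinary expected reward. Computed via
    log-sum-exp for numerical stability.
    """
    n = len(sample_rewards)
    m = max(lam * r for r in sample_rewards)          # shift for stability
    s = sum(math.exp(lam * r - m) for r in sample_rewards)
    return (m + math.log(s / n)) / lam
```

For a constant reward the certainty equivalent equals that constant, which gives a quick sanity check on the estimator.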
We consider a two-player zero-sum game in a bounded open domain Ω described as follows: at a point x ∈ Ω, Players I and II play an ε-step tug-of-war game with probability α, and with probability β (α + β = 1) a random point in the ball of radius ε centered at x is chosen. Once the game position reaches the boundary, Player II pays Player I the amount given by a fixed payoff function F. We give a detailed proof of the fact that the value functions of this game satisfy the Dynamic Programming Principle...
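One round of the game described in this abstract mixes a tug-of-war move with a uniform noise step. A minimal simulation sketch of a single round, under assumed conventions (the function name, the fair coin deciding the mover, and the strategy interface are illustrative choices, not taken from the paper):

```python
import math
import random

def step(x, epsilon, alpha, strategy_I, strategy_II, dim=2):
    """One round of the epsilon-step game at position x.

    With probability alpha a tug-of-war step is played: a fair coin
    decides which player moves the token, and that player's strategy
    returns a point within distance epsilon of x. With probability
    beta = 1 - alpha, the token jumps to a uniformly random point of
    the ball of radius epsilon centered at x (rejection sampling).
    """
    if random.random() < alpha:
        mover = strategy_I if random.random() < 0.5 else strategy_II
        return mover(x, epsilon)
    while True:  # uniform point in the epsilon-ball around x
        y = [random.uniform(-epsilon, epsilon) for _ in range(dim)]
        if math.dist(y, [0.0] * dim) <= epsilon:
            return tuple(xi + yi for xi, yi in zip(x, y))
```

Iterating `step` until the position leaves Ω, then reading off F at the exit point, gives one sample of the payoff whose expectation (under optimal play) the value functions describe.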