Displaying similar documents to “An optimal strong equilibrium solution for cooperative multi-leader-follower Stackelberg Markov chains games”

Nash ϵ-equilibria for stochastic games with total reward functions: an approach through Markov decision processes

Francisco J. González-Padilla, Raúl Montes-de-Oca (2019)

Kybernetika

Similarity:

The main objective of this paper is to find structural conditions under which a stochastic game between two players with total reward functions has an ϵ-equilibrium. To reach this goal, results from Markov decision processes are used to find ϵ-optimal strategies for each player, and then a best-response correspondence together with a more general version of Kakutani’s Fixed Point Theorem yields the ϵ-equilibrium mentioned. Moreover, two examples to illustrate the theory developed...

Tangential Markov inequality in L^p norms

Agnieszka Kowalska (2015)

Banach Center Publications

Similarity:

In 1889 A. Markov proved that for every polynomial p in one variable the inequality ||p'||_{[-1,1]} ≤ (deg p)² ||p||_{[-1,1]} is true. Moreover, the exponent 2 in this inequality is the best possible one. A tangential Markov inequality is a generalization of the Markov inequality to tangential derivatives of certain sets in higher-dimensional Euclidean spaces. We give some motivational examples of sets that admit the tangential Markov inequality with the sharp exponent. The main theorems show that the results on certain arcs...

Empirical approximation in Markov games under unbounded payoff: discounted and average criteria

Fernando Luque-Vásquez, J. Adolfo Minjárez-Sosa (2017)

Kybernetika

Similarity:

This work deals with a class of discrete-time zero-sum Markov games whose state process x_t evolves according to the equation x_{t+1} = F(x_t, a_t, b_t, ξ_t), where a_t and b_t represent the actions of players 1 and 2, respectively, and {ξ_t} is a sequence of independent and identically distributed random variables with unknown distribution θ. Assuming a possibly unbounded payoff, and using the empirical distribution to estimate θ, we introduce approximation schemes for the value of the game as well as for optimal strategies considering...
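As a generic illustration of the estimation step mentioned in this abstract (a hedged sketch only — the paper's approximation schemes for game values and strategies are more involved), the unknown noise law θ can be replaced by the empirical distribution of observed disturbances ξ_t; the exponential noise below is purely a hypothetical example:

```python
import numpy as np

def empirical_distribution(samples):
    """Return the empirical CDF built from observed samples.

    Illustrative only: a stand-in for estimating the unknown
    distribution theta of the disturbances xi_t from data.
    """
    data = np.sort(np.asarray(samples, dtype=float))
    n = len(data)

    def cdf(x):
        # Fraction of observed disturbances <= x.
        return np.searchsorted(data, x, side="right") / n

    return cdf

# Hypothetical disturbance sequence xi_t with unknown law theta
# (here secretly exponential, just to have something to estimate).
rng = np.random.default_rng(0)
xi = rng.exponential(scale=2.0, size=10_000)
theta_n = empirical_distribution(xi)
```

By the Glivenko–Cantelli theorem the empirical CDF converges uniformly to θ, which is what makes plug-in approximation schemes of this kind consistent.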

Applications of limited information strategies in Menger's game

Steven Clontz (2017)

Commentationes Mathematicae Universitatis Carolinae

Similarity:

As shown by Telgársky and Scheepers, winning strategies in the Menger game characterize σ-compactness amongst metrizable spaces. This is improved by showing that winning Markov strategies in the Menger game characterize σ-compactness amongst regular spaces, and that winning strategies may be improved to winning Markov strategies in second-countable spaces. An investigation of 2-Markov strategies introduces a new topological property between σ-compact and Menger spaces.

The Nagaev-Guivarc’h method via the Keller-Liverani theorem

Loïc Hervé, Françoise Pène (2010)

Bulletin de la Société Mathématique de France

Similarity:

The Nagaev-Guivarc’h method, via the perturbation operator theorem of Keller and Liverani, has been exploited in recent papers to establish limit theorems for unbounded functionals of strongly ergodic Markov chains. The main difficulty of this approach is to prove Taylor expansions for the dominating eigenvalue of the Fourier kernels. The paper outlines this method and extends it by stating a multidimensional local limit theorem, a one-dimensional Berry-Esseen theorem, a first-order...

Evaluating default priors with a generalization of Eaton’s Markov chain

Brian P. Shea, Galin L. Jones (2014)

Annales de l'I.H.P. Probabilités et statistiques

Similarity:

We consider evaluating improper priors in a formal Bayes setting according to the consequences of their use. Let 𝛷 be a class of functions on the parameter space and consider estimating elements of 𝛷 under quadratic loss. If the formal Bayes estimator of every function in 𝛷 is admissible, then the prior is strongly admissible with respect to 𝛷. Eaton’s method for establishing strong admissibility is based on studying the stability properties of a particular Markov chain associated with...

Distortion inequality for the Frobenius-Perron operator and some of its consequences in ergodic theory of Markov maps in ℝ^d

Piotr Bugiel (1998)

Annales Polonici Mathematici

Similarity:

Asymptotic properties of the sequences (a) {P_φ^j g}_{j=1}^∞ and (b) {j^{-1} ∑_{i=0}^{j-1} P_φ^i g}_{j=1}^∞, where P_φ: L¹ → L¹ is the Frobenius-Perron operator associated with a nonsingular Markov map defined on a σ-finite measure space, are studied for g ∈ G = {f ∈ L¹: f ≥ 0 and ||f|| = 1}. An operator-theoretic analogue of Rényi’s Condition is introduced. It is proved that under some additional assumptions this condition implies the L¹-convergence of the sequences (a) and (b) to a unique g₀ ∈ G. The general result is applied to some smooth Markov...

Markov's property for kth derivative

Mirosław Baran, Beata Milówka, Paweł Ozorka (2012)

Annales Polonici Mathematici

Similarity:

Consider the normed space (ℙ(ℂ^N), ||·||) of all polynomials of N complex variables, where ||·|| is a norm such that the mapping L_g: (ℙ(ℂ^N), ||·||) ∋ f ↦ gf ∈ (ℙ(ℂ^N), ||·||) is continuous, with g being a fixed polynomial. It is shown that the Markov type inequality ||∂P/∂z_j|| ≤ M (deg P)^m ||P||, j = 1,...,N, P ∈ ℙ(ℂ^N), with positive constants M and m, is equivalent to the inequality ||∂^N P/∂z₁...∂z_N|| ≤ M′ (deg P)^{m′} ||P||, P ∈ ℙ(ℂ^N), with some positive constants M′ and m′. A similar equivalence result is obtained for derivatives of a fixed order k ≥ 2, which can be more specifically formulated in the language of normed algebras....

On the central limit theorem for some birth and death processes

Tymoteusz Chojecki (2011)

Annales Universitatis Mariae Curie-Sklodowska, sectio A – Mathematica

Similarity:

Suppose that {X_n : n ≥ 0} is a stationary Markov chain and V is a certain function on the phase space of the chain, called an observable. We say that the observable satisfies the central limit theorem (CLT) if Y_N := N^{-1/2} ∑_{n=0}^{N} V(X_n) converges in law to a normal random variable as N → +∞. For a stationary Markov chain with an L² spectral gap the theorem holds for all V such that V(X₀) is centered and square integrable, see Gordin [7]. The purpose of this article is to characterize a family of observables V for which the CLT holds...
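For numerical intuition only (a hedged sketch, not the paper's characterization), the N^{-1/2} scaling of the CLT can be observed on a simple hypothetical two-state stationary chain with a spectral gap, using the centered observable V(x) = x − 1/2:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_chain(steps, p_flip=0.3):
    """Two-state (0/1) chain with symmetric flip probability p_flip.

    The stationary law is uniform on {0, 1}, so starting from a
    uniform draw keeps the chain stationary.
    """
    x = int(rng.integers(0, 2))
    path = np.empty(steps, dtype=int)
    for n in range(steps):
        path[n] = x
        if rng.random() < p_flip:
            x = 1 - x
    return path

def normalized_sum(path):
    """Y_N = N^(-1/2) * sum_n V(X_n) with centered V(x) = x - 1/2."""
    v = path - 0.5
    return v.sum() / np.sqrt(len(path))

# Replicate Y_N; for a chain with a spectral gap these values should
# look approximately centered Gaussian with finite variance.
samples = np.array([normalized_sum(simulate_chain(2_000)) for _ in range(500)])
```

Since E[V(X₀)] = 0 and V is bounded, Gordin-type conditions hold here, so the empirical distribution of `samples` approximates a centered normal law.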

The scaling limits of a heavy tailed Markov renewal process

Julien Sohier (2013)

Annales de l'I.H.P. Probabilités et statistiques

Similarity:

In this paper we consider heavy tailed Markov renewal processes and we prove that, suitably renormalised, they converge in law towards the α-stable regenerative set. We then apply these results to the strip wetting model, which is a random walk S constrained above a wall and rewarded or penalized when it hits the strip [0, ∞) × [0, a], where a is a given positive number. The convergence result that we establish allows us to characterize the scaling limit of this process at criticality.