Displaying 101 – 120 of 132


Refined non-homogeneous Markovian models for a single-server type of software system with rejuvenation

Hiroyuki Okamura, S. Miyahara, T. Dohi (2002)

RAIRO - Operations Research - Recherche Opérationnelle

Long-running software systems are known to experience a phenomenon called software aging, in which the accumulation of errors during execution leads to performance degradation and eventually results in failure. To counteract this phenomenon, a proactive fault-management approach called software rejuvenation is particularly useful. It essentially involves gracefully terminating an application or a system and restarting it in a clean internal state. In this paper, we reconsider...

Refined non-homogeneous Markovian models for a single-server type of software system with rejuvenation

Hiroyuki Okamura, S. Miyahara, T. Dohi (2010)

RAIRO - Operations Research

Long-running software systems are known to experience a phenomenon called software aging, in which the accumulation of errors during execution leads to performance degradation and eventually results in failure. To counteract this phenomenon, a proactive fault-management approach called software rejuvenation is particularly useful. It essentially involves gracefully terminating an application or a system and restarting it in a clean internal state. In this paper, we...

Risk probability optimization problem for finite horizon continuous time Markov decision processes with loss rate

Haifeng Huo, Xian Wen (2021)

Kybernetika

This paper presents a study of risk probability optimality for finite-horizon continuous-time Markov decision processes with a loss rate and unbounded transition rates. Under a drift condition, which is slightly weaker than the regularity condition detailed in the existing literature on risk probability optimality for semi-Markov decision processes, we prove that the value function is the unique solution of the corresponding optimality equation, and demonstrate the existence of a risk probability optimization...

Risk-sensitive average optimality in Markov decision processes

Karel Sladký (2018)

Kybernetika

In this note attention is focused on finding policies that optimize risk-sensitive optimality criteria in Markov decision chains. To this end we assume that the total reward generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitive coefficient. The ratio of the first two moments depends on the value of the risk-sensitive coefficient; if the risk-sensitive coefficient is equal to zero we speak of risk-neutral models. Observe that the first moment of...
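
In standard notation (not spelled out in the abstract, and possibly normalized differently in the paper itself), the exponential-utility evaluation of a total reward X with risk-sensitive coefficient γ is usually summarized by its certainty equivalent, whose small-γ expansion shows how the first two moments enter:

```latex
% Certainty equivalent of total reward X under exponential utility,
% with risk-sensitive coefficient \gamma \neq 0:
C_\gamma(X) = \frac{1}{\gamma} \log \mathbb{E}\!\left[ e^{\gamma X} \right]
% Taylor expansion for small \gamma:
C_\gamma(X) \approx \mathbb{E}[X] + \frac{\gamma}{2}\,\operatorname{Var}(X)
% As \gamma \to 0, C_\gamma(X) \to \mathbb{E}[X]: the risk-neutral case.
```

The expansion makes the abstract's remark concrete: the weight of the second moment relative to the first is governed by γ, and γ = 0 recovers the risk-neutral expected-reward criterion.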

Sample path average optimality of Markov control processes with strictly unbounded cost

Oscar Vega-Amaya (1999)

Applicationes Mathematicae

We study the existence of sample path average cost (SPAC-) optimal policies for Markov control processes on Borel spaces with strictly unbounded costs, i.e., costs that grow without bound on the complement of compact subsets. Assuming only that the cost function is lower semicontinuous and that the transition law is weakly continuous, we show the existence of a relaxed policy with 'minimal' expected average cost and that the optimal average cost is the limit of discounted programs. Moreover, we...
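
The sample-path average cost criterion referenced in this abstract is conventionally defined pathwise; the following is generic notation assumed for illustration, not quoted from the paper:

```latex
% Sample path average cost of policy \pi from initial state x:
J(\pi, x) = \limsup_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} c(x_t, a_t)
% evaluated along the trajectory (x_t, a_t) generated by \pi with x_0 = x;
% a SPAC-optimal policy attains the minimal value of J almost surely.
```

Unlike the expected average cost, this criterion constrains the long-run cost on almost every realized trajectory, which is why stability-type assumptions on the controlled chain are needed.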

Sample-path average cost optimality for semi-Markov control processes on Borel spaces: unbounded costs and mean holding times

Oscar Vega-Amaya, Fernando Luque-Vásquez (2000)

Applicationes Mathematicae

We deal with semi-Markov control processes (SMCPs) on Borel spaces with unbounded cost and mean holding time. Under suitable growth conditions on the cost function and the mean holding time, together with stability properties of the embedded Markov chains, we show the equivalence of several average cost criteria as well as the existence of stationary optimal policies with respect to each of these criteria.

Second order optimality in Markov decision chains

Karel Sladký (2017)

Kybernetika

The article is devoted to Markov reward chains in a discrete-time setting with finite state spaces. Unfortunately, the usual optimization criteria examined in the literature on Markov decision chains, such as total discounted reward, total reward up to reaching some specific state (the so-called first passage models), or mean (average) reward optimality, may be quite insufficient to characterize the problem from the point of view of a decision maker. To this end it seems that it may be preferable if not necessary...

Semi-Markov control models with average costs

Fernando Luque-Vásquez, Onésimo Hernández-Lerma (1999)

Applicationes Mathematicae

This paper studies semi-Markov control models with Borel state and control spaces, and unbounded cost functions, under the average cost criterion. Conditions are given for (i) the existence of a solution to the average cost optimality equation, and for (ii) the existence of strong optimal control policies. These conditions are illustrated with a semi-Markov replacement model.
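
For reference, the average cost optimality equation mentioned in the abstract takes, in the semi-Markov setting, a commonly used form involving the mean holding time; the notation below is generic rather than taken from the paper:

```latex
% Semi-Markov average cost optimality equation:
h(x) = \min_{a \in A(x)} \left\{ c(x,a) - \rho\,\tau(x,a)
       + \int_X h(y)\, Q(dy \mid x, a) \right\}
% \rho: optimal average cost;  h: relative value function;
% c(x,a): one-step cost;  \tau(x,a): mean holding time in state x
% under action a;  Q: transition law of the embedded Markov chain.
```

A solution pair (ρ, h), together with a selector attaining the minimum, yields a stationary optimal policy, which is the shape of result the abstract describes.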

Stationary optimal policies in a class of multichain positive dynamic programs with finite state space and risk-sensitive criterion

Rolando Cavazos-Cadena, Raul Montes-de-Oca (2001)

Applicationes Mathematicae

This work concerns Markov decision processes with finite state space and compact action sets. The decision maker is supposed to have a constant-risk sensitivity coefficient, and a control policy is graded via the risk-sensitive expected total-reward criterion associated with nonnegative one-step rewards. Assuming that the optimal value function is finite, under mild continuity and compactness restrictions the following result is established: If the number of ergodic classes when a stationary policy...

Stochastic dynamic programming with random disturbances

Regina Hildenbrandt (2003)

Discussiones Mathematicae Probability and Statistics

Several peculiarities of stochastic dynamic programming problems in which random vectors are observed before the decision is made at each stage are discussed in the first part of this paper. Surrogate problems are given for such problems with distance properties (for instance, transportation problems) in the second part.

Strong average optimality criterion for continuous-time Markov decision processes

Qingda Wei, Xian Chen (2014)

Kybernetika

This paper deals with continuous-time Markov decision processes with the unbounded transition rates under the strong average cost criterion. The state and action spaces are Borel spaces, and the costs are allowed to be unbounded from above and from below. Under mild conditions, we first prove that the finite-horizon optimal value function is a solution to the optimality equation for the case of uncountable state spaces and unbounded transition rates, and that there exists an optimal deterministic...
