The Shannon information on a Markov chain approximately normally distributed
Richard O'Neil (1990)
Colloquium Mathematicae
Similarity:
Rajmund Drenyovszki, Lóránt Kovács, Kálmán Tornai, András Oláh, István Pintér (2017)
Kybernetika
Similarity:
In our paper we investigate the applicability of independent and identically distributed random sequences, first-order Markov chains, higher-order Markov chains, and semi-Markov processes for bottom-up electricity load modeling. We use appliance time series from publicly available data sets containing fine-grained power measurements. The comparison of the models is based on metrics considered important in power systems, such as Load Factor and Loss of Load Probability. Furthermore,...
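The bottom-up idea above can be illustrated with a minimal sketch: a first-order on/off Markov chain for a single appliance, from which a Load Factor is computed. The transition probabilities, power rating, and function names here are illustrative assumptions, not values from the paper or its data sets.

```python
import random

def simulate_appliance(p_on, p_off, power_w, steps, seed=0):
    """Simulate an on/off appliance as a first-order Markov chain.

    p_on  = P(off -> on) per time step, p_off = P(on -> off) per time step.
    Returns the power draw (in watts) at each step.
    """
    rng = random.Random(seed)
    state, series = 0, []
    for _ in range(steps):
        if state == 0 and rng.random() < p_on:
            state = 1
        elif state == 1 and rng.random() < p_off:
            state = 0
        series.append(state * power_w)
    return series

def load_factor(series):
    """Load Factor = average load / peak load (0 if the appliance never ran)."""
    peak = max(series)
    return (sum(series) / len(series)) / peak if peak > 0 else 0.0

series = simulate_appliance(p_on=0.1, p_off=0.3, power_w=2000, steps=10_000)
print(f"Load Factor: {load_factor(series):.3f}")
```

For these parameters the stationary probability of being on is p_on / (p_on + p_off) = 0.25, so the simulated Load Factor should be close to that value.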
Brian Marcus, Selim Tuncel (1990)
Inventiones mathematicae
Similarity:
Tim Austin (2015)
Studia Mathematica
Similarity:
A number of recent works have sought to generalize the Kolmogorov-Sinai entropy of probability-preserving transformations to the setting of Markov operators acting on the integrable functions on a probability space (X,μ). These works have culminated in a proof by Downarowicz and Frej that various competing definitions all coincide, and that the resulting quantity is uniquely characterized by certain abstract properties. On the other hand, Makarov has shown that this...
Giovanni Masala, Giuseppina Cannas, Marco Micocci (2014)
Biometrical Letters
Similarity:
In this paper we apply a parametric semi-Markov process to model the dynamic evolution of HIV-1 infected patients. The severity of the infection is represented by the CD4+ T-lymphocyte counts. For this purpose we introduce the main features of nonhomogeneous semi-Markov models. After determining the transition probabilities and the waiting time distributions in each state of the disease, we solve the evolution equations of the process in order to estimate the interval transition probabilities....
Guglielmo D'Amico (2014)
Applications of Mathematics
Similarity:
Markov chain usage models have been successfully used to model systems and software. The most prominent approaches are the so-called failure state models of Whittaker and Thomason (1994) and the arc-based Bayesian models of Sayre and Poore (2000). In this paper we propose arc-based semi-Markov usage models to test systems. We extend previous studies that rely on the Markov chain assumption to the more general semi-Markovian setting. Among the obtained results we give a closed form representation...
D'Amico, Guglielmo, Janssen, Jacques, Manca, Raimondo (2009)
Journal of Applied Mathematics and Decision Sciences
Similarity:
Monica E. Dumitrescu (1988)
Časopis pro pěstování matematiky
Similarity:
Mike Boyle, Jérôme Buzzi, Ricardo Gómez (2014)
Colloquium Mathematicae
Similarity:
We show that strongly positively recurrent Markov shifts (including shifts of finite type) are classified up to Borel conjugacy by their entropy, period and their numbers of periodic points.
Antoni Donigiewicz (2004)
Control and Cybernetics
Similarity:
Brahim Ouhbi, Ali Boudi, Mohamed Tkiouat (2007)
RAIRO - Operations Research
Similarity:
In this paper we first present a recursive formula for the empirical estimator of the semi-Markov kernel. Then a non-parametric estimator of the expected cumulative operational time for semi-Markov systems is proposed. The asymptotic properties of this estimator, such as uniform strong consistency and asymptotic normality, are given. As an illustrative example, we give a numerical application.
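The standard empirical estimator of a semi-Markov kernel takes the form Q̂_ij(t) = N_ij(t)/N_i: the fraction of observed departures from state i that go to j with sojourn time at most t. The paper's specific recursive formula is not reproduced here; the following sketch (class and state names are my own) only shows the textbook estimator, updated one observed transition at a time in the same incremental spirit.

```python
from collections import defaultdict

class EmpiricalKernel:
    """Empirical semi-Markov kernel estimator Q_ij(t) = N_ij(t) / N_i."""

    def __init__(self):
        self.sojourns = defaultdict(list)  # (i, j) -> observed sojourn times
        self.n_out = defaultdict(int)      # i -> number of observed departures

    def update(self, i, j, sojourn):
        """Incorporate one observed transition i -> j after `sojourn` time units."""
        self.sojourns[(i, j)].append(sojourn)
        self.n_out[i] += 1

    def Q(self, i, j, t):
        """Estimate Q_ij(t) = P(next state is j and sojourn <= t | current state i)."""
        if self.n_out[i] == 0:
            return 0.0
        return sum(s <= t for s in self.sojourns[(i, j)]) / self.n_out[i]

ker = EmpiricalKernel()
observations = [("working", "failed", 2.0),
                ("working", "failed", 5.0),
                ("working", "standby", 1.0)]
for i, j, s in observations:
    ker.update(i, j, s)

# One of three departures from "working" went to "failed" with sojourn <= 3.
print(ker.Q("working", "failed", 3.0))  # 1/3
```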
Laurent Mazliak (2007)
Revue d'histoire des mathématiques
Similarity:
We present the letters sent by Wolfgang Doeblin to Bohuslav Hostinský between 1936 and 1938. They concern some aspects of the general theory of Markov chains and the solutions of the Chapman-Kolmogorov equation that Doeblin was then establishing for his PhD thesis.
Karel Sladký (2017)
Kybernetika
Similarity:
The article is devoted to Markov reward chains in a discrete-time setting with finite state spaces. Unfortunately, the usual optimization criteria examined in the literature on Markov decision chains, such as total discounted reward, total reward up to reaching some specific state (the so-called first passage models), or mean (average) reward optimality, may be quite insufficient to characterize the problem from the point of view of a decision maker. To this end it seems that it may be preferable if not...