Displaying similar documents to “Knowledge revision in Markov networks.”

A method for knowledge integration

Martin Janžura, Pavel Boček (1998)

Kybernetika

Similarity:

With the aid of Markov chain Monte Carlo methods we can sample even from complex multi-dimensional distributions that cannot be calculated exactly. Thus, an application to the problem of knowledge integration (e.g. in expert systems) is straightforward.
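As a rough illustration of the kind of MCMC sampling the abstract refers to, here is a minimal random-walk Metropolis sketch in Python; the target density and the step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_samples=10_000, step=0.5, rng=None):
    """Sample from an unnormalized density via random-walk Metropolis.

    log_target: function returning the log of the (unnormalized) target density.
    x0: starting point (1-D numpy array).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_samples, x.size))
    log_p = log_target(x)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(x.size)
        log_p_prop = log_target(proposal)
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.random()) < log_p_prop - log_p:
            x, log_p = proposal, log_p_prop
        samples[i] = x
    return samples

# Illustrative target: a correlated 2-D Gaussian standing in for a
# distribution that cannot be handled analytically.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
samples = random_walk_metropolis(lambda x: -0.5 * x @ cov_inv @ x, x0=np.zeros(2))
```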

Comparison of Sojourn Time Distributions in Modeling HIV/AIDS Disease Progression

Tilahun Ferede Asena, Ayele Taye Goshu (2017)

Biometrical Letters

Similarity:

Semi-Markov models were applied to AIDS disease progression to find the best-fitting sojourn time distributions. We obtained data on 370 HIV/AIDS patients who were under follow-up from September 2008 to August 2015 at Yirgalim General Hospital, Ethiopia. The study reveals that within the “good” states, the transition probability of moving from a given state to the next worst state has a parabolic pattern that increases with time until it reaches a maximum and then declines over...
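The comparison of candidate sojourn-time distributions described above is commonly done by fitting several parametric families to the observed sojourn times and ranking them by a criterion such as AIC. A minimal sketch along those lines, with synthetic sojourn times standing in for the patient data (which is not available here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic sojourn times (months); placeholder for the actual follow-up data.
sojourn_times = rng.weibull(1.5, size=200) * 12.0

candidates = {
    "exponential": stats.expon,
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
}

for name, dist in candidates.items():
    params = dist.fit(sojourn_times, floc=0)        # fix location at 0 for lifetime data
    log_lik = np.sum(dist.logpdf(sojourn_times, *params))
    aic = 2 * (len(params) - 1) - 2 * log_lik       # loc is fixed, so one fewer free parameter
    print(f"{name:12s} AIC = {aic:8.1f}")
```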

Central limit theorem for hitting times of functionals of Markov jump processes

Christian Paroissin, Bernard Ycart (2004)

ESAIM: Probability and Statistics

Similarity:

Given a sample of i.i.d. continuous time Markov chains, the sum over all components of a real function of the state is considered. For this functional, a central limit theorem for the first hitting time of a prescribed level is proved. The result extends the classical central limit theorem for order statistics. Various reliability models are presented as examples of applications.
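A small simulation conveys the flavour of such a result: run n i.i.d. copies of a continuous-time Markov chain, track the sum of a real function of their states, and record the first time that sum reaches a prescribed level; for large n the hitting times, suitably centred and scaled, should look approximately Gaussian. The sketch below uses a two-state chain and an arbitrary level, both illustrative assumptions rather than the paper's setting.

```python
import numpy as np

def hitting_time(n, lam=1.0, mu=1.0, level_frac=0.4, rng=None):
    """First time the number of chains in state 1 reaches level_frac * n.

    Each of the n i.i.d. chains jumps 0 -> 1 at rate lam and 1 -> 0 at rate mu,
    so the count of chains in state 1 is a birth-death process, simulated
    here with Gillespie steps.
    """
    rng = np.random.default_rng() if rng is None else rng
    level = int(np.ceil(level_frac * n))
    k, t = 0, 0.0                                   # all chains start in state 0
    while k < level:
        birth, death = (n - k) * lam, k * mu
        t += rng.exponential(1.0 / (birth + death))
        k += 1 if rng.random() < birth / (birth + death) else -1
    return t

n = 500
times = np.array([hitting_time(n) for _ in range(300)])
# For large n the standardized hitting times are approximately Gaussian.
print(times.mean(), times.std())
```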

Fast simulation for road traffic network

Roberta Jungblut-Hessel, Brigitte Plateau, William J. Stewart, Bernard Ycart (2001)

RAIRO - Operations Research - Recherche Opérationnelle

Similarity:

In this paper we present a method to perform fast simulation of large Markovian systems. This method is based on the use of three concepts: Markov chain uniformization, event-driven dynamics, and modularity. An application to urban traffic simulation is presented to illustrate the performance of our approach.
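Uniformization, one of the three ingredients named above, replaces a continuous-time chain with generator Q by a discrete-time chain P = I + Q/Λ (for any Λ ≥ max_i |q_ii|) observed at the jump times of a Poisson(Λt) process. A minimal sketch of that idea, with a toy generator as a placeholder rather than a road-traffic model:

```python
import numpy as np
from scipy.stats import poisson

def transient_distribution(Q, p0, t):
    """p(t) = p0 exp(Qt), computed by uniformization instead of a matrix exponential."""
    Q = np.asarray(Q, dtype=float)
    Lam = max(-np.diag(Q))                          # uniformization rate, >= max_i |q_ii|
    P = np.eye(Q.shape[0]) + Q / Lam                # embedded discrete-time chain
    K = int(Lam * t + 10 * np.sqrt(Lam * t) + 10)   # truncation of the Poisson series
    weights = poisson.pmf(np.arange(K + 1), Lam * t)
    term = np.asarray(p0, dtype=float)
    result = np.zeros_like(term)
    for w in weights:                               # sum_k P(N(t) = k) * p0 P^k
        result += w * term
        term = term @ P
    return result

# Toy 3-state generator; rows sum to 0.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])
print(transient_distribution(Q, p0=[1.0, 0.0, 0.0], t=2.0))
```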

On conditional independence and log-convexity

František Matúš (2012)

Annales de l'I.H.P. Probabilités et statistiques

Similarity:

If conditional independence constraints define a family of positive distributions that is log-convex, then this family turns out to be a Markov model over an undirected graph. This is proved for the distributions on products of finite sets and for the regular Gaussian ones. As a consequence, the assertion known as the Brook factorization theorem, the Hammersley–Clifford theorem or the Gibbs–Markov equivalence is obtained.
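For reference, the Gibbs–Markov equivalence mentioned at the end states, in its usual form for positive distributions on a finite product space, that a distribution is Markov with respect to an undirected graph G exactly when it factorizes over the cliques of G:

```latex
p(x) \;=\; \frac{1}{Z} \prod_{C \in \mathcal{C}(G)} \psi_C(x_C),
\qquad
Z \;=\; \sum_{x} \prod_{C \in \mathcal{C}(G)} \psi_C(x_C),
```

where \mathcal{C}(G) is the set of cliques of G and the \psi_C are strictly positive clique potentials.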

Qualitative reasoning in Bayesian networks.

Paolo Garbolino (1996)

Mathware and Soft Computing

Similarity:

Some probabilistic inference rules that can be compared with the inference rules of preferential logic are given, and it is shown how they work in graphical models, allowing qualitative plausible reasoning in Bayesian networks.

Hit and run as a unifying device

Hans C. Andersen, Persi Diaconis (2007)

Journal de la société française de statistique

Similarity:

We present a generalization of hit and run algorithms for Markov chain Monte Carlo problems that is ‘equivalent’ to data augmentation and auxiliary variables. These algorithms contain the Gibbs sampler and Swendsen-Wang block spin dynamics as special cases. The unification allows theorems, examples, and heuristics developed in one domain to illuminate parallel domains.
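A minimal sketch of the basic hit-and-run move, here for the uniform distribution on a Euclidean ball; the target set is an illustrative assumption, the paper's framework being far more general:

```python
import numpy as np

def hit_and_run_ball(x0, radius=1.0, n_steps=1000, rng=None):
    """Hit-and-run sampling of the uniform distribution on a ball of given radius.

    Each step picks a uniformly random direction, intersects the line through the
    current point with the ball, and moves to a uniform point on that chord.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for i in range(n_steps):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                     # random unit direction
        # Solve |x + t d|^2 = radius^2 for the chord endpoints -b +/- sqrt(b^2 - c).
        b = x @ d
        c = x @ x - radius**2
        disc = np.sqrt(b**2 - c)
        t = rng.uniform(-b - disc, -b + disc)      # uniform point on the chord
        x = x + t * d
        samples[i] = x
    return samples

samples = hit_and_run_ball(np.zeros(3))
```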