Similar documents to “A method for knowledge integration”

Conditional problem for objective probability

Otakar Kříž (1998)

Kybernetika

Similarity:

The marginal problem (see [Kel]) consists in finding a joint distribution whose marginals are equal to given lower-dimensional distributions. The problem is generalized here so that not only lower-dimensional distributions but also conditional probabilities are prescribed. It is necessary to distinguish between the objective (Kolmogorov) approach to probability and the subjective (de Finetti) one ([Col,Sco]). In the latter, the coherence problem incorporates both probabilities and conditional probabilities...
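As a minimal sketch of the setting described in this abstract (the notation N, Kᵢ, Aⱼ, Bⱼ and the marginal operator are illustrative choices, not taken from the paper), the generalized problem can be written as follows:

```latex
% Generalized marginal problem (illustrative notation, not the paper's):
% given marginals P_i over index sets K_i \subseteq N and conditionals
% Q_j over pairs of index sets (A_j, B_j), find a joint distribution P
% on X_N such that
\[
  P^{\downarrow K_i} = P_i \quad (i = 1,\dots,m),
  \qquad
  P\bigl(x_{A_j} \mid x_{B_j}\bigr) = Q_j\bigl(x_{A_j} \mid x_{B_j}\bigr)
  \quad (j = 1,\dots,r),
\]
% the conditional constraints being required wherever
% P^{\downarrow B_j}(x_{B_j}) > 0.
```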

Hit and run as a unifying device

Hans C. Andersen, Persi Diaconis (2007)

Journal de la société française de statistique

Similarity:

We present a generalization of hit and run algorithms for Markov chain Monte Carlo problems that is ‘equivalent’ to data augmentation and auxiliary variables. These algorithms contain the Gibbs sampler and Swendsen-Wang block spin dynamics as special cases. The unification allows theorems, examples, and heuristics developed in one domain to illuminate parallel domains.
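For concreteness, here is a minimal sketch of the basic hit-and-run step: uniform sampling from a convex body given only a membership oracle. The oracle interface, step count, and bisection depth are illustrative assumptions, not the paper's general formulation.

```python
# Minimal hit-and-run sketch (uniform target on a convex body given a
# membership oracle); not the general framework discussed in the paper.
import numpy as np

def hit_and_run(inside, x0, n_steps, rng=None):
    """inside(x) -> bool membership oracle; x0 must lie in the body."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        # 1. Pick a uniformly random direction on the unit sphere.
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        # 2. Find the chord {x + t*d} inside the body by expansion and
        #    bisection (assumes the body is bounded and convex).
        def extent(sign):
            lo, hi = 0.0, 1.0
            while inside(x + sign * hi * d):   # expand until outside
                hi *= 2.0
            for _ in range(40):                # bisect to the boundary
                mid = 0.5 * (lo + hi)
                if inside(x + sign * mid * d):
                    lo = mid
                else:
                    hi = mid
            return lo
        t_minus, t_plus = -extent(-1.0), extent(+1.0)
        # 3. Resample x uniformly along the chord.
        x = x + rng.uniform(t_minus, t_plus) * d
        samples.append(x.copy())
    return np.array(samples)

# Example: uniform points in the unit ball in R^3.
if __name__ == "__main__":
    ball = lambda x: np.dot(x, x) <= 1.0
    pts = hit_and_run(ball, x0=np.zeros(3), n_steps=1000, rng=0)
    print(pts.mean(axis=0))   # should be close to the origin
```

For a non-uniform target, step 3 would instead sample from the target density restricted to the chord, which is where the connections to Gibbs sampling and auxiliary-variable methods enter.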

Knowledge revision in Markov networks.

Jörg Gebhardt, Christian Borgelt, Rudolf Kruse, Heinz Detmer (2004)

Mathware and Soft Computing

Similarity:

Much research on graphical models has been devoted to developing correct and efficient evidence propagation methods, such as join tree propagation or bucket elimination. With these methods it is possible to condition the represented probability distribution on given evidence, a reasoning process that is sometimes also called focusing. In practice, however, there is an additional need to revise the represented probability distribution in order to reflect knowledge changes by satisfying...
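The following toy sketch contrasts focusing (conditioning on evidence) with revision (imposing a new marginal) on a two-variable discrete distribution; the example and the single IPF-style revision step are illustrative assumptions, not the revision operator developed in the paper.

```python
# Focusing vs. revision on a tiny discrete joint distribution P(A, B);
# illustrative example only, not the paper's method.
import numpy as np

# Joint distribution P(A, B) with A in {0, 1} and B in {0, 1, 2}.
P = np.array([[0.10, 0.20, 0.10],
              [0.15, 0.30, 0.15]])

# Focusing: condition on the evidence A = 1.
evidence = P[1, :]
P_focused = evidence / evidence.sum()          # P(B | A = 1)

# Revision: impose a new marginal P'(A) = (0.3, 0.7) while keeping the
# conditionals P(B | A) unchanged (one IPF-style adjustment step).
new_marginal_A = np.array([0.3, 0.7])
P_revised = P / P.sum(axis=1, keepdims=True) * new_marginal_A[:, None]

print(P_focused)           # a distribution over B
print(P_revised.sum())     # the revised joint still sums to 1
```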

Improving predictive distributions.

Morris H. DeGroot (1980)

Trabajos de Estadística e Investigación Operativa

Similarity:

Consider a sequence of decision problems S₁, S₂, ... and suppose that in problem Sᵢ the statistician must specify his predictive distribution Fᵢ for some random variable Xᵢ and make a decision based on that distribution. For example, Xᵢ might be the return on some particular investment and the statistician must decide whether or not to make that investment. The random variables X₁, X₂, ... are assumed to be independent and completely unrelated. It is also assumed that each predictive distribution...
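As a toy illustration of the investment example above, the sketch below decides to invest when the mean of the predictive distribution for the return exceeds a threshold; the normal predictive distribution, its parameters, and the zero threshold are assumptions made for the sketch, not taken from the paper.

```python
# Toy decision rule based on a predictive distribution for a return X_i;
# the N(0.02, 0.10^2) predictive and the zero threshold are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def decide(predictive_samples, threshold=0.0):
    """Return True (invest) if the predictive mean exceeds the threshold."""
    return predictive_samples.mean() > threshold

# Problem S_i: the predictive distribution F_i for the return X_i,
# represented here by Monte Carlo samples.
samples = rng.normal(loc=0.02, scale=0.10, size=10_000)
print("invest" if decide(samples) else "do not invest")
```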