Displaying similar documents to “Why L₁ view and what is next?”

Bayesian estimation of mixtures with dynamic transitions and known component parameters

Ivan Nagy, Evgenia Suzdaleva, Miroslav Kárný (2011)

Kybernetika

Similarity:

Probabilistic mixtures provide flexible “universal” approximation of probability density functions. Their wide use is enabled by the availability of a range of efficient estimation algorithms. Among them, quasi-Bayesian estimation plays a prominent role, as it runs “naturally” in one-pass mode. This is important in on-line applications and/or with extensive databases. It even copes with the dynamic nature of the components forming the mixture. However, the quasi-Bayesian estimation relies on mixing...
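
To make the one-pass idea concrete, here is a minimal sketch of a quasi-Bayes update for the mixing weights of a mixture with known component parameters; the two-component Gaussian setting, the Dirichlet prior, and all names are illustrative assumptions, not taken from the paper. Each observation contributes fractional counts, weighted by its posterior component probabilities, to the Dirichlet statistics over the weights.

```python
import numpy as np
from scipy.stats import norm

# Illustrative setting (assumed, not from the paper): two Gaussian
# components with known means/stds; only the mixing weights are estimated.
means, stds = np.array([0.0, 4.0]), np.array([1.0, 1.0])
alpha = np.ones(2)  # Dirichlet statistics (prior counts) for the weights

# Synthetic stream: 30% from component 0, 70% from component 1.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 700)])
rng.shuffle(data)

for x in data:  # single pass over the data stream
    # Posterior probability that x came from each component,
    # using the current expected mixing weights alpha / alpha.sum().
    lik = norm.pdf(x, means, stds) * (alpha / alpha.sum())
    resp = lik / lik.sum()
    # Quasi-Bayes step: add fractional counts instead of a hard assignment.
    alpha += resp

print("estimated mixing weights:", alpha / alpha.sum())  # roughly [0.3, 0.7]
```

Because each observation is absorbed into the statistics and then discarded, the data can be streamed, which is the property the abstract emphasizes for on-line applications.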

Information-type divergence when the likelihood ratios are bounded

Andrew Rukhin (1997)

Applicationes Mathematicae

Similarity:

The so-called ϕ-divergence is an important characteristic describing the “dissimilarity” of two probability distributions. Many traditional measures of separation used in mathematical statistics and information theory, some of which are mentioned in the note, correspond to particular choices of this divergence. An upper bound on a ϕ-divergence between two probability distributions is derived when the likelihood ratio is bounded. The usefulness of this sharp bound is illustrated by several...
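
For orientation, the standard definition and a chord-type bound of the kind the abstract alludes to can be written as follows; this is a textbook formulation under the usual conventions, and the exact statement in the note may differ. For a convex function $\phi$ with $\phi(1)=0$ and distributions $P$, $Q$ with densities $p$, $q$,

$$D_\phi(P,Q)=\int q\,\phi\!\left(\frac{p}{q}\right)d\mu,\qquad \phi\ \text{convex},\ \phi(1)=0.$$

If the likelihood ratio is confined to an interval, $m\le p/q\le M$ with $m\le 1\le M$, convexity bounds $\phi$ on $[m,M]$ by its chord; integrating the chord against $q$ and using $\int p\,d\mu=\int q\,d\mu=1$ gives

$$D_\phi(P,Q)\le\frac{(M-1)\,\phi(m)+(1-m)\,\phi(M)}{M-m}.$$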

Some history of the hierarchical Bayesian methodology.

Irving John Good (1980)

Trabajos de Estadística e Investigación Operativa

Similarity:

A standard technique in subjective Bayesian methodology is for a subject (you) to make judgements of the probabilities that a physical probability lies in various intervals. In the Bayesian hierarchical technique you make probability judgements (of a higher type, order, level or stage) concerning the judgements of lower type. The paper will outline some of the history of this hierarchical technique, with emphasis on the contributions of I. J. Good, because I have read every word written...