On the bilateral truncated exponential distribution.
The asymptotic Rényi distances are explicitly defined and rigorously studied for a convenient class of Gibbs random fields, which are introduced as a natural infinite-dimensional generalization of exponential distributions.
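For reference, and not quoted from the abstract itself, the Rényi divergence (distance) of order α between probability measures P and Q is standardly defined as
\[
D_\alpha(P\|Q) \;=\; \frac{1}{\alpha-1}\,\log \int \Bigl(\tfrac{dP}{d\mu}\Bigr)^{\!\alpha}\Bigl(\tfrac{dQ}{d\mu}\Bigr)^{\!1-\alpha} d\mu ,
\qquad \alpha \in (0,1)\cup(1,\infty),
\]
where μ is any σ-finite measure dominating P and Q; the limit α → 1 recovers the Kullback-Leibler divergence. The asymptotic quantities in the abstract are, presumably, suitably normalized limits of such divergences over increasing volumes of the random field (an assumption, since the abstract does not specify the normalization).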
The Fisher information metric is, in a certain sense, unique in the classical case: it is the only Markovian monotone distance. A family of Riemannian metrics is called monotone if its members are decreasing under stochastic mappings. These metrics play the role of the Fisher metric in the quantum case. Monotone metrics can be labeled by special operator monotone functions, according to Petz's Classification Theorem. The aim of this paper is to present an idea of how one can narrow the set of monotone...
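A sketch of the standard statement of Petz's Classification Theorem, included here only as background (the abstract does not reproduce it): monotone metrics on the quantum state space correspond to operator monotone functions f : (0,∞) → (0,∞) with f(1) = 1, via
\[
K_\rho(A,B) \;=\; \operatorname{Tr}\bigl(A^{*}\, c(L_\rho,R_\rho)\,B\bigr),
\qquad
c(x,y) \;=\; \frac{1}{y\,f(x/y)} ,
\]
where L_ρ X = ρX and R_ρ X = Xρ denote left and right multiplication by the density matrix ρ, and the symmetric case additionally requires f(t) = t f(1/t). Monotonicity means K_{T(ρ)}(T(A),T(A)) ≤ K_ρ(A,A) for every completely positive trace-preserving map T.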
We establish a decomposition of the Jensen-Shannon divergence into a linear combination of a scaled Jeffreys' divergence and a reversed Jensen-Shannon divergence. Upper and lower bounds for the Jensen-Shannon divergence are then found in terms of the squared (total) variation distance. The derivations rely upon the Pinsker inequality and the reverse Pinsker inequality. We use these bounds to prove the asymptotic equivalence of the maximum likelihood estimate and minimum Jensen-Shannon divergence...
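The quantities involved, in their standard forms (a reference sketch; the exact coefficients of the paper's decomposition are not reproduced here):
\[
\mathrm{JS}(P,Q) \;=\; \tfrac12\, D\!\Bigl(P \,\Big\|\, \tfrac{P+Q}{2}\Bigr) + \tfrac12\, D\!\Bigl(Q \,\Big\|\, \tfrac{P+Q}{2}\Bigr),
\qquad
J(P,Q) \;=\; D(P\|Q) + D(Q\|P),
\]
with D the Kullback-Leibler divergence and J Jeffreys' divergence. The Pinsker inequality underlying the bounds reads, in nats,
\[
D(P\|Q) \;\ge\; \tfrac12\,\|P-Q\|_1^2 ,
\]
where ‖P−Q‖₁ is (twice) the total variation distance; reverse Pinsker inequalities provide bounds in the opposite direction under additional assumptions on Q.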
In this paper the mean and the variance of the Maximum Likelihood Estimator (MLE) of the Kullback information measure and of the measure of relative "useful" information are obtained.
We study the optimal quantization problem for probabilities under a constrained Rényi entropy of the quantizers. We determine the optimal quantizers and the optimal quantization error for one-dimensional uniform distributions, including the known special cases of restricted codebook size and restricted Shannon entropy.
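For context (standard definitions, not quoted from the abstract): the Rényi entropy of order α of a discrete distribution p = (p₁, …, p_n) is
\[
H_\alpha(p) \;=\; \frac{1}{1-\alpha}\,\log \sum_i p_i^{\alpha},
\]
with the limiting cases H₀(p) = log |{i : p_i > 0}| (so a constraint on H₀ is a constraint on the codebook size) and H₁(p) = −Σ_i p_i log p_i (the Shannon entropy); these are presumably the two known special cases referred to in the abstract.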
The information divergence of a probability measure $P$ from an exponential family $\mathcal{E}$ over a finite set is defined as the infimum of the divergences of $P$ from $Q$ subject to $Q \in \mathcal{E}$. All directional derivatives of the divergence from $\mathcal{E}$ are explicitly found. To this end, the behaviour of the conjugate of a log-Laplace transform on the boundary of its domain is analysed. The first-order conditions for $P$ to be a maximizer of the divergence from $\mathcal{E}$ are presented, including new ones for the case where $P$ is not projectable to $\mathcal{E}$.
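In symbols (a reconstruction of the standard setup, with notation chosen here rather than taken from the paper):
\[
D(P\|\mathcal{E}) \;=\; \inf_{Q\in\mathcal{E}} D(P\|Q),
\qquad
D(P\|Q) \;=\; \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)} ,
\]
where $\mathcal{E} = \{\,Q_\theta : Q_\theta(x) \propto \nu(x)\, e^{\langle \theta, \varphi(x)\rangle}\,\}$ is an exponential family with reference measure ν and sufficient statistic φ; the log-Laplace transform referred to in the abstract is $\Lambda(\theta) = \log \sum_x \nu(x)\, e^{\langle\theta,\varphi(x)\rangle}$, with conjugate $\Lambda^{*}(\mu) = \sup_\theta \bigl(\langle\theta,\mu\rangle - \Lambda(\theta)\bigr)$.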
This article studies exponential families $\mathcal{E}$ on finite sets such that the information divergence of an arbitrary probability distribution from $\mathcal{E}$ is bounded by some constant $D$. A particular class of low-dimensional exponential families that have low values of $D$ can be obtained from partitions of the state space. The main results concern optimality properties of these partition exponential families. The case where $D$ equals a certain critical value is studied in detail. This case is special, because below this value $\mathcal{E}$ already contains all probability...
K. M. Wong and S. Chen [9] analyzed the Shannon entropy of a sequence of random variables under order restrictions. Using the generalized entropies introduced by I. J. Taneja [8], these results are extended. Upper and lower bounds on the entropy reduction when the sequence is ordered, and conditions under which they are achieved, are derived. Theorems are presented showing the difference between the average entropy of the individual order statistics and the entropy of a member of the original independent identically distributed...
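One baseline identity of the kind involved, stated here for continuous i.i.d. variables (an assumption, since the abstract does not fix the setting): ordering n i.i.d. observations is an n!-to-one map, so the joint differential entropy of the order statistics satisfies
\[
h\bigl(X_{(1)},\dots,X_{(n)}\bigr) \;=\; h(X_1,\dots,X_n) \;-\; \log n! \;=\; n\,h(X_1) \;-\; \log n! ,
\]
i.e. the entropy reduction due to ordering equals log n! in this case; presumably the bounds in the abstract quantify the analogous reduction for the generalized entropies.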
In Pardo (1984), a Sequential Sampling Plan based on Informational Energy (P.M.S.E.I.) was proposed, analogous to the plan proposed by Lindley (1956, 1957) on the basis of Shannon's entropy, with the aim of gathering information about an unknown parameter θ. In this communication the P.M.S.E.I. is applied to the concrete case of gathering information about the parameter θ of an exponential distribution, and the concept of the P.M.S.E.I. is extended to the case in which the statistician is interested in gathering...
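For reference (the abstract does not reproduce the definition): the informational energy of a distribution, in the sense of Onicescu, is usually taken to be
\[
\mathcal{E}(p) \;=\; \sum_i p_i^{2} \quad\text{(discrete case)},
\qquad
\mathcal{E}(f) \;=\; \int f(x)^{2}\,dx \quad\text{(density } f\text{)},
\]
and the sequential sampling plan uses this quantity in place of Shannon entropy as the information criterion, in analogy with Lindley's plan.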
We consider the connection between Kullback information and optimal admissible tests in the Neyman-Pearson risk set, making use of the study of mathematical programming problems of infinite type. Results are obtained that characterize a subset of Bayes solutions as a consequence of knowledge of the information, as well as a measure of discrimination between hypotheses for the risk set.