On Bayesian inductive processes
The asymptotic Rényi distances are explicitly defined and rigorously studied for a convenient class of Gibbs random fields, which are introduced as a natural infinite-dimensional generalization of exponential distributions.
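For orientation, the Rényi divergence on a finite alphabet, whose per-site asymptotics such distances generalize, can be written as follows (a standard reference formula, not the paper's field-theoretic definition):

    D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,\log \sum_{x} P(x)^{\alpha}\, Q(x)^{1-\alpha},
    \qquad \alpha \in (0, 1) \cup (1, \infty),

with the Kullback-Leibler divergence recovered in the limit α → 1.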
The Fisher information metric is unique in a certain sense in the classical case: it is the only Markovian monotone distance. A family of Riemannian metrics is called monotone if its members decrease under stochastic mappings; these are the metrics that play the role of the Fisher metric in the quantum case. According to Petz's Classification Theorem, monotone metrics can be labelled by certain operator monotone functions. The aim of this paper is to present an idea of how one can narrow the set of monotone...
We establish a decomposition of the Jensen-Shannon divergence into a linear combination of a scaled Jeffreys' divergence and a reversed Jensen-Shannon divergence. Upper and lower bounds for the Jensen-Shannon divergence are then found in terms of the squared (total) variation distance. The derivations rely upon the Pinsker inequality and the reverse Pinsker inequality. We use these bounds to prove the asymptotic equivalence of the maximum likelihood estimate and minimum Jensen-Shannon divergence...
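As a numerical companion to these quantities, here is a minimal sketch for finite discrete distributions given as NumPy arrays (the function names are ours; the snippet only evaluates the divergences and the total variation distance, it does not reproduce the paper's bounds):

    import numpy as np

    def kl(p, q):
        # Kullback-Leibler divergence in nats; assumes q > 0 wherever p > 0
        m = p > 0
        return float(np.sum(p[m] * np.log(p[m] / q[m])))

    def jensen_shannon(p, q):
        # JSD(p, q) = (KL(p||m) + KL(q||m)) / 2 with m the midpoint distribution
        m = 0.5 * (p + q)
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    def jeffreys(p, q):
        # symmetrized (Jeffreys') divergence
        return kl(p, q) + kl(q, p)

    def total_variation(p, q):
        return 0.5 * float(np.sum(np.abs(p - q)))

    # toy comparison of the three quantities
    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.2, 0.3, 0.5])
    print(jensen_shannon(p, q), jeffreys(p, q), total_variation(p, q))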
In this paper, the mean and the variance of the Maximum Likelihood Estimator (MLE) of the Kullback information measure and of the measure of relative "useful" information are obtained.
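A minimal sketch of the plug-in (maximum likelihood) estimator whose mean and variance are studied, assuming i.i.d. samples from both distributions over a common finite alphabet (the function name and toy alphabet are illustrative, not the paper's notation):

    from collections import Counter
    import numpy as np

    def kl_plugin(x_samples, y_samples, alphabet):
        # Plug-in (MLE) estimate of the Kullback divergence D(P || Q):
        # empirical frequencies are substituted directly into the KL formula.
        px, qy = Counter(x_samples), Counter(y_samples)
        p = np.array([px[a] / len(x_samples) for a in alphabet], dtype=float)
        q = np.array([qy[a] / len(y_samples) for a in alphabet], dtype=float)
        mask = p > 0
        # assumes q > 0 wherever p > 0; otherwise the estimate is infinite
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    # toy usage with a hypothetical three-letter alphabet
    print(kl_plugin("aabac", "abbbc", alphabet="abc"))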
In a previous paper, conditions have been given to compute iterated expectations of fuzzy random variables, irrespective of the order of integration. In another previous paper, a generalized real-valued measure quantifying the absolute variation of a fuzzy random variable with respect to its expected value has been introduced and analyzed. In the present paper we combine the above conditions and generalized measure to state an extension of the basic Rao–Blackwell Theorem. An application of this...
Lehmann in [4] has generalised the notion of the unbiased estimator with respect to the assumed loss function. In [5] Singh considered admissible estimators of the function λ^(-r) of the unknown parameter λ of the gamma distribution with density f(x|λ, b) = λ^b e^(-λx) x^(b-1) / Γ(b), x > 0, where b is a known parameter, for the loss function L(λ̂^(-r), λ^(-r)) = (λ̂^(-r) - λ^(-r))^2 / λ^(-2r), where λ̂ denotes an estimate of λ. Goodman in [1], choosing three loss functions of different shape, found unbiased Lehmann estimators of the variance σ² of the normal distribution....
"A high quantile is a quantile of order q with q close to one." A precise constructive definition of high quantiles is given and optimal estimates are presented.
We study the optimal quantization problem for probabilities under constrained Rényi α-entropy of the quantizers. We determine the optimal quantizers and the optimal quantization error of one-dimensional uniform distributions, including the known special cases α = 0 (restricted codebook size) and α = 1 (restricted Shannon entropy).
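To make the limiting cases concrete, here is a small sketch of the Rényi entropy of a quantizer's cell probabilities, recovering the log codebook size at α = 0 and the Shannon entropy as α → 1 (an illustration only, not the paper's construction):

    import numpy as np

    def renyi_entropy(p, alpha):
        # Rényi entropy of order alpha (in nats) of a discrete distribution p
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        if alpha == 0.0:
            # order 0: log of the support size, i.e. log of the codebook size
            return float(np.log(p.size))
        if np.isclose(alpha, 1.0):
            # order 1 (limit): Shannon entropy
            return float(-np.sum(p * np.log(p)))
        return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

    # cell probabilities of a hypothetical 4-level quantizer
    cells = [0.4, 0.3, 0.2, 0.1]
    for a in (0.0, 0.5, 1.0, 2.0):
        print(a, renyi_entropy(cells, a))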
Some necessary and some sufficient conditions are established for the explicit construction and characterization of optimal solutions of multivariate transportation (coupling) problems. The proofs are based on ideas from duality theory and nonconvex optimization theory. Applications are given to multivariate optimal coupling problems w.r.t. minimal -type metrics, where fairly explicit and complete characterizations of optimal transportation plans (couplings) are obtained. The results are of interest...
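For a finite-support special case, the transportation (coupling) problem reduces to a linear program; the sketch below (scipy is assumed only for this toy example, and the names are illustrative) shows that formulation, while the paper's multivariate, duality-based characterizations go well beyond it:

    import numpy as np
    from scipy.optimize import linprog

    def optimal_coupling(p, q, cost):
        # Finite transportation problem: minimize sum_ij cost[i, j] * pi[i, j]
        # over couplings pi >= 0 with row sums p and column sums q.
        m, n = len(p), len(q)
        A_eq = np.zeros((m + n, m * n))
        for i in range(m):
            A_eq[i, i * n:(i + 1) * n] = 1.0   # i-th row marginal equals p[i]
        for j in range(n):
            A_eq[m + j, j::n] = 1.0            # j-th column marginal equals q[j]
        b_eq = np.concatenate([p, q])
        res = linprog(np.asarray(cost).reshape(-1), A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None))
        return res.x.reshape(m, n), res.fun

    # toy example: quadratic cost between two three-point distributions
    x = np.array([0.0, 1.0, 2.0])
    p = np.array([0.2, 0.5, 0.3])
    q = np.array([0.4, 0.4, 0.2])
    plan, value = optimal_coupling(p, q, (x[:, None] - x[None, :]) ** 2)
    print(plan, value)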
The information divergence of a probability measure from an exponential family over a finite set is defined as the infimum of the divergences of the measure from the members of the family. All directional derivatives of the divergence from the family are explicitly found. To this end, the behaviour of the conjugate of a log-Laplace transform on the boundary of its domain is analysed. The first-order conditions for a measure to be a maximizer of the divergence from the family are presented, including new ones for the case when the measure is not projectable to the family.
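In symbols (P, Q and ℰ are placeholders for the measure and the exponential family, which the extracted text leaves unnamed):

    D(P \,\|\, \mathcal{E}) \;:=\; \inf_{Q \in \mathcal{E}} D(P \,\|\, Q),
    \qquad
    D(P \,\|\, Q) \;=\; \sum_{x} P(x)\, \log \frac{P(x)}{Q(x)}.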