Generalized Jensen difference based on entropy functions
Integral functionals based on convex normal integrands are minimized subject to finitely many moment constraints. The integrands are finite on the positive half-line and infinite on the negative half-line, and strictly convex but not necessarily differentiable. The minimization is viewed as a primal problem and studied together with a dual one in the framework of convex duality. The effective domain of the value function is described by a conic core, a modification of the earlier concept of convex core. Minimizers...
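In symbols (the notation below is assumed for illustration and is not taken from the paper), the primal problem has the form
\[ \min_{u} \int_X \gamma\bigl(x, u(x)\bigr)\, \mu(\mathrm{d}x) \quad \text{subject to} \quad \int_X \varphi_i(x)\, u(x)\, \mu(\mathrm{d}x) = a_i, \qquad i = 1, \dots, n, \]
where \gamma is the convex normal integrand and the pairs (\varphi_i, a_i) specify the finitely many moment constraints; the dual problem is then obtained by the standard Lagrangian construction of convex duality.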
This paper introduces an axiomatization of measures of uncertainty (prior information) conditioned by an experiment, generalizing the one given for uncertainty conditioned by an event. The concept of conditionally composable uncertainty measures allows, under certain conditions (composability of type M), a construction of such measures. Finally, several examples are analysed (the measures of Shannon, Rényi, etc.), verifying the equality of the constructions given...
This paper deals with the concept of the "size" or "extent" of information, in the sense of measuring the improvement of our knowledge after obtaining a message. Standard approaches are based on the probabilistic parameters of the considered information source. Here we deal with situations in which the unknown probabilities are estimated only subjectively or vaguely. For the considered fuzzy-quantity-valued probabilities we introduce and discuss information-theoretic concepts.
The structures of fuzzy information theory are centred on the concept of fuzzy entropy, in which the individual information of symbols is considered only implicitly. This paper aims to fill this gap and to study the concepts of fuzzy information. Special attention is paid to the typical fuzzy-set-theoretic paradigm of monotonicity of operations.
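For orientation (this classical definition is quoted here as background and is not claimed to be the one used in the paper), the fuzzy entropy of De Luca and Termini aggregates over all elements of a fuzzy set A at once,
\[ H(A) = -K \sum_{i} \bigl[ \mu_A(x_i)\,\ln \mu_A(x_i) + \bigl(1-\mu_A(x_i)\bigr)\,\ln \bigl(1-\mu_A(x_i)\bigr) \bigr], \qquad K > 0, \]
so the contribution of an individual symbol x_i appears only implicitly as one summand, which is precisely the gap addressed above.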
The so-called ϕ-divergence is an important characteristic describing the "dissimilarity" of two probability distributions. Many traditional measures of separation used in mathematical statistics and information theory, some of which are mentioned in the note, correspond to particular choices of this divergence. An upper bound on a ϕ-divergence between two probability distributions is derived when the likelihood ratio is bounded. The usefulness of this sharp bound is illustrated by several examples of...
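For reference, with ϕ convex and ϕ(1) = 0, the ϕ-divergence of P (density p) from Q (density q) is, in one common convention (normalizations vary across the literature),
\[ D_\phi(P \,\|\, Q) = \int \phi\!\left(\frac{p(x)}{q(x)}\right) q(x)\, \mathrm{d}x, \]
so that \phi(t) = t \log t gives the Kullback-Leibler divergence, \phi(t) = |t-1| the total variation distance, and \phi(t) = (t-1)^2 the \chi^2-divergence; a bounded likelihood ratio p/q confines the argument of \phi to a bounded interval, which is the setting in which the sharp bound of the note applies.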
The notion of relative measure of information in an abstract information space with a generalized independence law is studied. An axiomatic definition is given, and the form of the dependence on the absolute measures is determined as the solution of a system of functional equations.
General quantum measurements are represented by instruments. In this paper the mathematical formalization is given of the idea that an instrument is a channel which accepts a quantum state as input and produces a probability and an a posteriori state as output. Then, by using mutual entropies on von Neumann algebras and the identification of instruments and channels, many old and new informational inequalities are obtained in a unified manner. Such inequalities involve various quantities which characterize...
The exact range of the joint values of several Rényi entropies is determined. The method is based on topology, with special emphasis on the orientation of the objects studied. As in the case when only two orders of Rényi entropy are studied, the boundary of the range can be parametrized. An explicit formula for a tight upper or lower bound on one order of entropy in terms of another, however, cannot be given.
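For reference, the Rényi entropy of order α of a distribution P = (p_1, ..., p_k) is
\[ H_\alpha(P) = \frac{1}{1-\alpha} \log \sum_{i=1}^{k} p_i^{\alpha}, \qquad \alpha > 0,\ \alpha \neq 1, \]
with the Shannon entropy recovered in the limit \alpha \to 1; the range in question is the set of jointly attainable tuples (H_{\alpha_1}(P), \dots, H_{\alpha_m}(P)) as P varies over all probability distributions.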
This paper establishes a criterion for comparing fuzzy information systems based on the maximization of the expected informational energy of order α and type β, and verifies that it satisfies the most relevant properties that, in our judgement, a comparison criterion should satisfy.
In this paper the Kagan divergence measure is extended in order to establish a measure of the information that a random sample gives about a Dirichlet process as a whole. After studying some of its properties, the expression obtained in sampling when passing from step n to step n+1 is given, and its Bayesian properties are studied. We finish by proving the good behaviour of a stopping rule defined on the basis of the information obtained in sampling when passing from one step to the next.
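As background (this identification is an assumption made here, not stated in the abstract), the Kagan divergence is commonly given as the χ²-type member of the ϕ-divergence family,
\[ D_K(P, Q) = \frac{1}{2} \int \left(1 - \frac{p(x)}{q(x)}\right)^{2} q(x)\, \mathrm{d}x, \]
i.e. the ϕ-divergence with \phi(t) = \tfrac{1}{2}(1-t)^2; the extension described above measures, in this sense, the information that a sample provides about a Dirichlet process as a whole.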