Information-type divergence when the likelihood ratios are bounded

Andrew Rukhin

Applicationes Mathematicae (1997)

  • Volume: 24, Issue: 4, page 415-423
  • ISSN: 1233-7234

Abstract

The so-called ϕ-divergence is an important characteristic describing "dissimilarity" of two probability distributions. Many traditional measures of separation used in mathematical statistics and information theory, some of which are mentioned in the note, correspond to particular choices of this divergence. An upper bound on a ϕ-divergence between two probability distributions is derived when the likelihood ratio is bounded. The usefulness of this sharp bound is illustrated by several examples of familiar ϕ-divergences. An extension of this inequality to ϕ-divergences between a finite number of probability distributions with pairwise bounded likelihood ratios is also given.
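For orientation, the block below recalls the standard definition of the ϕ-divergence and the bound that convexity alone yields once the likelihood ratio is confined to an interval [m, M]; the symbols m and M are illustrative, and whether this elementary chord bound coincides with the sharp bound established in the paper should be checked against the article itself.

For a convex function \( \phi \) with \( \phi(1) = 0 \) and distributions \( P \ll Q \),
\[
  D_\phi(P,Q) \;=\; \int \phi\!\left(\frac{dP}{dQ}\right)\, dQ,
\]
where, for example, \( \phi(t) = t\log t \) gives the Kullback-Leibler divergence, \( \phi(t) = |t-1| \) the L1 (variational) distance, and \( \phi(t) = (\sqrt{t}-1)^2 \) the squared Hellinger distance. If \( m \le dP/dQ \le M \) with \( m \le 1 \le M \), convexity of \( \phi \) on \( [m,M] \) gives the chord inequality
\[
  \phi(t) \;\le\; \frac{(M-t)\,\phi(m) + (t-m)\,\phi(M)}{M-m}, \qquad t \in [m,M],
\]
and integrating at \( t = dP/dQ \) against \( dQ \), using \( \int (dP/dQ)\, dQ = 1 \), yields
\[
  D_\phi(P,Q) \;\le\; \frac{(M-1)\,\phi(m) + (1-m)\,\phi(M)}{M-m},
\]
with equality when \( dP/dQ \) takes only the values \( m \) and \( M \).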

How to cite

Rukhin, Andrew. "Information-type divergence when the likelihood ratios are bounded." Applicationes Mathematicae 24.4 (1997): 415-423. <http://eudml.org/doc/219181>.

@article{Rukhin1997,
abstract = {The so-called ϕ-divergence is an important characteristic describing "dissimilarity" of two probability distributions. Many traditional measures of separation used in mathematical statistics and information theory, some of which are mentioned in the note, correspond to particular choices of this divergence. An upper bound on a ϕ-divergence between two probability distributions is derived when the likelihood ratio is bounded. The usefulness of this sharp bound is illustrated by several examples of familiar ϕ-divergences. An extension of this inequality to ϕ-divergences between a finite number of probability distributions with pairwise bounded likelihood ratios is also given.},
author = {Rukhin, Andrew},
journal = {Applicationes Mathematicae},
keywords = {information measures; multiple decisions; convexity; likelihood ratio},
language = {eng},
number = {4},
pages = {415-423},
title = {Information-type divergence when the likelihood ratios are bounded},
url = {http://eudml.org/doc/219181},
volume = {24},
year = {1997},
}

TY - JOUR
AU - Rukhin, Andrew
TI - Information-type divergence when the likelihood ratios are bounded
JO - Applicationes Mathematicae
PY - 1997
VL - 24
IS - 4
SP - 415
EP - 423
AB - The so-called ϕ-divergence is an important characteristic describing "dissimilarity" of two probability distributions. Many traditional measures of separation used in mathematical statistics and information theory, some of which are mentioned in the note, correspond to particular choices of this divergence. An upper bound on a ϕ-divergence between two probability distributions is derived when the likelihood ratio is bounded. The usefulness of this sharp bound is illustrated by several examples of familiar ϕ-divergences. An extension of this inequality to ϕ-divergences between a finite number of probability distributions with pairwise bounded likelihood ratios is also given.
LA - eng
KW - information measures; multiple decisions; convexity; likelihood ratio
UR - http://eudml.org/doc/219181
ER -
