Information-type divergence when the likelihood ratios are bounded


Andrew Rukhin, Applicationes Mathematicae 24 (1997), 415-423. DOI: 10.4064/am-24-4-415-423

Abstract

The so-called ϕ-divergence is an important characteristic describing "dissimilarity" of two probability distributions. Many traditional measures of separation used in mathematical statistics and information theory, some of which are mentioned in the note, correspond to particular choices of this divergence. An upper bound on a ϕ-divergence between two probability distributions is derived when the likelihood ratio is bounded. The usefulness of this sharp bound is illustrated by several examples of familiar ϕ-divergences. An extension of this inequality to ϕ-divergences between a finite number of probability distributions with pairwise bounded likelihood ratios is also given.
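To make the setting concrete, the sketch below computes the discrete ϕ-divergence D_ϕ(P, Q) = Σ_i q_i ϕ(p_i/q_i) (for convex ϕ with ϕ(1) = 0) for several familiar choices of ϕ, and checks them against the elementary chord bound that convexity gives when the likelihood ratio p/q is confined to [m, M] with m < 1 < M. This chord bound is only an illustration of the kind of inequality involved; it is not claimed to be the sharp bound derived in the paper. The distributions p and q are arbitrary examples.

```python
import numpy as np

def phi_divergence(p, q, phi):
    """D_phi(P, Q) = sum_i q_i * phi(p_i / q_i), for convex phi with phi(1) = 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(q * phi(p / q)))

def chord_bound(m, M, phi):
    """Elementary convexity bound when m <= p/q <= M and m < 1 < M:
    phi(r) lies below the chord of phi over [m, M], and E_Q[p/q] = 1, hence
    D_phi(P, Q) <= ((M - 1) * phi(m) + (1 - m) * phi(M)) / (M - m)."""
    return ((M - 1) * phi(m) + (1 - m) * phi(M)) / (M - m)

# Three familiar phi-divergences, each a particular choice of phi:
kl   = lambda t: t * np.log(t)        # Kullback-Leibler divergence
tv   = lambda t: 0.5 * np.abs(t - 1)  # total variation distance
chi2 = lambda t: (t - 1.0) ** 2       # Pearson chi-squared divergence

# Example distributions (assumed for illustration only)
p = np.array([0.30, 0.50, 0.20])
q = np.array([0.25, 0.45, 0.30])
r = p / q
m, M = r.min(), r.max()               # bounds on the likelihood ratio

for name, phi in [("KL", kl), ("TV", tv), ("chi^2", chi2)]:
    d, b = phi_divergence(p, q, phi), chord_bound(m, M, phi)
    print(f"{name}: divergence = {d:.4f} <= chord bound = {b:.4f}")
```

For the total variation choice of ϕ, the value agrees with the usual formula (1/2) Σ_i |p_i - q_i|, and in each case the divergence stays below the chord bound, as convexity guarantees.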

Authors

  • Andrew Rukhin
