Stationary optimal policies in a class of multichain positive dynamic programs with finite state space and risk-sensitive criterion


Rolando Cavazos-Cadena, Raúl Montes-de-Oca
Applicationes Mathematicae 28 (2001), 93-109
MSC: 93E20, 90C40
DOI: 10.4064/am28-1-7


This work concerns Markov decision processes with finite state space and compact action sets. The decision maker is assumed to have a constant risk-sensitivity coefficient, and a control policy is evaluated via the risk-sensitive expected total-reward criterion associated with nonnegative one-step rewards. Assuming that the optimal value function is finite, under mild continuity and compactness restrictions the following result is established: if the number of ergodic classes induced by a stationary policy depends continuously on the policy employed, then there exists an optimal stationary policy. This extends results obtained by Schäl (1984) for risk-neutral dynamic programming. The proof uses results recently established for unichain systems and analyzes the general multichain case via a reduction to a model with the unichain property.
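With a constant risk-sensitivity coefficient λ, the risk-sensitive total-reward criterion evaluates a policy through the expected exponential of the accumulated rewards. As a rough illustration of this criterion (a toy numerical example, not taken from the paper: the coefficient `lam`, the reward vector `r`, and the transition matrix `P` are all made up), the following Python sketch computes the value of one fixed stationary policy on a three-state chain with an absorbing zero-reward state by iterating the corresponding fixed-point equation V(x) = exp(λ r(x)) Σ_y P(x, y) V(y).

```python
import numpy as np

# Toy model (hypothetical numbers): one stationary policy, three states,
# state 2 absorbing with zero reward. The risk-sensitive value is
#   J(x) = E_x[ exp(lam * sum_t r(x_t)) ],
# which satisfies V(x) = exp(lam * r(x)) * sum_y P(x, y) V(y),
# with boundary condition V = 1 at the absorbing state.

lam = 0.1                                # constant risk-sensitivity coefficient
r = np.array([1.0, 2.0, 0.0])            # nonnegative one-step rewards
P = np.array([[0.5, 0.3, 0.2],           # transition matrix under the policy
              [0.1, 0.4, 0.5],
              [0.0, 0.0, 1.0]])          # state 2 is absorbing

V = np.ones(3)                           # start from the boundary value
for _ in range(10_000):
    V_new = np.exp(lam * r) * (P @ V)    # one step of the fixed-point map
    V_new[2] = 1.0                       # enforce the boundary condition
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new

print(V)   # finite risk-sensitive value of the policy at each state
```

Since the rewards are nonnegative and λ > 0 here, each value is at least 1; finiteness of the limit corresponds to the paper's standing assumption that the optimal value function is finite (in this toy chain it holds because the exponentially weighted transient submatrix has spectral radius below one).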


  • Rolando Cavazos-Cadena
    Departamento de Estadística y Cálculo
    Universidad Autónoma Agraria Antonio Narro
    Buenavista, Saltillo, COAH 25315, Mexico
  • Raúl Montes-de-Oca
    Departamento de Matemáticas
    Universidad Autónoma Metropolitana
    Campus Iztapalapa
    Avenida Michoacán y La Purísima s/n
    Col. Vicentina
    México D.F. 09340, Mexico

