Markov decision processes under ambiguity
We consider statistical Markov decision processes in which the decision maker is risk averse with respect to model ambiguity. The ambiguity is described by an unknown parameter that influences both the transition law and the cost functions. Risk aversion is measured either by the entropic risk measure or by the Average Value at Risk. We show how to solve problems of this kind using a general minimax theorem. Under some continuity and compactness assumptions we prove the existence of an optimal (deterministic) policy and discuss its computation. We illustrate our results with an example from statistical decision theory.
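For reference, the two risk measures named above have standard definitions; for a cost variable X (sign and normalization conventions vary across the literature, so the paper's own conventions may differ), they are commonly written as

\[
\rho_\gamma(X) = \frac{1}{\gamma} \log \mathbb{E}\left[ e^{\gamma X} \right], \qquad \gamma > 0,
\]

for the entropic risk measure, and

\[
\mathrm{AVaR}_\alpha(X) = \inf_{z \in \mathbb{R}} \left\{ z + \frac{1}{\alpha} \, \mathbb{E}\left[ (X - z)^+ \right] \right\}, \qquad \alpha \in (0,1),
\]

for the Average Value at Risk, the latter in its Rockafellar-Uryasev variational form. In the ambiguity setting sketched in the abstract, these measures are applied over the unknown parameter, which leads naturally to the minimax formulation mentioned there.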