
Workshop on Recent problems of stochastic control theory

27.01.2019 - 02.02.2019 | Warsaw

Abstracts

An approach to infinite horizon risk-sensitive control of diffusions via the study of principal eigenvalues of elliptic operators

Ari Arapostathis, University of Texas at Austin

We consider the infinite horizon risk-sensitive problem for nondegenerate diffusions with a compact action space, controlled through the drift. We show that certain monotonicity properties of the principal eigenvalue of the operator with respect to the potential function fully characterize the ergodic properties of the associated ground state diffusion and the uniqueness of the ground state. This allows us to extend various results in the literature for a class of viscous Hamilton-Jacobi equations of ergodic type with smooth coefficients to equations with measurable drift and potential. These results facilitate the study of the infinite horizon risk-sensitive control problem. First, imposing only a structural assumption on the running cost function, namely near-monotonicity, we show that there always exists a solution to the risk-sensitive Hamilton--Jacobi--Bellman (HJB) equation, and that any minimizer in the Hamiltonian is optimal in the class of stationary Markov controls. Under the additional hypothesis that the coefficients of the diffusion are bounded and satisfy a condition that limits (even though it still allows) transient behavior, we show that any minimizer in the Hamiltonian is optimal in the class of all admissible controls. In addition, we present a sufficient condition under which the solution of the HJB equation is unique (up to a multiplicative constant), and establish the usual verification result. We also present some variational results. The talk is based on joint work with Anup Biswas and Subhamay Saha.
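Schematically, in standard notation (not taken from the talk), the risk-sensitive value and the associated multiplicative HJB/eigenvalue equation take the form

\[
  \lambda^{*} \;=\; \inf_{U}\ \limsup_{T\to\infty}\ \frac{1}{T}\,
      \log \mathbb{E}^{U}\!\left[\exp\!\Big(\int_{0}^{T} c(X_t,U_t)\,dt\Big)\right],
  \qquad
  \min_{u}\ \big[\mathcal{L}^{u}\Psi(x) + c(x,u)\,\Psi(x)\big] \;=\; \lambda^{*}\,\Psi(x),
  \quad \Psi>0,
\]

where $\mathcal{L}^u$ is the controlled generator of the diffusion, $c$ is the running cost, $\Psi$ is the ground state, and $\lambda^*$ is the principal eigenvalue discussed in the abstract.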

Portfolio Optimization in Fractional and Rough Heston Models

Nicole Bauerle, Karlsruhe Institute of Technology

We consider a fractional version of the Heston volatility model which is inspired by [1]. Within this model we treat portfolio optimization problems for power utility functions. Using a suitable representation of the fractional part, followed by a reasonable approximation, we show that it is possible to cast the problem into the classical stochastic control framework. This approach is generic for fractional processes. We derive explicit solutions and obtain, as a by-product, the Laplace transform of the integrated volatility. In order to get rid of some undesirable features, we introduce a new model for the rough path scenario which is based on the Marchaud fractional derivative as defined in [2]. We provide a numerical study to underline our results. The talk is based on joint work with S. Desmettre.

[1] Guennoun, H., A. Jacquier, P. Roome, F. Shi, Asymptotic Behavior of the Fractional Heston Model. SIAM Journal on Financial Mathematics 9(3) (2018), 1017--1045.

[2] Samko, S.G., A.A. Kilbas, O.I. Marichev, Fractional integrals and derivatives: Theory and applications. Gordon and Breach Science Publishers (1993).

Tractability of continuous time optimal stopping problems

Denis Belomestny, Duisburg-Essen University

In this talk we show that in a multidimensional It\^o diffusion setting, continuous time optimal stopping problems with finite time horizon may be approximated with arbitrary accuracy without the curse of dimensionality. In this sense the multidimensional optimal stopping problem in continuous time over a finite time horizon may be considered to be tractable.

Relative value iteration for ergodic control
 
Vivek Borkar, IIT Bombay
 

This talk will begin with an introduction to relative value iteration for average cost, or `ergodic', control of Markov chains on a discrete state space, and then report some recent work, joint with Prof. Ari Arapostathis of the University of Texas at Austin, on its extension to controlled Markov chains in $\mathbb{R}^d$.
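As a discrete-state illustration, here is a minimal sketch of relative value iteration for a finite average-cost MDP; the two-state, two-action transition matrices and costs below are invented for the example and are not from the talk:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: P[a] is the transition matrix
# under action a, c[a] the per-stage cost vector.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.7, 0.3]])]
c = [np.array([1.0, 3.0]), np.array([2.0, 0.5])]

def relative_value_iteration(P, c, ref_state=0, tol=1e-10, max_iter=10_000):
    """Iterate V <- T V - (T V)(ref_state), where T is the Bellman
    operator; the subtracted offset keeps V bounded and converges to
    the optimal average cost for unichain, aperiodic models."""
    n = len(c[0])
    V = np.zeros(n)
    for _ in range(max_iter):
        Q = np.array([c[a] + P[a] @ V for a in range(len(P))])  # (A, n)
        TV = Q.min(axis=0)
        rho = TV[ref_state]            # current estimate of the average cost
        V_new = TV - rho               # "relative" values, pinned at ref_state
        if np.max(np.abs(V_new - V)) < tol:
            return rho, V_new, Q.argmin(axis=0)
        V = V_new
    return rho, V, Q.argmin(axis=0)

rho, V, policy = relative_value_iteration(P, c)
print(rho, V, policy)
```

At convergence the pair $(\rho, V)$ satisfies the average-cost optimality equation $\rho + V = \min_a (c_a + P_a V)$, and `policy` records a minimizing action per state.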

On time-inconsistent stopping problems and mixed strategy stopping times

Soeren Christensen, Univ. of Hamburg

A game-theoretic framework for time-inconsistent stopping problems is presented. It turns out that for some classes of such problems, a subgame perfect Nash equilibrium in pure strategies can be found, e.g., for the mean-variance stopping problem or for selling strategy problems under exponential utility and endogenous habit formation. For other problems, such as the variance stopping problem, such equilibria do not exist. Therefore, we introduce and study a concept of mixed strategy equilibria that allows the agents in the game to choose the intensity function of a Cox process.

The expected total reward criterion for Markov decision processes under constraints

Francois Dufour, Université Bordeaux

In this talk, we study discrete-time Markov decision processes (MDPs) under constraints, with Borel state and action spaces, where all the performance functions have the same form, namely the expected total reward (ETR) criterion. One of our objectives is to propose a convex programming formulation for this type of MDP. It will be shown that the values of the constrained control problem and of the associated convex program coincide, and that if there exists an optimal solution to the convex program, then there exists a stationary randomized policy which is optimal for the MDP. We consider standard hypotheses, such as the so-called continuity-compactness conditions and a Slater-type condition. An example illustrating our results will be presented.

Solvable Stochastic Differential Games with Gauss-Volterra Noise

Tyrone E. Duncan, University of Kansas

Some stochastic differential games in finite and infinite dimensional spaces are formulated and explicitly solved where the stochastic equations are linear with a multiplicative Gauss-Volterra noise and the games have a quadratic payoff.  Gauss-Volterra noise processes are a family of Gaussian processes that include fractional Brownian motions for the Hurst parameter  $H \in(\frac{1}{2},1)$, Liouville fractional Brownian motions for $H \in (\frac{1}{2},1)$ and some  multifractional Brownian motions.  These processes have a long range dependence which is important for use in modeling some physical phenomena.  An explicit Riccati equation is obtained for the optimal strategies that differs from the well-known Riccati equation for a linear-quadratic game.  The time horizons for the payoffs can be finite or infinite.  The infinite time horizon payoff is the integral of a discounted quadratic function whose discounting necessarily has a different rate than the case for Brownian motion.

Zero-sum finite games with risk-sensitive average criterion: average cost limits

Daniel Hernandez, Centro de Investigación en Matemáticas (CIMAT)

In this talk we are concerned with zero-sum stochastic games with finite state space and compact action sets. The game is driven by two players, and it is assumed that player 1 has a nonnull and constant risk-sensitivity coefficient, so that a random cost is assessed via an exponential disutility function. We shall show that, as the discount factor increases to 1, an appropriate normalization of the risk-sensitive discounted value function converges to the risk-sensitive average value function.

Markov decision processes with quasi-hyperbolic discounting

Anna Jaskiewicz, Wroclaw Univ. of Technology

The standard theory of Markov control processes, which assumes the use of a constant discount rate, contradicts strong empirical evidence that people apply larger discount rates in the short run than in the long run. Such behavior exhibits time inconsistency, a situation in which the preferences of the decision maker may change over time. To circumvent the serious problems associated with time inconsistency, a dynamic game solution is proposed. In this view, an individual is modelled as a sequence of autonomous temporal selves, playing a dynamic game between one's current self and one's future selves. We show that for a model on a Borel state space there exists a solution in stationary strategies, and that under additional assumptions this solution can be replaced by a deterministic stationary strategy. Moreover, in the case of a countable state space, there exists a solution in deterministic Markov strategies.
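For reference, quasi-hyperbolic (beta-delta) discounting evaluates a reward stream $(r_t)_{t\ge 0}$ as (a standard formulation, not specific to the talk)

\[
  V \;=\; r_0 + \beta \sum_{t=1}^{\infty} \delta^{t} r_t,
  \qquad 0<\beta\le 1,\quad 0<\delta<1,
\]

so the discount factor between today and tomorrow, $\beta\delta$, is smaller than the factor $\delta$ applied between any two future periods; it is $\beta<1$ that generates the time inconsistency described above.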

Reflected BSDEs approach to generalized Dynkin games

Tomasz Klimsiak, University of Toruń

We introduce a new class of  reflected backward stochastic differential equations with two c\`adl\`ag barriers, which need not satisfy any separation conditions. For that reason, in general, the solutions are not semimartingales. We prove  existence,  uniqueness and approximation results for solutions of equations defined on general filtered probability spaces. Applications to Dynkin games and variational inequalities, both stationary and evolutionary, will be given.

Filtering and Parameter Estimation for Linear SPDEs driven by Gauss-Volterra Processes

Bohdan Maslowski, Charles University Prague

Kalman-Bucy type filters and some methods of parameter estimation are studied in the case when signals are given as solutions to linear SPDEs whose noise terms are Gauss-Volterra processes (in particular, fractional Brownian motions). Hilbert space-valued integral equations are derived for the optimal estimate and the covariance of the error. Minimum contrast estimators of the parameter in the drift are derived and shown to be strongly consistent and, under suitable conditions, asymptotically normal. Furthermore, Berry-Esseen type bounds on the speed of convergence to the normal law in the total variation and Wasserstein metrics are established. The latter result has been proved for equations driven by fractional Brownian motion with Hurst parameter H < 3/4; for larger H, asymptotic normality in general does not hold. Maximum likelihood type results are also discussed. The talk is based on joint papers with Pavel Kriz and Vit Kubelka.

Application of Malliavin calculus  to exact and approximate  option pricing under stochastic volatility

Yuliya Mishura, University of Kiev

We consider models of financial markets with stochastic volatility defined as a functional of an Ornstein-Uhlenbeck process or a Cox-Ingersoll-Ross process. We study the question of the exact price of a European option. The form of the density function of the random variable expressing the average of the volatility over the time to maturity is established using Malliavin calculus. This result allows us to calculate the price of the option with respect to the minimal martingale measure. Fractional models will be considered as well. Approximations and their rates of convergence are given.

Non linear optimal stopping problem and Reflected BSDE in the predictable setting

Youssef Ouknine, Mohammed VI Polytechnic University, Marrakech, Morocco

In the first part of this paper, we study RBSDEs in the case where the filtration is not quasi-left continuous and the lower obstacle is given by a predictable process. We prove existence and uniqueness by using some results of optimal stopping theory in the predictable setting and some tools of the general theory of processes, such as the Mertens decomposition of predictable strong supermartingales. In the second part, we introduce an optimal stopping problem indexed by predictable stopping times with a nonlinear predictable $g$-expectation induced by an appropriate BSDE. We establish some useful properties of $\mathcal{E}^{p,g}$-supermartingales. Moreover, we characterize the predictable value function in terms of the first component of the RBSDEs studied in the first part. This is joint work with Siham Bouhadou.

[1] Bouhadou, S., Ouknine, Y. (2018): Non linear optimal stopping problem and Reflected BSDE in the predictable setting, arXiv:1811.00695, submitted.

[2] Bouhadou, S., Ouknine, Y. (2018): Optimal stopping problem in predictable general framework, arXiv:1812.01759, submitted.

[3] El Karoui, N. (1978): Arrêt optimal prévisible. In: Measure Theory Applications to Stochastic Analysis, Lecture Notes in Mathematics, vol. 695, Springer, Berlin, Heidelberg.

[4] Grigorova, M., Imkeller, P., Offen, E., Ouknine, Y., Quenez, M.-C. (2017): Reflected BSDEs when the obstacle is not right-continuous and optimal stopping. Annals of Applied Probability 27(5), 3153--3188.

[5] Grigorova, M., Imkeller, P., Ouknine, Y., Quenez, M.-C. (2016): Optimal stopping with f-expectations: the irregular case, arXiv:1611.09179, submitted.

Value of a Dynkin game with asymmetric information

Jan Palczewski, Univ. of Leeds

We study the value of a zero-sum stopping game in which the terminal payoff function depends on the underlying process and on an additional randomness (with finitely many states) which is known to one player but unknown to the other. Such asymmetry of information arises naturally in insider trading when one of the counterparties knows an announcement before it is publicly released, e.g., central bank's interest rates decision or company earnings/business plans. In the context of game options this splits the pricing problem into the phase before announcement (asymmetric information) and after announcement (full information); the value of the latter exists and forms the terminal payoff of the asymmetric phase.

The above game does not have a value if both players use pure stopping times, as the informed player's actions would reveal too much of his excess knowledge. The informed player manages the trade-off between releasing information and stopping optimally by employing randomised stopping times. We reformulate the stopping game as a zero-sum game between a stopper (the uninformed player) and a singular controller (the informed player). We prove existence of the value of the latter game for a large class of underlying strong Markov processes, including multivariate diffusions and Feller processes. The main tools are approximations by smooth singular controls and by discrete-time games.

Ergodic Control of a Linear Equation with Rosenblatt Noise

Bozenna Pasik-Duncan, University of Kansas

An ergodic control problem is formulated and explicitly solved for a scalar linear equation with an additive Rosenblatt process noise.  Rosenblatt processes are a family of stochastic processes that are not Gaussian, have continuous sample paths and can be represented as  double stochastic integrals of Brownian motion with singular kernels.  Some stochastic calculus has been developed for these processes which is used to determine an optimal control for an ergodic quadratic cost functional.  Rosenblatt processes have a long range dependence analogous to the family of fractional Brownian motions with the Hurst parameter in $(\frac{1}{2},1)$, but the Rosenblatt processes are not Gaussian.  Non-Gaussian processes have been empirically identified in some physical control systems.

This is joint work with P. Coupek, T. E. Duncan and B. Maslowski.

 

Optimal stopping without Snell envelopes

Teemu Pennanen, King's College London

We study the existence of optimal stopping times via elementary functional analytic arguments. The problem is first relaxed into a convex optimization problem over a closed convex subset of the unit ball of the dual of a Banach space. The existence of optimal solutions then follows from the Banach--Alaoglu compactness theorem and the Krein--Milman theorem on extreme points of convex sets. This approach seems to give the most general existence results known to date. Applying convex duality to the relaxed problem gives a dual problem and optimality conditions in terms of martingales that dominate the reward process.

 

Recent advances about the stochastic gradient Langevin dynamics

Miklos Rasonyi, Renyi Institute HAS

The stochastic gradient Langevin dynamics is a recursive data-based sampling algorithm that is often used for parameter optimization in machine learning applications. Theoretical guarantees have so far been limited to the case of i.i.d. data. Here we present results where only a mixing condition on the data sequence is required. We also present improvements on the rate of convergence.

Based on joint work with Mathias Barkhagen, Ngoc Huy Chau, Eric Moulines, Sotirios Sabanis and Ying Zhang.
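A minimal sketch of the SGLD recursion on a toy streaming least-squares problem; all problem data below are invented, and the sketch uses i.i.d. samples for simplicity, whereas the results in the talk only require a mixing condition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical streaming problem: observations y_t = w_true . x_t + noise.
# SGLD samples from a Gibbs-type measure concentrated (for large beta)
# around the minimiser of the expected squared loss.
w_true = np.array([1.0, -2.0])

def sgld(n_steps=20_000, step=1e-3, beta=1e4):
    w = np.zeros(2)
    for _ in range(n_steps):
        x = rng.normal(size=2)                  # one data point per step
        y = w_true @ x + 0.1 * rng.normal()
        grad = 2.0 * (w @ x - y) * x            # stochastic gradient of the loss
        # Langevin update: gradient step plus Gaussian exploration noise
        w = w - step * grad + np.sqrt(2.0 * step / beta) * rng.normal(size=2)
    return w

w = sgld()
print(w)  # should land near w_true for small step and large beta
```

The injected noise scale $\sqrt{2\,\text{step}/\beta}$ is what distinguishes SGLD from plain stochastic gradient descent: for finite $\beta$ the iterates explore, rather than converge to, the minimiser.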

On ergodic impulse control with constraint: the locally compact case

Maurice Robin

Impulse control problems for a Markov-Feller process with long-term average (or ergodic) cost are considered when controls are allowed only when a signal arrives; these are referred to as control problems with constraint. Previous results by J.L. Menaldi and the author are extended to the situation where the state space of the Markov process is locally compact, under a non-uniform ergodicity condition. The existence of an optimal control based on the exit times of a continuation region is obtained under specific assumptions satisfied by a large class of problems.

Bellman equations for scalar linear convex stochastic control problems

Łukasz Stettner, IMPAN

A family of discrete time stochastic control problems with linear dynamics and convex cost functionals is studied. For the case of a scalar control, explicit solutions of suitable Bellman equations are described for additive convex cost functionals (finite time horizon, discounted, and average cost per unit time) as well as for multiplicative, i.e. exponential, convex functionals (finite time horizon, discounted, and long run average). In the particular case of a linear quadratic control problem, a general continuous time problem is also described. The form of the optimal strategies for each of these control problems is characterized. The talk is based on a joint paper with T. Duncan and B. Pasik-Duncan.
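For the scalar linear quadratic special case mentioned above, the finite-horizon Bellman equation reduces to the textbook Riccati recursion; in hypothetical notation, with dynamics $x_{k+1} = a x_k + b u_k + w_k$ and cost $\sum_k (q x_k^2 + r u_k^2)$,

\[
  K_k \;=\; q + a^2 K_{k+1} - \frac{a^2 b^2 K_{k+1}^2}{r + b^2 K_{k+1}},
  \qquad
  u_k^{*} \;=\; -\,\frac{a b\, K_{k+1}}{r + b^2 K_{k+1}}\; x_k,
\]

where the value function is quadratic, $V_k(x) = K_k x^2 + \text{const}$, and the recursion runs backwards from the terminal weight $K_N$.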

Coupling distance between Levy measures and uniqueness of viscosity solutions of non-local HJB equations

Andrzej Swiech, Georgia Tech

We will discuss a new approach to the proof of the comparison principle for viscosity solutions of non-local Hamilton-Jacobi-Bellman equations. The comparison principle for such equations is still a largely open problem when the Levy measures appearing in the equations depend on the spatial variable. Our approach is based on an optimal-transport distance between Levy measures, and it allows us to prove comparison results for a significantly larger class of equations. This is joint work with Nestor Guillen and Chenchen Mou.

Bayesian optimal control for a non-autonomous stochastic discrete time system

Krzysztof Szajowski, Wroclaw Univ. of Technology

The main objective is to develop Bayesian optimal control for a class of non-autonomous linear stochastic discrete time systems with a random control horizon. Taking into consideration that the disturbances in the system are given by a random vector with components belonging to an exponential family with a natural parameter, we determine the Bayes control as the solution of a singular linear system. In addition, we extend these results to generalized linear stochastic systems of difference equations. This is a joint paper with Ioannis K. Dassios from the University of Limerick.

Maximum Principle for Switching Diffusions with Applications to Mean-Field Controls

George Yin, Wayne State University

In this talk, we study stochastic maximum principles for switching diffusions. The motivation stems from mean-field games when the systems are given by diffusions with random switching. Specifically, LQG with Markovian switching and mean-field interactions will be treated. This is a joint work with Son Luu Nguyen and Dung Tien Nguyen.

Equilibrium Strategies of Time-Inconsistent Optimal Control Problems

Jiongmin Yong, University of Central Florida

For a continuous-time (stochastic) optimal control problem, if an optimal control selected at some time stays optimal thereafter, the optimal control is said to be time-consistent. We know that, in reality, such a situation hardly exists: more often than not, people regret decisions made previously. Among possibly many others, there are two major causes of time-inconsistency: (i) people's time-preferences and (ii) people's risk-preferences. In this talk, we will present some recent results we have obtained.

 

Construction and properties of Fractional Cox-Ingersoll-Ross Process driven by fractional Brownian motion with H>1/2

Anton Yurchenko-Tytarenko, Univ. of Kiev

The classical Cox-Ingersoll-Ross process is a widespread model for interest rate dynamics as well as for stochastic volatility which, however, does not reflect the "memory phenomenon" observed in the market. One possible way to overcome this problem is to "replace" the standard Wiener process by a fractional Brownian motion. In this talk, we present a possible approach to such a modification in the case H > 1/2 and review several properties of the corresponding process. Two situations are considered, namely zero and strictly positive "mean" parameter values. As an auxiliary result, an explicit form of the covariance function of the fractional Ornstein-Uhlenbeck process is obtained.
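As a numerical illustration (a naive sketch, not the construction presented in the talk), one can simulate fractional Brownian motion exactly on a grid via a Cholesky factorisation of its covariance and feed the increments into an Euler scheme for a CIR-type equation; all parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def fbm_increments(n, T, H):
    """Exact simulation of fBm on a grid via Cholesky of the covariance
    R(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.linspace(T / n, T, n)
    S, U = np.meshgrid(t, t)
    R = 0.5 * (S**(2 * H) + U**(2 * H) - np.abs(S - U)**(2 * H))
    L = np.linalg.cholesky(R)
    B = L @ rng.normal(size=n)          # fBm values at the grid points
    return np.diff(np.concatenate(([0.0], B)))

def fractional_cir(n=500, T=1.0, H=0.7, a=1.0, b=0.5, sigma=0.2, x0=0.5):
    """Euler scheme for dX = a(b - X) dt + sigma sqrt(X) dB^H.
    For H > 1/2 the noise integral can be read pathwise (Young sense),
    so a naive Euler discretisation is a reasonable first sketch."""
    dB = fbm_increments(n, T, H)
    dt = T / n
    X = np.empty(n + 1)
    X[0] = x0
    for k in range(n):
        X[k + 1] = (X[k] + a * (b - X[k]) * dt
                    + sigma * np.sqrt(max(X[k], 0.0)) * dB[k])
    return X

X = fractional_cir()
```

The Cholesky method is exact but costs O(n^3); for long paths one would switch to a circulant-embedding (FFT) simulation of the fBm increments.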

Switching Between A Pair of Stocks: An Optimal Trading Rule

Qing Zhang, University of Georgia

This talk is about a stock trading rule involving two stocks. The trader may have a long position in either stock or in cash, and may switch between them at any time. Her objective is to trade over time so as to maximize an expected return. We reduce the problem to an optimal trading control problem under a geometric Brownian motion model with regime switching, using a two-state Markov chain to capture the general market modes. In particular, a single market cycle consisting of a bull market followed by a bear market is considered, and a fixed percentage cost is imposed on each transaction. We focus on simple threshold-type policies, study all possible combinations, and establish algebraic equations characterizing the threshold levels. Sufficient conditions will be provided that guarantee the optimality of these policies. Finally, some numerical examples are provided to illustrate our results.
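To make the flavour of a threshold rule concrete, here is a Monte Carlo sketch under a hypothetical two-regime geometric Brownian motion; for brevity it switches between a single stock and cash rather than between two stocks, and the thresholds are fixed rather than optimised as in the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters: bull/bear drifts, regime switching rates,
# volatility, proportional cost rho, daily step, horizon in years.
mu = (0.20, -0.15)
lam = (0.5, 1.0)
sigma, rho, dt, T = 0.25, 0.001, 1.0 / 252, 10.0

def threshold_return(buy_lvl=0.97, sell_lvl=1.05):
    """Log-return of the rule: buy after the price falls to buy_lvl times
    the last trade price, sell once it reaches sell_lvl times the last
    trade price; a proportional cost rho is paid on each transaction."""
    n = int(T / dt)
    s, regime = 1.0, 0
    holding, last_trade, ret = False, 1.0, 0.0
    for _ in range(n):
        if rng.random() < lam[regime] * dt:      # regime switch
            regime = 1 - regime
        s *= np.exp((mu[regime] - 0.5 * sigma**2) * dt
                    + sigma * np.sqrt(dt) * rng.normal())
        if not holding and s <= buy_lvl * last_trade:
            holding, last_trade = True, s        # buy the stock
            ret += np.log(1.0 - rho)
        elif holding and s >= sell_lvl * last_trade:
            ret += np.log(s / last_trade) + np.log(1.0 - rho)
            holding, last_trade = False, s       # back to cash
    if holding:                                  # liquidate at the horizon
        ret += np.log(s / last_trade) + np.log(1.0 - rho)
    return ret

mean_log_return = np.mean([threshold_return() for _ in range(100)])
```

Sweeping `buy_lvl` and `sell_lvl` over a grid and comparing the Monte Carlo means is a quick, if crude, way to visualise why threshold levels, rather than the full control path, are the natural decision variables here.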

 

 

 

 

 
