PROGRAMME FOR RESEARCH ACTIVITIES IN 2004-2007

Arbitrage and Stochastic Control (Ł. Stettner, J. Zabczyk)


1. Large markets
2. Equilibria and arbitrage in large markets
3. Stochastic control under uncertainty
4. Markets with transaction costs
5. Information flows in complex systems

Dependent Fluctuations (A. Weron, R. Weron, E. Ferenstein)


6. Long range dependence
7. Non-Gaussian market models

Risk Evaluation (P. Jaworski, W. Ogryczak, A. Weron)


8. Multidimensional analysis - copulas
9. Measures of risk
10. Ruin probability and operational risk
11. Fairness and equity in systems design and operation

Financial Modelling (J. Jakubowski, A. Palczewski, M. Rutkowski)


12. Stochastic volatility models and calibration
13. Term structure models
14. Markets with default risk

Innovative Products (A. Palczewski, A. Weron, R. Weron)


15. Evaluation of investments and real options
16. New commodity markets



1. LARGE MARKETS

One of the most striking features of today's global markets is the diversity of investment possibilities: one can be present in several economies at the same time. This provides unprecedented opportunities for risk diversification, i.e., for reducing risk by trading in a large number of assets and investing in various enterprises. The mathematical theory of large financial markets furnishes a convenient framework for the study of such situations. In parallel to the approach of a statistician who analyzes large samples using asymptotic distributions for infinitely many observations, large financial markets are conveniently modelled as an infinite sequence of asset price processes. By investigating their "asymptotic" behaviour, one hopes to find general qualitative and quantitative results, which can subsequently be tested empirically. Pioneering research in this area is the paper by Ross (1976), in which the author showed an approximately linear relationship between asset returns and the correlation with the market portfolio, provided that the large market does not contain arbitrage possibilities (in a specified sense).
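
To illustrate the diversification effect that motivates the study of large markets, the following sketch (a hypothetical one-factor toy model, not part of the programme itself) simulates portfolios of growing size and shows the idiosyncratic variance of an equally weighted portfolio decaying roughly like 1/n.

```python
import numpy as np

rng = np.random.default_rng(0)

def portfolio_residual_variance(n_assets, n_scenarios=20_000):
    """Idiosyncratic variance of an equally weighted portfolio of n_assets assets
    in a toy one-factor model r_i = beta_i * f + eps_i."""
    betas = rng.uniform(0.5, 1.5, size=n_assets)
    factor = rng.normal(0.0, 0.04, size=n_scenarios)            # common (market) factor
    eps = rng.normal(0.0, 0.10, size=(n_scenarios, n_assets))   # idiosyncratic noise
    portfolio = (factor[:, None] * betas + eps).mean(axis=1)    # equal weights 1/n
    systematic = factor * betas.mean()                          # systematic part
    return np.var(portfolio - systematic)

for n in (10, 100, 1000):
    print(n, portfolio_residual_variance(n))   # decays roughly like 0.10**2 / n
```

In the large-market limit only the systematic (factor) risk survives, which is the starting point of Ross's arbitrage pricing argument.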

In more recent studies, initiated by Kabanov and Kramkov (1994), the effort has concentrated on the determination of a "right" no-arbitrage concept. On the one hand, the analysis of finite markets showed that the absence of arbitrage is related to the existence of certain linear pricing functionals (martingale measures), which can be used to price derivative products. On the other hand, no arbitrage may also imply explicit relationships between the market parameters, as Ross (1976) demonstrated. In related papers, a general semimartingale setting prevails, that is, an infinite family of continuous-time price processes is investigated. One considers the set of value processes resulting from all possible trading strategies on finite markets and applies the (mainly functional-analytic) techniques which proved to be an efficient tool in the case of finitely many assets. In the next step, one "passes to the limit" in an appropriate manner. It would definitely be desirable to develop a more direct treatment of these value processes, and to integrate this theory into the framework of infinite-dimensional stochastic analysis. In Rásonyi (2002), discrete-time financial models are treated, but the result obtained is not yet fully satisfactory. In a one-period model, there is a striking connection with orthogonal series, which ought to be further exploited. The most natural issue that arises here is the determination of necessary and sufficient conditions for the existence of an equivalent martingale measure in terms of no-arbitrage. It should be stressed that this important and challenging problem remains open. Another important direction of research is the study of specific relationships between market parameters for judiciously chosen classes of models. It is natural to expect that the parameters of a market in equilibrium must satisfy certain criteria. It is also possible to examine the interest rate term structure within the framework of the theory of large financial markets. Then the above-mentioned criteria could serve to gauge the potential arbitrage opportunities that appear within the setup of term structure models.

References
[1] Ross, S.A.: The arbitrage theory of asset pricing. J. Econ. Theory 13 (1976), 341-360.
[2] Kabanov, Yu.M., Kramkov, D.O.: Asymptotic arbitrage and contiguity of martingale measure for large financial markets. Probab. Theory Appl. 39 (1994), 222-229.
[3] Rásonyi, M.: Equivalent martingale measures for large financial markets in discrete time. Preprint, 2002.



2. EQUILIBRIA AND ARBITRAGE IN LARGE MARKETS

The notion of equilibrium is of fundamental importance for the study of large markets. It is well known that the concept of an equilibrium is closely connected with the existence of so-called invariant measures for the system. In the analysis of complex systems, one is typically interested not only in finding the equilibrium state, but also in the equilibrium properties of functionals defined on the states of the system. On the other hand, mathematical finance offers a different point of view on equilibrium. In the financial setup, one deals with a number of observables (prices) and adjoint processes (investment strategies). It appears that a certain regularity of the adjoint processes is related to the existence of invariant measures for the observables. This behaviour is called the absence of arbitrage. Hence, arbitrage is the mechanism which pushes the system out of equilibrium (Duffie and Huang 1986). This situation is well understood in simple settings, such as a discrete state space and a finite time horizon.

For more complicated models, i.e., models with a continuum state space and/or an infinite time horizon, we encounter several difficulties. First, the driving force which pushes the system out of equilibrium has to be defined more precisely. This leads to new notions, such as a free lunch and a free lunch with vanishing risk (for an overview of these problems see Jouini 2001). Second, the set of adjoint processes must be reduced to a more restricted class (Xia and Yan 2001). These problems seem to be well understood for systems which are, in a certain sense, perfect (or ideal). Real-world systems are imperfect, and their imperfections can be of various types. In most real-life situations, we do not know how to formulate the "no-arbitrage" condition which would guarantee that the system reaches an equilibrium state. In addition, it is not known how to evaluate interesting properties of the system (in the language of mathematical finance: it is not known how to evaluate the prices of contingent claims). All these difficulties are encountered in the study of non-Gaussian random systems. Such problems are of great practical importance, as randomness of a Gaussian type, which can easily be treated mathematically, is far from what we actually observe in real-world complex systems. In many real data sets we observe a scaling effect which is clearly different from the scaling property associated with Gaussian models. This shows that a scale-independent formulation of "no-arbitrage" and equilibria for complex systems is required. It also leads to new insight into other important scaling-related effects, such as criticality, phase transitions and the collective (macroscopic) behaviour of large systems. The following problems will be analyzed:

  • Analysis of non-Gaussian random systems.
  • Relation between equilibria and absence of arbitrage for non-Gaussian models.
  • The existence of fair price in non-Gaussian financial systems.
  • Asymptotic analysis of time evolution of random systems and the evaluation of characteristic functionals.

References
[1] Duffie, D., Huang, C.F.: Multiperiod security market with differential information. J. Math. Econ. 15 (1986), 283-303.
[2] Jouini, E.: Arbitrage and control problems in finance. J. Math. Econ. 35 (2001), 167-183.
[3] Xia, J., Yan, J.A.: Clarifying some basic concepts and results in arbitrage pricing theory. Preprint, 2001.



3. STOCHASTIC CONTROL UNDER UNCERTAINTY

A variety of applied control problems consist in the maximization of multiobjective functionals. Such multidimensional control problems are hard to solve. Therefore they are usually replaced by a single-criterion problem with a number of constraints, which in turn is also not easy to solve, although one can try to use a Lagrange multiplier approach. A number of control problems consider the maximization of a possible income (profit) under a control of risk, which is required not to be too large. There is also the independent problem of how to measure risk. By risk we commonly mean fluctuations of the system above or below the expected value, and the variance or semivariance is considered as a measure of risk. The risk-sensitive control methodology suggests including in one cost functional both the expected value and the variance (semivariance), or even higher moments. For this purpose the cost functional takes the form of the logarithm of the expected value of the exponential of the cost multiplied by a parameter, called the risk sensitivity (risk aversion) parameter, which serves as a weight under which higher moments of the cost enter the cost functional. The study of risk-sensitive control problems requires solving quite difficult Bellman equations.
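
To make the description of the cost functional concrete, a minimal generic formulation (the notation below is chosen only for illustration) and its small-parameter expansion read

```latex
J_\theta(C) \;=\; \frac{1}{\theta}\,\log \mathbb{E}\bigl[e^{\theta C}\bigr]
            \;=\; \mathbb{E}[C] \;+\; \frac{\theta}{2}\,\operatorname{Var}(C) \;+\; O(\theta^{2}),
\qquad \theta > 0,
```

so that minimizing J_theta penalizes the expected cost, its variance and, through the higher cumulants, the higher moments, with the risk-sensitivity parameter theta acting as the weight.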

Another important aspect is uncertainty, which appears because of randomness, the chaotic behaviour of complex systems, and the dependence of the coefficients in the system dynamics on exogenous random parameters, frequently called factors. Risk-sensitive control problems are closely associated with optimal investment decisions. The risk-sensitive dynamic asset management theory introduced by Bielecki and Pliska in [1] considers portfolio optimization in the case when a number of macroeconomic and financial factors, such as unemployment rates and market-to-book ratios, are useful to forecast the rates of return of assets. Instead of maximizing the expected utility of terminal wealth, the objective is to maximize the portfolio's long-run growth rate adjusted by a measure of the portfolio's average volatility. Although there is a number of papers generalizing the model beyond linear dependence on the factors and linear dynamics of the factors (see, e.g., Nagai and Peng [3] or Stettner [7]), a complete approach to the subject, including partial observation of various factors as well as transaction costs (compare Bielecki and Pliska [2]), seems to be open.

References
[1] Bielecki, T. and Pliska, S.: Risk-sensitive dynamic asset management. Appl. Math. Optim. 39 (1999), 337-360.
[2] Bielecki, T. and Pliska, S.: Risk-sensitive asset management with transaction costs. Finance and Stochastics 4 (2000), 1-33.
[3] Nagai, H. and Peng, S.: Risk sensitive dynamic portfolio optimization with partial observation on infinite time horizon. Ann. Appl. Prob. 12 (2002), 173-195.
[4] Reiss, M.: Nonparametric estimation for stochastic delay differential equations. PhD thesis, Humboldt University, Berlin, 2001.
[5] Bank, P. and Foellmer, H.: American options, multi-armed bandits, and optimal consumption plans: a unifying view. Forthcoming, 2003.
[6] Riedle, M.: Stochastic differential equations with infinite delay. PhD thesis, Humboldt University, Berlin, 2003.
[7] Stettner, L.: Risk sensitive portfolio optimization. Math. Meth. Oper. Res. 50 (1999), 463-474.



4. MARKETS WITH TRANSACTION COSTS

In various kinds of activities, both selling and buying strategies are present. When transactions are large, transaction costs can sometimes be neglected; usually, however, they are non-negligible. For a number of transactions we can consider proportional transaction costs, i.e., we pay for a transaction a small fraction of the transaction value. In the case of large transactions, the transaction costs are usually negotiable and consequently smaller than proportional; mathematically speaking, we can consider them as a concave function of the transaction value. One can also notice that for a small transaction we usually have to pay an additional fixed transaction fee. Therefore, a universal model of transaction costs can be produced by using a concave function with a positive value at the origin. Even for the most basic problem of optimal portfolio management, the introduction of these real-life features leads to serious computational difficulties. The simple piecewise-linear implementation of the transaction cost results in a mixed-integer structure of the portfolio feasible set. Only initial work has been done on solution techniques applicable to such portfolio optimization models (Jobst et al., 2001; Kellerer et al., 2000; Konno and Wijayanayake, 2001). Hence, a search for efficient solution techniques is still necessary. However, really serious problems appear while studying the corresponding market models. The theory of pricing of various contingent claims is based on the replication of the liability. If a market model is not complete, and unfortunately this happens when transaction costs are taken into account, we have no such possibility. Consequently, we have to consider hedging, and then the seller and buyer prices are different. The pricing is closely related to no-arbitrage, i.e., the non-existence of trading strategies yielding a certain gain without risk. Although as a starting point in pricing it is usually assumed that arbitrage is not allowed, it is not clear what kind of conditions would guarantee the absence of arbitrage. In a recent paper by Kabanov, Rasonyi and Stricker (2002), a characterization of a weak version of arbitrage in the case of proportional transaction costs is provided. The attempts to solve the problem for more general transaction costs seem so far to be unsatisfactory. It should be pointed out that the classical Black-Scholes model with transaction costs leads to pricing strategies which appear to be trivial (the whole hedge has to be set up at the very beginning) and thus are unacceptable from the economic point of view. Another related problem is utility maximization within the framework of market models with transaction costs. Although for some simple models we can obtain the existence of solutions to suitable Bellman equations, or variational and quasi-variational inequalities, a calculation of feasible nearly optimal strategies remains in most cases an open problem.
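
A minimal sketch of the cost structure described above (all numerical values are hypothetical): a fixed fee plus a proportional charge for small trades, switching to a concave (negotiated) schedule for large trades, so that the cost per unit traded decreases with transaction size.

```python
def transaction_cost(value, fee=5.0, prop=0.002, threshold=100_000.0, exponent=0.8):
    """Illustrative concave transaction cost with a positive value at the origin.

    value     -- monetary value of the transaction (0 means no trade, hence no cost)
    fee       -- fixed fee charged on any non-zero trade
    prop      -- proportional rate applied below the negotiation threshold
    threshold -- trade size above which costs grow sub-linearly ("negotiated" fees)
    exponent  -- concavity of the negotiated part (0 < exponent < 1)
    """
    if value <= 0.0:
        return 0.0
    if value <= threshold:
        return fee + prop * value
    # continue concavely above the threshold, matching the cost at the threshold
    base = fee + prop * threshold
    return base + prop * threshold**(1 - exponent) * (value**exponent - threshold**exponent)

for v in (1_000, 50_000, 100_000, 1_000_000):
    cost = transaction_cost(v)
    print(v, round(cost, 2), round(100 * cost / v, 4), "% of value")
```

It is the jump at zero (the fixed fee) that forces binary "trade / do not trade" decisions and hence the mixed-integer structure of the feasible set mentioned above.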

References
[1] Jobst, N.J., Horniman, M.D., Lucas, C.A. and Mitra, G.: Computational aspects of alternative portfolio selection models in the presence of discrete asset choice constraints. Quantitative Finance 1 (2001), 1-13.
[2] Kellerer, H., Mansini, R. and Speranza, M.G.: Selecting portfolios with fixed costs and minimum transaction lots. Annals of Operations Research 99 (2000), 287-304.
[3] Konno, H. and Wijayanayake, A.: Portfolio optimization problem under concave transaction costs and minimal transaction unit constraints. Mathematical Programming 89 (2001), 233-250.
[4] Kabanov, Yu., Rasonyi, M. and Stricker, Ch.: No-arbitrage criteria for financial markets with efficient frictions. Finance and Stochastics 6 (2002), 371-382.



5. INFORMATION FLOWS IN COMPLEX SYSTEMS

The flow of information plays an important role in the management of complex systems. First, due to the system's complexity, only partial information is available. Hence, it is important to learn how to optimize the flow of information within a given complex system. Second, in many complex systems, especially those dealing with human activities, the knowledge about the system heavily influences the system itself. Therefore, in the study of a complex system it is vital to identify those features which are robust, in the sense that they do not depend on the information about the system, and those which are information-driven.

Another source of external risk may be due to the heterogeneity of information available to different agents acting in a financial market. For example, some traders, who might be called insiders, can profit from privileged information, which is disclosed to the others only at the end of the trading interval, and by which they may even be able to influence the market development. Market models with mostly small agents possessing different information horizons have been studied by Back (1992, 1993) and Pikovsky and Karatzas (1996), and in the related work by Amendinger, Imkeller and Schweizer (1998). For market models with agents on different information levels, Imkeller, Pontier and Weisz (2001) developed a method based on the stochastic calculus of variations, which allows an explicit description of the information drift by which the additional utility of better-informed agents can be expressed, and provides conditions under which arbitrage opportunities are ruled out. In Corcuera, Imkeller, Kohatsu and Nualart (2002), the problem of how the additional information may be blurred in order to rule out arbitrage is examined. The techniques are embedded in general semimartingale theory, including the methods of grossissement de filtrations (enlargement of filtrations), and appropriate extensions involving the concepts of Malliavin calculus.
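
For orientation, the classical example of an initial enlargement (an insider who knows the terminal value B_1 of the driving Brownian motion; the notation is chosen here purely for illustration) admits an explicit information drift: with respect to the enlarged filtration,

```latex
B_t \;=\; \widetilde{B}_t \;+\; \int_0^{t} \frac{B_1 - B_s}{1 - s}\,\mathrm{d}s,
\qquad 0 \le t < 1,
```

where \widetilde{B} is a Brownian motion for the insider; the drift term is the information drift through which the additional logarithmic utility of the better-informed agent can be expressed.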

References
[1] Amendinger, J., Imkeller, P. and Schweizer, M.: Additional logarithmic utility of an insider. Stoch. Proc. Appl. 75 (1998), 263-286.
[2] Back, K.: Insider trading in continuous time. Review of Financial Studies 5 (1992), 387-409.
[3] Back, K.: Asymmetric information and options. Review of Financial Studies 6 (1993), 435-472.
[4] Corcuera, M., Kohatsu, A., Imkeller, P. and Nualart, D.: Additional utility of insiders with imperfect dynamical information. Preprint, HU Berlin and Univ. Barcelona, 2002.
[5] Imkeller, P., Pontier, M. and Weisz, F.: Free lunch and arbitrage possibilities in a financial market model with an insider. Stochastic Proc. Appl. 92 (2001), 103-130.
[6] Pikovsky, I. and Karatzas, I.: Anticipative portfolio optimization. Adv. Appl. Probab. 28 (1996), 1095-1122.



6. LONG RANGE DEPENDENCE

Studying the flow of data describing the behaviour of a complex system, one very often observes a pattern of self-similarity: the "picture" is independent of the time scale. When the distributional properties of fluctuations are proportional to the square root of the time scale, this suggests that we are dealing with white-noise-type models (that is, models driven by a standard Brownian motion). In practice, most often we observe only a general power law, i.e., the distributional features of fluctuations are proportional to the power |Δt|^α, where α is some constant scale parameter different from 1/2. This indicates that we are dealing with so-called long-range dependence (see, for instance, Anderson and Bollerslev (1997), Stegeman (2000) or Bardet (2000, 2002)), and a better way of modelling the phenomena is to use some alternative specification of the noise process, for instance, to model the underlying noise as the so-called fractional Brownian motion. A version of stochastic calculus based on the fractional Brownian motion was recently developed by Duncan et al. (2000) and Elliott and van der Hoek (2003), who also apply their methodology to financial models. In particular, they derived option pricing formulas in a multiple fractional Brownian Black-Scholes market model. The aim of this workpackage is a close investigation of the statistical properties of models based on fractional Brownian motions, and the creation of algorithms which can be implemented.
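
A small illustration of the scaling idea (a sketch only, not one of the algorithms to be developed): exact simulation of fractional Gaussian noise through the Cholesky factor of its covariance matrix, followed by estimation of the Hurst exponent H from the scaling Var[B_H(t+Δ) − B_H(t)] ∝ Δ^(2H) of the increments.

```python
import numpy as np

def fractional_gaussian_noise(n, hurst, rng):
    """Exact (Cholesky-based) sample of n points of fractional Gaussian noise."""
    k = np.arange(n)
    # autocovariance function of standard fGn
    gamma = 0.5 * (np.abs(k + 1)**(2 * hurst) - 2 * np.abs(k)**(2 * hurst)
                   + np.abs(k - 1)**(2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def estimate_hurst(fbm, lags=(1, 2, 4, 8, 16, 32)):
    """Half the slope of log Var(increments at lag) against log lag."""
    variances = [np.var(fbm[lag:] - fbm[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
    return slope / 2

rng = np.random.default_rng(1)
fbm = np.cumsum(fractional_gaussian_noise(2000, hurst=0.7, rng=rng))
print("estimated H:", round(estimate_hurst(fbm), 3))    # should be close to 0.7
```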

References
[1] Anderson, T. and Bollerslev, T.: Heterogeneous information arrivals and return volatility dynamics: Uncovering the long run in high-frequency data. J. Finance 52 (1997), 975-1006.
[2] Stegeman, A.: Heavy tails versus long-range dependence in self-similar network traffic. Statist. Neerlandica 54 (2000), 293-314.
[3] Bardet, J.M.: Testing for the presence of self-similarity of Gaussian time series having stationary increments. J. Time Series Analysis 21 (2000), 497-515.
[4] Bardet, J.M.: Statistical study of the wavelet analysis of fractional Brownian motion. IEEE Trans. Inform. Theory 48 (2002), 991-999.
[5] Duncan, T., Hu, Y. and Pasik-Duncan, B.: Stochastic calculus for fractional Brownian motion. Part I: Theory. SIAM J. Control and Optimization 38 (2000), 582-612.
[6] Elliott, R. and van der Hoek, J.: A general fractional white noise theory and applications to finance. Mathematical Finance 13 (2003), 301-330.



7. NON-GAUSSIAN MARKET MODELS

Distributional assumptions for financial processes have important theoretical implications, given that decisions are commonly based on the expected returns and risk of alternative investment opportunities. Hence, solutions to such problems as portfolio selection, option pricing, and risk management depend crucially on distributional specifications. Many techniques rely heavily on the assumption that the random variables under investigation follow a Gaussian distribution. However, time series observed in the real world often deviate from the Gaussian model, in that their marginal distributions are heavy-tailed and asymmetric. In such situations, the appropriateness of the commonly adopted Gaussian assumption is highly questionable. It is often argued that financial asset returns are the cumulative outcome of a vast number of pieces of information and individual decisions arriving almost continuously in time (see McCulloch [4] or Rachev and Mittnik [6]). Hence, it is natural to consider stable distributions as approximations. The Gaussian law is by far the best known and most analytically tractable stable distribution, and for this and other practical reasons it has been routinely postulated to govern asset returns. However, financial asset returns are usually much more leptokurtic, i.e., have much heavier tails. This leads to considering the non-Gaussian stable laws with alpha less than 2, as first postulated by Benoit Mandelbrot in the early 1960s (see [3]). Apart from empirical findings, in some cases there are sound theoretical arguments for expecting a non-Gaussian alpha-stable model. For example, emission of particles from a point source of radiation yields the Cauchy distribution, hitting times for Brownian motion yield the Levy distribution, and the gravitational field of stars yields the Holtsmark distribution (see Janicki and Weron [2] or Uchaikin and Zolotarev [7]). Stable distributions have been successfully fitted to returns in several kinds of financial time series (see McCulloch [4] or Rachev and Mittnik [6]). In recent years, however, several studies have found what appears to be strong evidence against the stable model (cf. McCulloch [5] and Weron [8]). These studies have estimated the tail exponent directly from the tail observations and commonly have found values of alpha that appear to be significantly greater than 2, well outside the stable domain. It can be shown, however (see Borak et al. [1]), that estimating alpha only from the tail observations may be strongly misleading, and for samples of typical size the rejection of the alpha-stable regime may be unfounded. Various aspects (besides the modelling itself) require justification in the case of non-Gaussian market models.
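
A toy experiment along the lines of the discussion above (a sketch; sample size and tail cut-offs are arbitrary): simulate symmetric alpha-stable returns by the Chambers-Mallows-Stuck method and apply a Hill-type tail-index estimate, which for moderate samples typically lands well above the true alpha, sometimes even above 2.

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck simulation of symmetric alpha-stable variates."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * v) / np.cos(v)**(1 / alpha)
            * (np.cos(v - alpha * v) / w)**((1 - alpha) / alpha))

def hill_estimator(sample, k):
    """Hill tail-index estimate based on the k largest absolute observations."""
    order = np.sort(np.abs(sample))
    threshold = order[-(k + 1)]                     # (k+1)-th largest value
    return k / np.sum(np.log(order[-k:] / threshold))

rng = np.random.default_rng(2)
x = symmetric_stable(1.8, 2000, rng)                # true tail index alpha = 1.8
for k in (50, 100, 200):
    print(k, round(hill_estimator(x, k), 2))        # usually noticeably above 1.8
```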

References
[1] Borak, Sz., Haerdle, W. and Weron, R.: Stable distributions in finance. In: Statistical Case Studies, eds. P. Cizek, W. Härdle, H. Sofyan. Springer, 2003.
[2] Janicki, A. and Weron, A.: Simulation and Chaotic Behavior of alpha-Stable Stochastic Processes. Marcel Dekker, 1994.
[3] Mandelbrot, B.B.: Fractals and Scaling in Finance. Springer, 1997.
[4] McCulloch, J.H.: Financial applications of stable distributions. In: Handbook of Statistics, Vol. 14, eds. G.S. Maddala, C.R. Rao. Elsevier, 1996, pp. 393-425.
[5] McCulloch, J.H.: Measuring tail thickness to estimate the stable index alpha: A critique. Journal of Business & Economic Statistics 15 (1997), 74-81.
[6] Rachev, S. and Mittnik, S.: Stable Paretian Models in Finance. Wiley, 2000.
[7] Uchaikin, V.V. and Zolotarev, V.M.: Chance and Stability: Stable Distributions and their Applications. VSP, 1999.
[8] Weron, R.: Levy-stable distributions revisited: Tail index > 2 does not exclude the Levy-stable regime. International Journal of Modern Physics C 12 (2001), 209-223.



8. MULTIDIMENSIONAL ANALYSIS - COPULAS

The concept of a copula appears to be a very useful tool in the investigation of multidimensional distributions. By definition, a copula is a function that links the univariate marginals with their multivariate distribution. It allows one to study the dependence between individual factors; for example, it is used in finding a proper distribution which describes some stylized facts. Copulas are extremely useful in applications whenever we want to find a multidimensional distribution describing some phenomenon. First, we find the marginal distributions, which is much simpler than finding the multidimensional distribution. Second, we choose an appropriate copula in order to represent the observed dependencies between the factors in a suitable manner. In the language of copulas, we can describe and analyse several previously proposed measures of concordance, including Kendall's tau, Spearman's rho, and many measures of dependence, for example positive quadrant dependence. Copulas are also very useful for Monte Carlo methods in finance. Finally, they are used in finding the so-called Value at Risk. In this framework, the objective of the research will be the extension of copula-based techniques to Value at Risk associated with continuous-time models with jumps.
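
A minimal sketch of the two-step recipe above (the Gaussian copula and the marginals are chosen purely for illustration): sample dependent uniforms from the copula, attach the marginals through inverse CDFs, and check Kendall's tau against the theoretical value tau = (2/pi) arcsin(rho).

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(rho, n, rng):
    """Pairs (u, v) of uniforms whose dependence structure is a Gaussian copula."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    return stats.norm.cdf(z)                    # componentwise probability transform

rng = np.random.default_rng(3)
rho = 0.6
u = gaussian_copula_sample(rho, 20_000, rng)

# step 1: marginals chosen separately (here lognormal and exponential, as examples)
x = stats.lognorm.ppf(u[:, 0], s=0.25)
y = stats.expon.ppf(u[:, 1], scale=2.0)

# step 2: the dependence is inherited from the copula, not from the marginals
tau_empirical, _ = stats.kendalltau(x, y)
tau_theoretical = 2.0 / np.pi * np.arcsin(rho)
print(round(tau_empirical, 3), round(tau_theoretical, 3))   # both close to 0.41
```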

Recently, there has been a growing interest in modelling default dependencies in intensity-based models with the use of copulas. Our investigation will concentrate on using copulas to compute aggregated losses and to model default within the framework of mutually dependent events. The study of these particular applications of copulas will be coupled with the general study of systems with default risk.

References
[1] Nelsen, R.B.: An Introduction to Copulas. Lecture Notes in Statistics 139, Springer-Verlag, Berlin, 1999.
[2] Coutant, S., Durrleman, V., Rapuch, G. and Roncalli, T.: Copulas, multivariate risk-neutral distributions and implied dependence functions. Preprint, Credit Lyonnais, 2001.
[3] Durrleman, V., Nikeghbali, A., Riboulet, G. and Roncalli, T.: Copulas for finance. A reading guide and some applications. Working paper, Credit Lyonnais, 2000.
[4] Embrechts, P., McNeil, A. and Straumann, D.: Correlation and dependence in risk management. Preprint, ETH Zurich, 1999.
[5] Schoenbucher, P. and Schubert, D.: Copula-dependent default risk in intensity models. Working paper, 2001.



9. MEASURES OF RISK

The most popular practical measure of (financial) risk is the value at risk (VaR). VaR has a number of advantages. It is a quantitative and synthetic measure of risk. VaR allows one to take into consideration various kinds of cross-dependence, for instance between asset returns, as well as fat-tailed distributions. But most important is the regulatory cause: it is an obligatory tool for the risk management of financial institutions. The JP Morgan RiskMetrics methodology, however, is based on the assumption of joint normality of factor returns, which generally leads to an underestimation of VaR. The main focus is on using VaR to conduct a marginal analysis of portfolios, or to compute portfolios that are optimal in some sense under VaR constraints. It should be acknowledged that VaR also presents some disadvantages. Therefore, a study of different measures of risk is needed. VaR is a simple (first-order) quantile risk measure; this feature results in its limitations for risk-aversion modelling. Recently, second-order quantile risk measures have been introduced in different ways by many authors. Such a measure, usually called the Conditional Value at Risk (CVaR) or Tail VaR, represents the mean shortfall at a specified confidence level. It leads to LP-solvable portfolio optimization models in the case of discrete random variables represented by their realizations under the specified scenarios. The CVaR has been shown (Pflug, 2000) to satisfy the requirements of so-called coherent risk measures (Artzner et al., 1999), and it is consistent with second-degree stochastic dominance (Ogryczak and Ruszczynski, 2002). Several empirical analyses (Andersson et al., 2001; Rockafellar and Uryasev, 2002; Mansini et al., 2002) confirm its applicability to various financial optimization problems. Although any CVaR measure is risk relevant, it represents only the mean within a part of the distribution of returns. Therefore, such a single measure is somewhat crude for modelling various risk-aversion preferences. In order to enrich the modelling capabilities, one needs to treat some more or less extreme events differently. For this purpose one may consider multiple CVaR measures as criteria, opening an opportunity for multiple-criteria modelling of risk-aversion preferences (Ogryczak, 2002).
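
For concreteness, a minimal sketch of the two quantile measures on a set of scenario losses (the confidence level and the toy loss distribution are illustrative): VaR is the quantile itself, CVaR the mean loss beyond it, so CVaR is always at least as large as VaR.

```python
import numpy as np

def var_cvar(losses, beta=0.95):
    """Empirical Value-at-Risk and Conditional Value-at-Risk at confidence level beta.

    losses -- 1-D array of scenario losses (positive values mean losing money)
    """
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, beta)          # first-order quantile measure
    cvar = losses[losses >= var].mean()      # mean shortfall in the worst (1 - beta) tail
    return var, cvar

rng = np.random.default_rng(4)
losses = rng.standard_t(df=3, size=100_000)  # heavy-tailed toy loss distribution
var95, cvar95 = var_cvar(losses, beta=0.95)
print(round(var95, 3), round(cvar95, 3))     # CVaR exceeds VaR
```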

References
[1] Artzner, P., Delbaen, F., Eber, J.M. and Heath, D.: Coherent measures of risk. Mathematical Finance 9 (1999), 203-228.
[2] Frittelli, M. and Gianin, E.R.: Putting order in risk measures. Journal of Banking and Finance 26 (2002), 1473-1486.
[3] Cvitanić, J. and Karatzas, I.: On dynamic measures of risk. Working paper, 1998.
[4] Duffie, D. and Pan, J.: An overview of value at risk. Journal of Derivatives 4, Spring (1997), 7-48.
[5] Andersson, F., Mausser, H., Rosen, D. and Uryasev, S.: Credit risk optimization with Conditional Value-at-Risk criterion. Mathematical Programming 89 (2001), 273-291.
[6] Mansini, R., Ogryczak, W. and Speranza, M.G.: LP solvable models for portfolio optimization: A classification and computational comparison. Technical Report, 2002.
[7] Ogryczak, W.: Multiple criteria optimization and decisions under risk. Control and Cybernetics 31 (2002), 975-1003.
[8] Ogryczak, W. and Ruszczynski, A.: Dual stochastic dominance and related mean-risk models. SIAM Journal on Optimization 13 (2002), 60-78.
[9] Pflug, G.Ch.: Some remarks on the Value-at-Risk and the Conditional Value-at-Risk. In: Probabilistic Constrained Optimization: Methodology and Applications. Kluwer A.P., Dordrecht, 2000.
[10] Rockafellar, R. and Uryasev, S.: Conditional Value-at-Risk for general distributions. Journal of Banking and Finance 26 (2002), 1443-1471.



10. RUIN PROBABILITY AND OPERATIONAL RISK

The recent increasing interplay between actuarial and financial mathematics has led to a surge of risk-theoretic modelling. In particular, actuarial ruin probabilities under fairly general conditions on the underlying risk process have become a focus of attention. We propose self-similar processes for the renewal model in risk theory, and we will construct a risk model with a mechanism of long-range dependence of claims. Next, we shall compare various approximations of the ruin probability in order to find the best method for typical light- and heavy-tailed claim size distributions.

In the wake of the quantitative modelling of market risk for financial (typically banking) institutions promoted by the Basel Committee on Banking Supervision, the quantitative modelling of operational losses has become a key consideration. Typical examples include losses resulting from system failure, fraud, litigation, and the handling of transactions. We propose to study refined insurance-type risk processes. These models will in particular have to cater for stochastic intensities driven by exogenous economic factors and at the same time allow for heavy-tailed claim amounts. The new methodology developed in this work will have a wider range of applications within quantitative risk management for the financial (including insurance) industry.

In a very interesting work, Grandell [2] demonstrated that, among the possible simple approximations of ruin probabilities in infinite time, the most successful is the De Vylder approximation, which is based on the idea of replacing the risk process by one with exponentially distributed claims while ensuring that the first three moments coincide. We plan to modify the De Vylder approximation by replacing the exponential distribution with the gamma distribution. We have already demonstrated that this modification is promising and in many cases works even better than the original method. We shall also go beyond simple approximations and plan to show that the approximation based on the Pollaczeck-Khinchine formula gives the best results. This will permit us to include typical light- and heavy-tailed claim size distributions, namely the exponential, mixture of exponentials, gamma, lognormal, Weibull, log-gamma, Pareto and Burr distributions.
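
As an illustration of the Pollaczeck-Khinchine route mentioned above (a sketch with made-up parameters): for the classical compound Poisson model, the infinite-time ruin probability is the tail of a geometric compound of integrated-tail claims, which for exponential claims can be checked against the well-known closed form.

```python
import numpy as np

def ruin_prob_pk(u, mean_claim, safety_loading, n_sim, rng):
    """Monte Carlo ruin probability via the Pollaczeck-Khinchine representation.

    For exponential claims the integrated-tail (ladder-height) distribution is again
    exponential with the same mean, so the representation below is exact.
    """
    rho = 1.0 / (1.0 + safety_loading)                      # P(at least one ladder epoch)
    n_ladders = rng.geometric(1.0 - rho, size=n_sim) - 1    # number of ladder heights
    maxima = np.array([rng.exponential(mean_claim, n).sum() for n in n_ladders])
    return np.mean(maxima > u)

def ruin_prob_exact(u, mean_claim, safety_loading):
    """Closed-form infinite-time ruin probability for exponential claims."""
    rho = 1.0 / (1.0 + safety_loading)
    return rho * np.exp(-(1.0 - rho) * u / mean_claim)

rng = np.random.default_rng(5)
for u in (0.0, 5.0, 10.0):
    mc = ruin_prob_pk(u, mean_claim=1.0, safety_loading=0.2, n_sim=50_000, rng=rng)
    print(u, round(mc, 4), round(ruin_prob_exact(u, 1.0, 0.2), 4))
```

For heavy-tailed (e.g. Pareto or Burr) claims the same Monte Carlo scheme applies with the corresponding integrated-tail distribution, while a closed form is no longer available.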

References
[1] Burnecki, K., Marciniuk, A. and Weron, A.: Annuities under random rates of interest – revisited. Insurance, Math. & Econ. 30 (2003), 1-4.
[2] Grandell, J.: Simple approximations of ruin probabilities. Insurance, Math. & Econ. 26 (2000), 157-173.
[3] Embrechts, P., Kaufmann, R. and Samorodnitsky, G.: Ruin theory revisited: stochastic models for operational risk. Technical Report, ETH Zurich, 2002.
[4] Mikosch, T. and Samorodnitsky, G.: The supremum of a negative drift random walk with dependent heavy-tailed steps. Ann. Appl. Probab. 10 (2000), 1025-1064.
[5] Furrer, H., Michna, Z. and Weron, A.: Stable Levy approximation in collective risk theory. Insurance, Math. & Econ. 20 (1997), 97-114.



11. FAIRNESS AND EQUITY IN SYSTEMS DESIGN

A typical system that serves many users (a telecommunication system, for instance) faces the problem of allocating limited resources among competing activities (users, clients) so as to achieve the best overall performance. We focus on approaches which, while allocating resources, attempt to provide an equal (fair) treatment of all the competing activities (Luss, 1999). Problems of efficient and equitable (fair) resource allocation arise in various systems. Telecommunication networks are expected to satisfy the increasing demand for traditional services as well as to accommodate multimedia services. Hence, it becomes critical to allocate network resources so as to provide a high level of performance for all services at numerous destination nodes.

Models with an (aggregated) objective function that maximizes the mean (or simply the sum) of individual performances are widely used to formulate resource allocation problems. This solution concept is primarily concerned with overall system efficiency. As it is based on averaging, it often provides a solution in which low-demand services are discriminated against. An alternative approach relies on the so-called maximin solution concept, where the worst performance is maximized. The maximin approach is consistent with the Rawls (1971) theory of justice, especially when additionally regularized with the lexicographic order (Ogryczak, 2000). On the other hand, allocating the resources to optimize the worst performances may cause a large worsening of the overall (mean) performance.

The result of an allocation decision represents a distribution of individual performances (client satisfaction), and the equity (fairness) axioms correspond to those of risk aversion in stochastic models (Kostreva and Ogryczak, 1999). It turns out that the risk-minimization constructions can easily be adapted to equitable optimization. In particular, the conditional mean, based on averaging restricted to the group of the worst performances defined by a tolerance level, corresponds to the conditional value-at-risk. Our earlier computational experiments with the conditional mean criterion applied to a traffic engineering model (a single bidirectional ring loading) were very promising. Further research on risk-based models and related techniques for equitable resource allocation seems to be a very promising direction. This research area will focus on possible extensions of the conditional mean approach that allow for more precise preference modelling while preserving its computational effectiveness.
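
A minimal numerical sketch of the conditional mean criterion (toy numbers only): average the worst ceil(beta*m) individual performances, mirroring the conditional value-at-risk applied to the distribution of client satisfaction.

```python
import numpy as np

def conditional_mean_worst(performances, beta):
    """Mean of the worst ceil(beta * m) performances (equitable analogue of CVaR)."""
    performances = np.sort(np.asarray(performances, dtype=float))   # worst first
    k = int(np.ceil(beta * len(performances)))
    return performances[:k].mean()

# three hypothetical allocations of bandwidth to six services (higher is better)
allocations = {
    "sum-optimal": [9.0, 9.0, 9.0, 9.0, 1.0, 1.0],
    "maximin":     [6.0, 6.0, 6.0, 6.0, 6.0, 6.0],
    "compromise":  [8.0, 8.0, 7.0, 7.0, 5.0, 5.0],
}
for name, perf in allocations.items():
    print(name, "mean:", np.mean(perf), "worst-30% mean:", conditional_mean_worst(perf, 0.3))
```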

References
[1] Kostreva, M.M. and Ogryczak, W.: Linear optimization with multiple equitable criteria. RAIRO Oper. Res. 33 (1999), 275-297.
[2] Luss, H.: On equitable resource allocation problems: a lexicographic minimax approach. Operations Research 47 (1999), 361-378.
[3] Ogryczak, W.: Inequality measures and equitable approaches to location problems. European J. Operational Research 122 (2000), 374-391.
[4] Rawls, J.: A Theory of Justice. Harvard University Press, Cambridge, MA, 1971.



12. STOCHASTIC VOLATILITY MODELS AND CALIBRATION

A standard approach to volatility risk in the classic Black-Scholes framework is based on the partial derivatives with respect to the volatility parameter. This simplistic approach can be seen as a first-order method which allows one to hedge against small (random) fluctuations of the model's parameters. In the Black-Scholes model, the valuation of exotic products assumes a constant volatility coefficient, which can be identified with the implied volatility of traded options (the volatility smile is thus ignored). Since the implied volatility varies over time, the marking-to-market of a book of exotic products is essential. Hedging is based on sensitivities, that is, the partial derivatives with respect to all relevant quantities.

According to the stochastic local volatility approach, a dynamical model is "deduced" from today's implied volatility surface. The implied time- and state-dependent coefficient is called the implied volatility function (IVF) or the stochastic local volatility. The model is complete in the usual sense: all contingent claims can be replicated with the use of today's market information and the future behaviour of the underlying asset, as predicted by the model. As observed by Dupire (1994), given the cross-section of market prices of liquid standard (call or put) options, one may uniquely determine the implied volatility function. It is important to observe that using this method we may only recover the marginal laws of the underlying stochastic process. Hence, the valuation of exotic products is not viable within this framework, and the issue of volatility risk is not addressed adequately. A common practical ad hoc fix is the continual re-calibration of the model. Such a procedure, however, clearly invalidates the model's assumptions.
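
For reference, the relation observed by Dupire (1994) can be written, for a non-dividend-paying asset and a constant short rate r (a standard formulation quoted here only to fix ideas), as

```latex
\sigma_{\mathrm{loc}}^{2}(T,K)
  \;=\;
  \frac{\partial_{T} C(T,K) \;+\; r\,K\,\partial_{K} C(T,K)}
       {\tfrac{1}{2}\,K^{2}\,\partial_{KK} C(T,K)},
```

where C(T,K) is today's price of the call with maturity T and strike K; the cross-section of liquid option prices thus pins down the implied volatility function but, as noted above, only the marginal laws of the underlying.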

In an alternative approach to volatility risk, a stochastic spot volatility is modelled as an autonomous stochastic process. It aims to represent an autonomous volatility risk, which disappears when the volatility coefficient is constant. Usually, both the underlying asset and the stochastic spot volatility are assumed to follow (possibly correlated) diffusion processes. Let us stress that the direct connection to the observed implied volatilities is lost in this approach. As in the previous case, a continual re-calibration is required in order to match the market prices of options. Finally, the choice of a particular form of a stochastic spot volatility model is based on vague considerations rather than on well-established arguments. In the recently developed market-based approach to volatility risk, liquid options are taken as primary assets. As observed by Brace et al. (2002), by modelling directly the dynamics of the stochastic implied volatility surface, it is possible to achieve a perfect fit to the future prices of a large class of derivative assets. It was also conjectured that the specification of the volatility of the stochastic implied volatility uniquely determines the stochastic process which governs the dynamics of the underlying asset. A challenging issue of developing a theory of model risk arises in the context of volatility risk.
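
A sketch of the "correlated diffusions" setup described in this paragraph, using a square-root variance process of Heston type with purely illustrative parameters (a full-truncation Euler scheme, not a statement about any particular calibrated model):

```python
import numpy as np

def simulate_heston_terminal(s0=100.0, v0=0.04, kappa=1.5, theta=0.04, xi=0.5,
                             rho=-0.7, r=0.02, horizon=1.0, n_steps=252,
                             n_paths=20_000, seed=6):
    """Full-truncation Euler simulation of a Heston-type stochastic volatility model.

    dS_t = r S_t dt + sqrt(v_t) S_t dW1_t
    dv_t = kappa (theta - v_t) dt + xi sqrt(v_t) dW2_t,   d<W1, W2>_t = rho dt
    """
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                           # full truncation
        s *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return s

terminal = simulate_heston_terminal()
call_price = np.exp(-0.02) * np.maximum(terminal - 100.0, 0.0).mean()
print("Monte Carlo at-the-money call price:", round(call_price, 3))
```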

References
[1] Carr, P., Geman, H., Madan, D. and Yor, M.: Stochastic volatility for Levy processes. Working paper, University of Maryland, 2001.
[2] Derman, E., Kani, I. and Zou, J.Z.: The local volatility surface: unlocking the information in index option prices. Financial Analysts Journal 52/4 (1996), 25-36.
[3] Dupire, B.: Pricing with a smile. Risk 7/1 (1994), 18-20.



13. TERM STRUCTURE MODELS

The last decade was marked by a rapidly growing interest in the arbitrage-free modelling of the bond market. Undoubtedly, one of the major achievements in this area was a new approach to term structure modelling proposed by Heath, Jarrow and Morton (1992), commonly known as the HJM methodology. Since the HJM approach to term structure modelling is based on an arbitrage-free dynamics of the instantaneous continuously compounded forward rates, it requires a certain degree of smoothness with respect to the tenor of the bond prices and their volatilities. An alternative construction of an arbitrage-free family of bond prices, making no reference to the instantaneous rates, is in some circumstances more suitable. It is common to refer to this approach as the market model of interest rates. This approach was developed in the ground-breaking papers by Miltersen et al. (1997) and Brace et al. (1997), who proposed to model instead the family of forward Libor rates. The main goal was to produce an arbitrage-free term structure model which would support the common practice of pricing such interest-rate derivatives as caps and swaptions through a suitable version of Black's formula. This practical requirement enforces the lognormality of the forward Libor (or swap) rate under the corresponding forward (swap) martingale measure. The next step was the backward induction approach to the modelling of forward Libor and swap rates developed in Musiela and Rutkowski (1997) and Jamshidian (1997). A similar, but not identical, approach to the modelling of market rates was developed by Hunt et al. (2000). Since special emphasis is put here on the existence of an underlying low-dimensional Markov process that directly governs the dynamics of interest rates, this alternative approach is termed the Markov-functional approach. This specific feature of the approach advocated by Hunt, Kennedy and Pelsser leads to a considerable simplification in the numerical procedures associated with the model's implementation. Another example of a tractable term structure model is the rational lognormal model proposed by Flesaker and Hughston (1996a, 1996b). The general methodology, due to Flesaker and Hughston and known as the pricing kernel approach, was later carried over and further developed in a series of papers by Brody and Hughston (2003). The advantage of this methodology is that, in some sense, one works closer to the solution set of a stochastic system. Brody and Hughston (2003) have developed an axiomatic formulation of the general arbitrage-free stochastic dynamics of a system of financial assets. The goal of further research will be to deepen the understanding of the arbitrage-free modelling of the term structure within the framework of the analysis of finite- and infinite-dimensional stochastic systems.
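
To fix notation for the lognormality requirement mentioned above (a standard textbook formulation with deterministic volatility, quoted only as an illustration): under the forward martingale measure for the settlement date T_{j+1} the forward Libor rate is assumed to satisfy

```latex
\mathrm{d}L_j(t) \;=\; L_j(t)\,\lambda_j(t)\,\mathrm{d}W^{T_{j+1}}(t),
```

so that the time-0 price of the caplet with strike K, reset date T_j and accrual period δ is given by Black's formula

```latex
\mathrm{Cpl}_j(0) \;=\; \delta\,P(0,T_{j+1})\bigl[L_j(0)\,N(d_+) - K\,N(d_-)\bigr],
\qquad
d_{\pm} \;=\; \frac{\ln\bigl(L_j(0)/K\bigr) \pm \tfrac12 v_j^{2}}{v_j},
\qquad
v_j^{2} \;=\; \int_0^{T_j}\lvert\lambda_j(t)\rvert^{2}\,\mathrm{d}t,
```

where P(0,T_{j+1}) is the current bond price and N the standard normal distribution function.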

References
[1] Brace, A., Gątarek, D. and Musiela, M.: The market model of interest rate dynamics. Mathematical Finance 7 (1997), 127-154.
[2] Brody, D.C. and Hughston, L.P.: Chaos and coherence: A new framework for interest rate modelling. Working paper, Imperial College and King's College London, 2003.
[3] Hunt, P.J., Kennedy, J.E. and Pelsser, A.: Markov-functional interest rate models. Finance and Stochastics 4 (2000), 391-408.
[4] Flesaker, B. and Hughston, L.: Dynamic models of yield curve evolution. In: Mathematics of Derivative Securities, eds. M.A.H. Dempster, S.R. Pliska. Cambridge University Press, Cambridge, 1997, pp. 294-314.
[5] Jamshidian, F.: LIBOR and swap market models and measures. Finance and Stochastics 1 (1997), 293-330.
[6] Musiela, M. and Rutkowski, M.: Continuous-time term structure models: forward measure approach. Finance and Stochastics 1 (1997), 361-391.



14. MARKETS WITH DEFAULT RISK

Existing models of default risk fall into two broad categories: structural models and reduced-form models, also known as intensity-based models. Within the framework of the structural approach, the default time is defined as the first crossing time of some stochastic process through a default-triggering barrier. Consequently, the main issue is the joint modelling of the reference stochastic process and the barrier process. Since the default time is defined in terms of the model's primitives, it is common to say that it is given endogenously within the model. In the reduced-form modelling, much attention is paid to the characterisation of random times in terms of hazard functions, hazard processes and martingale hazard processes, as well as to evaluating relevant (conditional) probabilities and expectations in terms of these functions and processes. Important modelling aspects include: the choice of the underlying probability measure (real-world or risk-neutral, depending on the particular application), the goal of modelling (risk management or valuation of derivatives), and the source of intensities. For more details, see Bielecki and Rutkowski [3]-[4], Duffie et al. [6], or Jeanblanc and Rutkowski [7].

Typically, the firm's value is not modelled in the reduced-form approach; the specification of intensities is based either on the model's calibration to market data or on estimation from historical observations. It is worth noting that in the reduced-form approach the default time is not a predictable stopping time with respect to the underlying information flow. In contrast to the structural approach, the reduced-form methodology thus allows for an element of surprise, which is in this context a practically appealing feature. In the so-called hybrid approach, however, some stochastic processes representing the economic fundamentals are used to model the hazard rate of default, so that they are used indirectly to define the default time, and thus also the default risk. The important and challenging problem of hedging defaultable claims is not yet completely solved. In particular, no well-established theory is known for the modelling of correlated default risk in a dynamic way. Most existing approaches to the hedging of default risk share a number of common assumptions (cf. Blanchet-Scalliet and Jeanblanc [1], Belanger et al. [2], Collin-Dufresne and Hugonnier [5]).
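
A minimal sketch of the intensity-based mechanics (constant intensity, zero recovery; all parameters are hypothetical): the default time is the first time the cumulated hazard exceeds an independent unit exponential threshold, and the price of a zero-recovery defaultable zero-coupon bond reduces to discounting at the rate r + lambda.

```python
import numpy as np

def sample_default_times(intensity, n_paths, rng):
    """Default times tau = inf{t : integral_0^t lambda ds >= E}, with E ~ Exp(1).

    With a constant intensity the cumulated hazard is lambda * t, hence tau = E / lambda
    and tau is exponentially distributed with rate lambda.
    """
    return rng.exponential(1.0, n_paths) / intensity

rng = np.random.default_rng(7)
r, lam, maturity = 0.03, 0.02, 5.0

tau = sample_default_times(lam, n_paths=200_000, rng=rng)
mc_price = np.exp(-r * maturity) * np.mean(tau > maturity)    # zero recovery at default
closed_form = np.exp(-(r + lam) * maturity)
print(round(mc_price, 4), round(closed_form, 4))              # both close to 0.7788
```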

References
[1] Blanchet-Scalliet, C. and Jeanblanc, M.: Hazard rate for credit risk and hedging defaultable contingent claims. Forthcoming in Finance and Stochastics, 2001.
[2] Belanger, A., Shreve, S.E. and Wong, D.: A unified model for credit derivatives. Working paper, 2001.
[3] Bielecki, T. and Rutkowski, M.: Credit Risk: Modelling, Valuation and Hedging. Springer-Verlag, Berlin Heidelberg New York, 2002.
[4] Bielecki, T. and Rutkowski, M.: Multiple ratings model of defaultable term structure. Mathematical Finance 10 (2000), 125-139.
[5] Collin-Dufresne, P. and Hugonnier, J.-N.: On the pricing and hedging of contingent claims in the presence of extraneous risks. Working paper, Carnegie Mellon University, 1999.
[6] Duffie, D., Schroder, M. and Skiadas, C.: A term structure model with preferences for the timing of resolution of uncertainty. Economic Theory 9 (1997), 3-22.
[7] Jeanblanc, M. and Rutkowski, M.: Default risk and hazard processes. In: Mathematical Finance - Bachelier Congress 2000, eds. H. Geman et al. Springer-Verlag, Berlin, 2002, pp. 281-312.



15. EVALUATION OF INVESTMENTS AND REAL OPTIONS

The real options approach is an innovative approach to investment valuation. By incorporating managerial decision strategies into appropriate models, this new approach aims at correcting a well-documented discrepancy between observed and theoretical prices. The motivation for our research is to develop a general unifying approach, encompassing both a quantitative and a qualitative framework. From an abstract point of view, any investment process consists of a set of managerial decisions and an uncertain flow of profits and losses associated with each decision strategy. The number of factors which influence the outcome of a given decision strategy is very large, their effects are not well recognized, and in addition they are subject to random fluctuations. In dealing with such a huge set of elements, the decision process should adopt a complex-systems approach, i.e., look for the qualitative characteristics of the whole set of factors rather than analyse each factor separately. The existing theory appears to be rather fragmented: it does not offer a unifying view on the "real options approach" in terms of large-systems methodology (Vollert 2003). Our aim is to give a description of this model in terms of macroscopic characteristics of the whole set of factors.

In the modelling of investment processes, it is frequently assumed that the random cash flows which result from the managerial decisions are perfectly correlated with the market prices of a finite number of actively traded financial assets and/or commodities. This corresponds to the market completeness assumption that is often made in financial mathematics. This assumption is reasonably well fulfilled only for investments associated with commodities for which there exists a liquid market. In our research, we shall focus on the situation where there is less market liquidity. This requires a completely innovative approach.

In the real world, investment opportunities are subject to uncertainty. There are various sources of this uncertainty: the large number of factors which influence the outcome of the decision strategy, their permanent (frequently instantaneous) fluctuations and random interactions, as well as the influence of external objective forces (Schwartz and Moon 2000). Uncertainty also often comes from the external environment, which exhibits random behaviour that is hard to predict and control. Consequently, decisions made under uncertainty are subject to risk. In evaluating an investment opportunity, we require an approach adapted from financial markets, known as risk-neutral valuation, i.e., eliminating the influence of risk on the valuation process (Louberge et al. 2002). In addition, most investment projects involve (often hidden) options, such as the option to abandon the investment, the option to expand the investment under favourable conditions, the option to reduce the scale of the project, and many others (Gauthier 2002).
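
A small sketch of the kind of embedded flexibility listed above (a binomial project-value lattice with an option to abandon for a salvage value; all figures are hypothetical): backward induction compares "continue" with "abandon" at every node, and the resulting value exceeds the rigid, no-flexibility value.

```python
import numpy as np

def project_value_with_abandonment(v0=100.0, salvage=90.0, u=1.25, d=0.8,
                                   r=0.05, n_steps=3):
    """Binomial valuation of a project that can be abandoned for a salvage value."""
    p = (np.exp(r) - d) / (u - d)                       # risk-neutral up probability
    disc = np.exp(-r)
    # terminal project values, with the option to abandon exercised if worthwhile
    values = np.array([v0 * u**j * d**(n_steps - j) for j in range(n_steps + 1)])
    values = np.maximum(values, salvage)
    for step in range(n_steps - 1, -1, -1):             # backward induction
        cont = disc * (p * values[1:step + 2] + (1 - p) * values[:step + 1])
        values = np.maximum(cont, salvage)              # abandon when salvage is worth more
    return values[0]

print("with abandonment option:", round(project_value_with_abandonment(), 2),
      "  without flexibility:", 100.0)
```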

References
[1] Gauthier, L.: Hedging entry and exit decisions. J. Appl. Math. Decis. Sci. 6 (2002), 51-70.
[2] Louberge, H., Villeneuve, S. and Chesney, M.: Long-term risk management of nuclear waste: a real option approach. J. Econom. Dynam. Control 27 (2002), 157-180.
[3] Schwartz, E.S. and Moon, M.: Rational pricing of internet companies. Fin. Anal. J. (2000), 62-75.
[4] Vollert, A.: A Stochastic Control Framework for Real Options in Strategic Evaluation. Birkhauser, Boston, 2003.



16. NEW COMMODITY MARKETS

In recent years, we have observed the appearance of a new category of commodities on competitive markets. These are products that for years have been traded at regulated prices; typical examples are energy products. As this sector undergoes deregulation, it also requires new tools and techniques of management in the competitive utility environment. The special role of these commodities is due to their trade characteristics: their storage is very costly (natural gas) or almost impossible (electricity), and they are subject to two different sources of risk – price risk and volume risk. Following Norway, Sweden, Great Britain and Spain, other European countries are increasingly recognizing the benefits of open trading in energy. The energy derivatives market began with crude oil futures and swaps, which have been known for many years; then followed natural gas swaps, futures and swing options, and recently electricity derivatives have entered the market (Heren 2000). Our research will be focused on effective risk management in the global deregulated energy market. An effective analysis of this sector will require a new paradigm for the valuation and hedging of the relevant instruments, different from the existing risk-neutral approach. The essential difficulty is connected with the estimation of market risk. We are planning two approaches to the energy market. First, by the analysis of large sets of market data, we shall estimate and eliminate the systematic risk which gives rise to the standard risk-neutral theory (Nagornii and Dozeman 2001). Second, we are going to implement a large-systems approach: construct microscopic models for this particular sector and derive a new valuation methodology. Effective competition on this market will result in lower prices for end users and increased competitiveness of European companies versus their US counterparts.

Various formal methods have been introduced for strategic decisions related to the entire power system. Methods of stochastic optimization and multiple criteria programming have been applied to power generation planning with both supply-side and demand-side management. In particular, decisions related to the environmental impact of power production have been analyzed with the use of multiple criteria optimization and related decision support techniques (Hobbs and Meier, 2000). On the other hand, the decisions of a single generator are hard to model formally and to optimize effectively (Fosso et al., 1999; Hobbs et al., 2001). The high volatility of the energy market forces participants to make tactical and operational decisions repeatedly, in a short time and under risk. There is a strong need for a decision support system (DSS) dedicated to the electricity market. Mathematical models and formal methods are necessary to achieve satisfactory results while designing models and algorithms for a DSS to be used by a single generator. Within this research area we will focus on the development and analysis of a stochastic short-term planning model that can be effectively used as a key analytical tool within the decision support process.
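
As a sketch of the price-risk side discussed above (a mean-reverting process with occasional spikes is a common stylized description of electricity spot prices; the parameters below are purely illustrative and not estimates for any market):

```python
import numpy as np

def simulate_spot_prices(n_days=365, mean_log_price=3.5, mean_reversion=0.2,
                         sigma=0.08, spike_prob=0.02, spike_scale=0.6, seed=8):
    """Mean-reverting log-price with occasional upward spikes (toy electricity model)."""
    rng = np.random.default_rng(seed)
    log_price = np.empty(n_days)
    log_price[0] = mean_log_price
    for t in range(1, n_days):
        drift = mean_reversion * (mean_log_price - log_price[t - 1])
        spike = rng.exponential(spike_scale) if rng.random() < spike_prob else 0.0
        log_price[t] = log_price[t - 1] + drift + sigma * rng.standard_normal() + spike
    return np.exp(log_price)

prices = simulate_spot_prices()
print("mean price:", round(prices.mean(), 1), "  highest spike:", round(prices.max(), 1))
```

Mean reversion pulls the price back to its normal level after each spike, reflecting the non-storability mentioned above.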

References
[1] Fosso, O.B., Gjelsvik, A., Haugstad, A. and Wangensteen, M.B.: Generation scheduling in a deregulated system. IEEE Trans. on Power Systems 14 (1999), 75-80.
[2] Hobbs, B.F. and Meier, P.: Energy Decisions and the Environment: A Guide to the Use of Multicriteria Methods. Kluwer, Dordrecht, 2000.
[3] Hobbs, B.F., Rothkopf, M.F., O'Neill, R.P. and Chao, H.P. (eds.): The Next Generation of Electric Power Unit Commitment Models. Kluwer, Dordrecht, 2001.
[4] HEREN Report: European Daily Electricity Market. London, 2000.
[5] Nagornii, S. and Dozeman, G.: Liquidity risk in energy market. In: Mathematical Finance, eds. M. Kohlmann, S. Tang. Birkhaeuser, Basel, 2001, pp. 271-282.