Many applications in engineering, science, and statistics require interpolation or extrapolation from data and involve systems with a very large number of parameters. Generic examples are computer-based simulations, data mining, and forecasting. An indispensable foundation in every such situation is a mathematical or statistical model, which allows one to represent the underlying real-world phenomenon, or at least some simplification thereof, in a form suitable for computation and mathematical analysis. The model formulation very often involves multivariate functions f(x_1, ..., x_d), where the dimension d may be very large. The problem of interpolation and extrapolation is then to find a function that fits the given data in a suitable sense. Problems of this kind are so ubiquitous that several mathematical disciplines are devoted to them, each using its own language: in approximation theory and numerical analysis, the terms function identification, function recovery, and function reconstruction are common; statisticians speak of regression, function estimation, or function fitting; and to learn a function is a widely used phrase in machine learning and statistical learning theory.
A special focus of this workshop will be on the connections between (classical) approximation theory, asymptotic geometric analysis, and information-based complexity (IBC). The latter two are relatively young and, until now, essentially independent areas dealing with high-dimensional problems, while the first can be seen as a common ancestor and serves as a source of motivation, mathematical techniques, and open problems for both. The goal is to explore the relations between these fields further, in particular their connections to high-dimensional numerical integration, discrepancy theory, and the dispersion of point sets in high dimensions. Very recent results reveal the deep impact of methods from asymptotic geometric analysis on IBC, a promising and exciting connection that will be pursued further during this workshop.