Speaker: Mario Ullrich
Lecture room: Exactum, C124
at the Department of Mathematics and Statistics, University of Helsinki
In two talks I'll give an overview of some recent and not-so-recent developments in the area of high-dimensional integration and approximation of functions based on function evaluations.
The emphasis is on information-based complexity, i.e., we ask for the minimal amount of information (i.e., the number of measurements) needed by any algorithm to achieve a prescribed error for all inputs. Hence, upper error bounds are complemented by lower bounds.
In Part 1, I'll show that in many cases, certain (unregularized) least squares methods based on "random" information, such as function evaluations, can catch up with arbitrary algorithms based on arbitrary linear information, i.e., the best we can do theoretically.
After a detailed introduction to the field, we will discuss the following:
(1) random data for L_2-approximation in Hilbert spaces,
(2) approximation in other norms for general classes of functions, and
(3) "Does random data contain optimal data?" (Spoiler: The answer is often: Yes!)
In Part 2, the focus is on high-dimensional integration and approximation, and the dependence of the error on the dimension. Here, we mainly discuss the "curse of dimension" for classical (isotropic) spaces C^k on domains, and the fact that the (supposedly ineffective) product rules are in fact optimal in high dimensions.
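As a toy illustration of the product rules discussed here, the following sketch (my own example, not from the talk) applies a tensor-product midpoint rule on [0,1]^d; the node count n^d growing exponentially in d is exactly the cost behavior behind the curse of dimension.

```python
import numpy as np
from itertools import product

def product_rule(f, d, n):
    """Tensor-product midpoint rule on [0,1]^d with n nodes per axis (n**d total)."""
    nodes_1d = (np.arange(n) + 0.5) / n
    total = 0.0
    for point in product(nodes_1d, repeat=d):
        total += f(np.array(point))
    return total / n ** d

# Example: integrate f(x) = exp(-sum(x)) over [0,1]^d; exact value is (1 - e^{-1})^d
f = lambda x: np.exp(-x.sum())
for d in (1, 2, 3):
    exact = (1.0 - np.exp(-1.0)) ** d
    approx = product_rule(f, d, n=16)
    print(f"d={d}: {16**d} nodes, error {abs(approx - exact):.1e}")
```

The per-axis accuracy is good, but reaching it costs 16^d evaluations; the surprising point of the talk is that for isotropic C^k classes no algorithm can do essentially better.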
I'll mention several open problems in the field.
In both parts, I'll try to introduce all the necessary concepts in detail, so no prior expertise should be needed to follow the talks.