# Faculty Colloquium in the Winter Semester 2016/17

**Time and place:** The colloquium takes place on selected Wednesdays, with two talks each, from 2:30 to 3:30 pm and from 4:00 to 5:00 pm, in lecture hall 3 (MI 00.06.011).

During the break, drinks and pretzels are provided in the Magistrale.

**Contact:** Felix Krahmer, Robert König.

Talks from the previous semester: Summer Semester 2016.

## Talks in the Winter Semester 2016/17

#### Steffen Lauritzen — Multivariate total positivity, Simpson's paradox, and the structure of M-matrices

Simpson's paradox describes the phenomenon that two quantities can be positively associated in a population, but negatively associated within subpopulations. This can lead to quite misleading interpretations of statistical information, and it is therefore important to identify situations where it cannot happen. When a distribution is multivariate totally positive of order two (MTP2), this cannot happen. The MTP2 property is closed under marginalization, conditioning, and increasing transformations, and has a number of other stability properties. In addition, it has fundamental implications for conditional independence relations. A multivariate Gaussian distribution is MTP2 if and only if its covariance matrix is an inverse M-matrix, i.e., if all off-diagonal elements of the concentration matrix (inverse covariance matrix) are non-positive. For other types of distributions, other conditions apply.

In the talk I shall give examples of Simpson's paradox, explain the basic properties associated with the MTP2 condition, give examples of MTP2 distributions and characterize these in a number of special cases. I shall also indicate some of the fundamental consequences that this property implies for conditional independence structures.
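The Gaussian characterization above lends itself to a quick numerical check. The sketch below, using a small covariance matrix invented purely for illustration, tests whether all off-diagonal entries of the concentration matrix are non-positive:

```python
import numpy as np

# Hypothetical 3x3 covariance matrix, chosen only for illustration.
Sigma = np.array([[2.0, 1.0, 0.5],
                  [1.0, 2.0, 1.0],
                  [0.5, 1.0, 2.0]])

# Concentration (inverse covariance) matrix.
K = np.linalg.inv(Sigma)

# A Gaussian is MTP2 iff all off-diagonal entries of K are <= 0,
# i.e. Sigma is an inverse M-matrix.
off_diag = K[~np.eye(3, dtype=bool)]
is_mtp2 = bool(np.all(off_diag <= 1e-12))
print(is_mtp2)
```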

#### Sabine Le Borne — Hierarchical matrices in scattered data approximation

Scattered data approximation deals with the problem of producing a function \(s\) that in some sense represents some given (typically scattered) data and allows one to make predictions at other times/locations/parameter settings. Applications are quite diverse: surface reconstruction, image compression, numerical solution of PDEs (with their diverse applications), to name just a few.

In a scattered data interpolation problem, the interpolant is typically of the form \( s(x)=\sum_{i=1}^N c_i b_i (x) \) for some given functions \( b_i \). The coefficient vector \( c\in\mathbb R^N \) of the interpolant may be computed as the solution of a linear system \( Bc=y \) which results from enforcing the interpolation conditions for the given scattered data. While properties of the matrix \( B \) obviously depend on the choice of functions \( b_i \), several of the most commonly used approaches yield highly ill-conditioned, dense matrices \( B \), making it challenging to solve the linear system \( Bc=y \), and hence the scattered data interpolation problem. This talk deals with these challenges and some possible strategies for the solution of this system \( Bc=y \).
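As a minimal illustration of the system \( Bc=y \), the sketch below interpolates invented 1-D data with Gaussian radial basis functions \( b_j(x)=\exp(-(x-x_j)^2/\varepsilon^2) \), one common choice and not specific to this talk:

```python
import numpy as np

# Invented scattered 1-D nodes and data to interpolate.
x = np.array([0.00, 0.07, 0.19, 0.31, 0.42, 0.50, 0.61, 0.73, 0.86, 1.00])
y = np.sin(2 * np.pi * x)
eps = 0.1  # shape parameter of the Gaussian basis functions

# Interpolation matrix B with B[i, j] = b_j(x_i); enforcing the
# interpolation conditions s(x_i) = y_i gives the linear system B c = y.
B = np.exp(-((x[:, None] - x[None, :]) / eps) ** 2)
c = np.linalg.solve(B, y)

# The interpolant s(x) = sum_j c_j b_j(x) reproduces the data at the
# nodes (up to conditioning effects).
s = B @ c
residual = np.max(np.abs(s - y))
print(residual)
```

For wider basis functions (larger `eps`), `B` rapidly becomes ill-conditioned, which is exactly the difficulty the talk addresses.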

In particular, we study the application of techniques from the \({\cal H}\)-matrix framework both for the approximation of the system matrix \(B\) itself and for the construction of preconditioners. \({\cal H}\)-matrices provide a data-sparse matrix format that permits storage and matrix arithmetic in complexity \( {\cal O}(N\log^{\alpha} N) \) for moderate \(\alpha\). It turns out that several typical sets of basis functions from the (scattered data) literature lead to matrices \(B\) that fit into this framework, yielding a cost-effective approximation scheme to be illustrated in this talk.

#### Dario Bini — Matrix equations, matrix functions, and infinite Toeplitz matrices

Matrix equations and matrix functions are encountered in the solution
of diverse real-world problems. A meaningful example is the
class of quadratic matrix equations of the kind \(AX^2+BX+C=0\), which
play a fundamental role in the analysis of Quasi-Birth-Death
stochastic processes; here, \(A,B\) and \(C\) are given \(n\times n\)
matrices and \(X\) is the matrix unknown. A typical example of a matrix
function is the matrix exponential
\(\exp(A)=\sum_{i=0}^\infty\frac1{i!}A^i\), which is encountered, for
instance, in the analysis of complex networks and in the solution of
linear differential equations.
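The series definition of the matrix exponential translates directly into a naive truncated-series computation, sketched below; robust implementations instead use techniques such as scaling-and-squaring, so this is only an illustration of the definition:

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Truncated Taylor series exp(A) ~ sum_{i<terms} A^i / i!.
    Adequate for small ||A||; not a production algorithm."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for i in range(1, terms):
        term = term @ A / i       # A^i / i!, built incrementally
        result = result + term
    return result

# For a diagonal matrix the exponential is known exactly,
# so we can check the truncation error.
A = np.diag([0.5, -1.0])
E = expm_taylor(A)
err = np.max(np.abs(E - np.diag(np.exp([0.5, -1.0]))))
print(err)
```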

In certain cases, say, in the tandem Jackson queue or in two-dimensional
random walks in the quarter plane, the input matrices have infinite
size and are quasi-Toeplitz, that is, they can be expressed in the
form \(A=T(a)+E\), where \(T(a)=(a_{j-i})_{i,j\in\mathbb Z^+}\) is the
Toeplitz matrix associated with the bi-infinite sequence
\(\{a_k\}_{k\in\mathbb Z}\), while \(E=(e_{i,j})_{i,j\in\mathbb Z^+}\) is
such that \(\sum_{i,j\in\mathbb Z^+}|e_{i,j}|<+\infty\).
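A finite truncation of the Toeplitz part \(T(a)\) can be sketched as follows; the tridiagonal symbol is a made-up example in the spirit of a nearest-neighbour random walk:

```python
import numpy as np

def toeplitz_truncation(a, n):
    """n x n leading section of T(a) = (a_{j-i}), where a is a dict
    mapping offsets k to coefficients a_k (all other a_k are zero)."""
    T = np.zeros((n, n))
    for k, ak in a.items():
        for i in range(n):
            j = i + k              # entry (i, j) carries a_{j-i}
            if 0 <= j < n:
                T[i, j] = ak
    return T

# Invented symbol with a_{-1}, a_0, a_1 only: a tridiagonal
# Toeplitz matrix, constant along each diagonal.
a = {-1: 0.3, 0: -1.0, 1: 0.7}
T = toeplitz_truncation(a, 5)
print(T)
```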

In this talk we introduce and examine the problem of solving quadratic
matrix equations and computing matrix functions which model stochastic
processes, and put specific emphasis on the computational issues. We
discuss the case of finite matrices and then we deal with the more
difficult problem where the coefficients are infinite quasi-Toeplitz
matrices.

More specifically, we show that the set of quasi-Toeplitz matrices,
endowed with a suitable norm, is a Banach space, and that the subset
formed by matrices \(T(a)+E\), where the sequence \(\{a_k\}_{k\in\mathbb
Z}\) defines an analytic function \(a(z)=\sum_{i\in\mathbb Z}a_iz^i\)
over a suitable domain, is a matrix algebra endowed with a
sub-multiplicative norm. Relying on these properties, we introduce a
general framework for dealing with infinite quasi-Toeplitz matrices and
present effective algorithms for solving matrix equations and
computing matrix functions.

#### Dierk Schleicher — Newton’s method as an interesting dynamical system and as an unexpectedly efficient root finder

We discuss Newton’s method as a root finder for polynomials of large degrees, and as an interesting dynamical system in its own right. There is recent progress on Newton’s method in at least three directions:

1. We present recent experiments on finding all roots of a number of large polynomials with Newton's method, for degrees exceeding 100 million, on standard laptops with surprising ease and in short time, with observed complexity \(O(d \log^2 d)\) (joint with Simon Schmitt and Robin Stoll).
2. We outline theory about the complexity of Newton’s method as a root finder: unlike various other known methods, Newton as a root finder has both good theory and good implementation results in practice (partly joint work with Magnus Aspenberg, Todor Bilarev, Bela Bollobas, and Malte Lackmann).
3. We discuss Newton’s method as a dynamical system: if \(p\) is a polynomial, then the Newton map is a rational map that very naturally “wants to be iterated”. Among all rational maps, Newton’s method has the best understood dynamics, and these dynamical systems can be classified (in the sense of a theory developed by Bill Thurston). As a byproduct, we offer an answer to a question of Steve Smale on existence of attracting cycles of higher period (joint work with Russell Lodge and Yauhen Mikulich).
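The Newton map \(N(z) = z - p(z)/p'(z)\) mentioned above can be iterated in a few lines; the sketch below is a plain, untuned iteration for a sample polynomial, not the authors' large-scale implementation:

```python
import numpy as np

def newton_root(coeffs, z0, steps=60):
    """Iterate the Newton map N(z) = z - p(z)/p'(z) for the polynomial
    with the given coefficients (highest degree first), starting at z0."""
    dcoeffs = np.polyder(coeffs)
    z = z0
    for _ in range(steps):
        z = z - np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
    return z

# p(z) = z^3 - 1; a starting point near the root 1 stays in its basin.
coeffs = np.array([1.0, 0.0, 0.0, -1.0])
r = newton_root(coeffs, 1.0 + 0.1j)
print(r)
```

Starting points elsewhere in the complex plane converge to the other cube roots of unity; the basin boundaries form the fractal structure that makes the Newton map interesting as a dynamical system.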

#### Melvin Leok — Geometric Numerical Integration and Computational Geometric Mechanics

Symmetry, and the study of invariant and equivariant objects, is a deep and unifying principle underlying a variety of mathematical fields. In particular, geometric mechanics is characterized by the application of symmetry and differential geometric techniques to Lagrangian and Hamiltonian mechanics, and geometric integration is concerned with the construction of numerical methods with geometric invariant and equivariant properties. Computational geometric mechanics blends these fields, and uses a self-consistent discretization of geometry and mechanics to systematically construct geometric structure-preserving numerical schemes.

In this talk, we will introduce a systematic method of constructing geometric integrators based on a discrete Hamilton's variational principle. This involves the construction of discrete Lagrangians that approximate Jacobi's solution to the Hamilton-Jacobi equation. We prove that the resulting variational integrator is order-optimal, and when spectral basis elements are used in the Galerkin formulation, one obtains geometrically convergent variational integrators.
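As a small illustration of how a discrete Lagrangian yields a structure-preserving scheme (a textbook example, not the construction of this talk): the trapezoidal discrete Lagrangian \(L_d(q_0,q_1)=\tfrac h2\,[L(q_0,v)+L(q_1,v)]\) with \(v=(q_1-q_0)/h\) and \(L(q,v)=\tfrac12 v^2-V(q)\) produces the Störmer-Verlet method, whose energy error stays bounded over long times:

```python
import numpy as np

def verlet(q0, p0, h, steps):
    """Stormer-Verlet for q'' = -V'(q) with V(q) = q^2 / 2
    (harmonic oscillator), the variational integrator of the
    trapezoidal discrete Lagrangian."""
    q, p = q0, p0
    for _ in range(steps):
        p_half = p - 0.5 * h * q      # half kick: force = -V'(q) = -q
        q = q + h * p_half            # drift
        p = p_half - 0.5 * h * q      # half kick
    return q, p

# Integrate to t = 1000; the exact energy is 0.5 for q(0)=1, p(0)=0.
q, p = verlet(1.0, 0.0, 0.01, 100_000)
energy_error = abs(0.5 * (p**2 + q**2) - 0.5)
print(energy_error)
```

A non-symplectic method of the same order (e.g. explicit Euler) would show linear energy drift over such a long time interval.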

We will also introduce the notion of a boundary Lagrangian, which is the analogue of Jacobi's solution in the setting of Lagrangian PDEs. This provides the basis for developing a theory of variational error analysis for multisymplectic discretizations of Lagrangian PDEs. Equivariant approximation spaces will play an important role in the construction of geometric integrators that exhibit multimomentum conservation properties, and we will describe a general construction for group-equivariant interpolants on symmetric spaces with applications to Lorentzian metrics.

#### Nick Trefethen — Cubature, approximation, and isotropy in the hypercube

The hypercube is the standard domain for computation in higher
dimensions. We describe two respects in which the anisotropy of this
domain has practical consequences. The first is a matter well known
to experts (and to Chebfun users): the importance of axis-alignment
in low-rank compression of multivariate functions. Rotating a
function by a few degrees in two or more dimensions may change its
numerical rank completely. The second is new. The standard notion
of degree of a multivariate polynomial, total degree, is isotropic,
i.e. invariant under rotation. The hypercube, however, is highly
anisotropic. We present a theorem showing that as a consequence,
the convergence rate of multivariate polynomial approximations in
a hypercube is determined not by the total degree but by the
*Euclidean degree*, defined in terms of not the 1-norm but the 2-norm
of the exponent vector \(\mathbf k\) of a monomial \(x_1^{k_1}\cdots
x_s^{k_s}\). The consequences, which relate to established ideas
of cubature and approximation going back to James Clerk Maxwell, are
exponentially pronounced as the dimension of the hypercube increases.
The talk will include numerical demonstrations.
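The two degree notions differ most for "mixed" monomials, which is where the dimension dependence comes from; a small sketch:

```python
import numpy as np

# Degree notions for a monomial x_1^{k_1} ... x_s^{k_s} with exponent
# vector k: total degree = ||k||_1, Euclidean degree = ||k||_2.
def total_degree(k):
    return int(np.sum(k))

def euclidean_degree(k):
    return float(np.sqrt(np.sum(np.asarray(k, dtype=float) ** 2)))

# In 10 dimensions, x_1 x_2 ... x_10 has total degree 10 but Euclidean
# degree only sqrt(10) ~ 3.16, while x_1^10 has both degrees equal to 10.
k_mixed = [1] * 10
k_single = [10] + [0] * 9
print(total_degree(k_mixed), euclidean_degree(k_mixed))
print(total_degree(k_single), euclidean_degree(k_single))
```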