"The asymptotic convergence rate of the Douglas-Rachford iteration for basis pursuit"
Abstract: For large-scale nonsmooth convex optimization problems, first-order methods involving only the subgradients are usually used because they scale well with the problem size. Douglas-Rachford (DR) splitting is one of the most popular first-order methods in practice. It is well known that DR applied to the dual problem is equivalent to the widely used alternating direction method of multipliers (ADMM) and the relatively new split Bregman method. As motivating examples, first we will briefly review several famous convex recovery results, including compressive sensing, matrix completion and PhaseLift, which represent a success story of the convex relaxation approach to certain NP-hard linear inverse problems in the last decade. Along this line, we will then discuss some preliminary results of our own work on a directional component analysis problem by nuclear norm minimization. When DR is applied to these convex optimization problems, one question of practical interest is how the parameters in DR affect the performance. We will show an explicit formula for the sharp asymptotic convergence rate of DR for simple L1 minimization. The analysis will be verified on examples of processing seismic data in the Curvelet domain. This is joint work with Laurent Demanet at MIT.
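To make the setting concrete: applied to basis pursuit, min ||x||_1 subject to Ax = b, the DR iteration alternates a soft-thresholding step (the proximal operator of the L1 norm) with a projection onto the affine constraint set. The following NumPy sketch is our own toy illustration, not the talk's analysis; the synthetic problem sizes and the step-size choice gamma are assumptions.

```python
import numpy as np

def soft_threshold(y, t):
    # proximal operator of t * ||.||_1 (componentwise shrinkage)
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def dr_basis_pursuit(A, b, gamma=1.0, iters=3000):
    """Douglas-Rachford iteration for min ||x||_1 subject to Ax = b (sketch)."""
    AAt_inv = np.linalg.inv(A @ A.T)
    def project(z):                        # projection onto {x : Ax = b}
        return z + A.T @ (AAt_inv @ (b - A @ z))
    y = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(y, gamma)       # prox of gamma * ||.||_1
        y = y + project(2 * x - y) - x     # reflect, project, average
    return soft_threshold(y, gamma)

# synthetic compressed-sensing style demo: recover a 3-sparse vector
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.0, -2.0, 1.5]
x_rec = dr_basis_pursuit(A, A @ x_true)
```

The parameter gamma is exactly the kind of tuning knob whose effect on the asymptotic rate the talk analyzes.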
"Infinity on a grid shell: solving hyperbolic equations on unbounded domains"
Abstract: It is commonly stated that hyperbolic equations cannot be solved on unbounded domains without the introduction of an artificial outer boundary. Such a boundary requires the implementation of transparent boundary conditions, introducing the outer boundary problem, and truncates the domain of interest, introducing the radiation extraction problem. I will show that both problems are solved cleanly through a method called hyperboloidal compactification that maps the infinite solution domain to a finite computational domain along with a suitable transformation of time. I will review the history of the method and present a few recent applications.
"Parallel Thinking Across the Sciences: An introduction to computational thinking and modeling"
Abstract: Multi-core computing has emerged as the infrastructure norm of all computing, even for desktops and mobile devices. Nearly every exploration in the social, life, and physical sciences requires the efficient implementation of complex models of increasing size and scale, along with the application of massively parallel computing and the analysis of big data. There has been a substantial lag, however, in understanding parallel thinking in problem formulation and solution in such environments. Parallel thinking is still foreign to many computer science faculty, and even more so to faculty of math and science courses, who have spent the last half-century devising serial formulae and algorithms, and ultimately overly simplified problems. While serving as an introduction to basic modeling tools for computational thinking, we will also begin to understand and exploit the parallelism in nature to illuminate the nature of parallelism. We will use modeling tools such as Excel, Vensim, and AgentSheets to explore system and agent models.
"Rewiring robust yeast cellular networks: implications for evolution and human disease"
Abstract: Robustness is the ability of organisms to maintain a phenotype in the face of genetic or environmental perturbations. Though robustness is critical for cell survival, it can pose barriers to adaptation and evolution. In this talk I will use the prism of yeast biology to explore how cells resolve these apparent tradeoffs. Epigenetically inherited protein aggregates known as prions can act as semi-stochastic switches that disrupt robustness to enable yeast cells to survive under extreme stress. By combining wet-lab data, bioinformatics and a deep-sequencing pipeline, I provide insight into the network rewiring mechanisms that underlie prion-mediated survival. Comparative genomics and population genetics modeling further suggest that these rewiring abilities may be under natural selection. In related work, I have leveraged these pipelines to identify targets of novel antifungal drug candidates that create evolutionary tradeoffs. Ultimately I will extend these studies to the identification of causative mutations in human disease data, using both data-driven and in-silico modeling systems biology approaches.
"Uncertainty Quantification in Computational Models"
Abstract: Models of physical systems generally involve parameters that are determined from empirical measurements, and therefore exhibit a certain degree of uncertainty. Estimating this uncertainty, and its propagation into computational model output predictions, is crucial for purposes of model validation, design optimization, assessment of predictability, and decision support. Recent years have seen significant developments in probabilistic methods for efficient uncertainty quantification (UQ) in computational models. These developments are grounded in the use of Polynomial Chaos (PC) expansions for representation of random variables. The utility and effectiveness of PC methods have been demonstrated in a range of physical models. While high-dimensionality remains nominally an ongoing challenge, great strides have been made in dealing with moderate dimensionality along with non-linearity and oscillatory dynamics. In this talk, I will discuss UQ in computational models, focusing on chemical systems. I will cover both the estimation of uncertain input parameters from empirical data, and the forward propagation of parametric uncertainty to model outputs. I will cover the basics of PC UQ methods with examples of their use. I will also highlight the need for accurate estimation of the joint probability density over the uncertain parameters, in order to arrive at meaningful estimates of model output uncertainties. In this context, I will discuss novel methods for estimating this density given partial information from legacy experiments, and in the absence of the original raw data.
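To make the PC idea concrete, here is a small sketch of our own (a toy example, not drawn from the talk): a one-dimensional Hermite PC expansion of the output y = exp(sigma * xi) for a standard Gaussian input xi. The coefficients are computed by Gauss-Hermite quadrature, and the output mean and variance then follow directly from orthogonality of the Hermite basis.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

def hermite_pc_coeffs(model, order=8, nquad=32):
    """PC coefficients c_k of model(xi), xi ~ N(0,1), in the He_k basis."""
    pts, wts = hermegauss(nquad)            # nodes/weights for weight exp(-xi^2/2)
    wts = wts / np.sqrt(2 * np.pi)          # renormalize to the Gaussian measure
    vals = model(pts)
    c = np.empty(order + 1)
    for k in range(order + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                         # coefficient vector selecting He_k
        # projection: c_k = E[y He_k] / E[He_k^2], with E[He_k^2] = k!
        c[k] = np.sum(wts * vals * hermeval(pts, ek)) / factorial(k)
    return c

sigma = 0.5
c = hermite_pc_coeffs(lambda xi: np.exp(sigma * xi))
pc_mean = c[0]                              # exact value: exp(sigma^2 / 2)
pc_var = sum(c[k]**2 * factorial(k) for k in range(1, len(c)))
```

For this lognormal output the exact moments are known in closed form, which makes the truncation and quadrature errors of the order-8 expansion easy to check.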
"Tractable Bayesian computation for inverse problems"
Abstract: Bayesian inference provides a natural framework for quantifying uncertainty in parameter estimates and model predictions, for combining heterogeneous sources of information, and for conditioning sequential predictions on data. Posterior simulation in Bayesian inference often proceeds via Markov chain Monte Carlo (MCMC), but the associated computational expense and convergence issues can present significant bottlenecks in large-scale problems. This lecture will discuss approximation techniques designed to accelerate Bayesian inference in the context of computationally intensive forward models and high (in principle, infinite)-dimensional parameter spaces.
First, we will present methods for controlled approximation of the forward model or parameter-to-observable map; these include sparse polynomial expansions as well as projection-based reduced order models that are useful when the forward model comprises a large set of differential equations. While it is natural to construct forward model approximations over the support of the prior distribution, more efficient approaches seek accuracy with respect to the posterior distribution, which typically concentrates on a small fraction of the prior support. In this setting, forward model approximations can be constructed in an online fashion, concurrently with MCMC or importance sampling.
Second, we will discuss methods for reducing the parameter dimension of Bayesian inverse problems in order to accelerate posterior sampling. While the posterior distribution may be explicitly high-dimensional, the intrinsic dimensionality of the inference problem is affected by prior information, limited data, and the smoothing properties of the forward operator. Often only a few directions are needed to capture the change from prior to posterior. We describe methods for identifying these directions through the solution of a generalized eigenvalue problem, and extend them to nonlinear problems where the Hessian of the log-likelihood varies over parameter space. Identifying these directions leads to efficient Rao-Blackwellized posterior sampling schemes, or to exact and mesh-independent Metropolis-Hastings formulations.
Numerical demonstrations of these methods will include chemical kinetic modeling, subsurface flow and transport, and other physical applications.
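In the linear-Gaussian setting, the generalized eigenvalue problem for the data-informed directions reduces to an eigendecomposition of the prior-preconditioned log-likelihood Hessian. The sketch below is our own minimal illustration with a synthetic rank-2 forward operator; the dimensions, noise precision, and identity prior are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank = 10, 2
G = rng.standard_normal((rank, n))        # synthetic low-rank forward operator
H = 4.0 * G.T @ G                         # Gauss-Newton Hessian of the neg. log-likelihood
Gamma_pr = np.eye(n)                      # prior covariance (identity for simplicity)
# With Gamma_pr = L L^T, the eigenpairs of L^T H L solve the generalized
# problem H v = lam * Gamma_pr^{-1} v; large lam marks directions where the
# data updates the prior the most.
L = np.linalg.cholesky(Gamma_pr)
lam, W = np.linalg.eigh(L.T @ H @ L)
lam = lam[::-1]                           # descending order
```

Since H has rank 2, only two generalized eigenvalues are nonzero: the posterior differs from the prior only in a two-dimensional subspace, which is exactly the dimension reduction the abstract describes.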
"Geodesic equation on the Universal Teichmüller space; Teichons; and Imaging"
Abstract: In this talk I will describe a way to parametrize a space of planar shapes by members of the coset space $PSL_2(\mathbb{R}) \backslash \mathrm{Diff}(S^1)$. These functions are also known as welding maps in Teichmüller theory. The geodesic equation on this space with the Weil-Petersson metric is the Euler-Poincaré equation on the group of diffeomorphisms of the circle $S^1$, or EPDiff($S^1$). It admits a class of soliton-like solutions, called Teichons, whose momenta are linear combinations of delta functions. The resulting simpler geodesic equation is more tractable from the numerical point of view. Applications of image matching with Teichons on a database of hippocampi will be demonstrated.
"Navier-Stokes equations on rotating surfaces: Regularity; algorithm; analysis; and simulation"
Abstract: Development of efficient algorithms with rigorous analysis for partial differential equations (PDE) on domains and surfaces requires knowledge of the regularity of solutions of the PDE. In this talk, for the Navier-Stokes equations on rotating spheres we discuss (i) the global regularity for real and complexified time; (ii) a high-order algorithm with stability and convergence analysis; (iii) long time simulation of a benchmark atmospheric flow model and justification of a turbulence theory.
"Fast direct solution techniques for elliptic partial differential equations"
Abstract: In many areas of science and engineering, the cost of solving a linear boundary value problem determines what can and cannot be modeled computationally. In some cases, it is possible to recast the problem as an integral equation which sometimes leads to a reduction in the dimensionality of the problem. Alternatively, the differential equation can be discretized directly with a finite element or finite difference method. Either way, one is left with having to solve a large linear system. The computational cost of directly inverting the linear system via Gaussian elimination grows as $O(N^3)$ where $N$ is the size of the system. Due to recent developments (multigrid, FMM, FFT, etc.), there are "fast" methods for most of these linear systems of equations. By "fast", we mean that the computational cost of solving the problem grows as $O(N \log^k N)$ where $k$ is a small integer, normally $k = 0, 1, $ or $2$. Many "fast" schemes are based on iterative techniques that build a sequence of approximate solutions that converges to the exact solution and often require the use of a problem specific preconditioner. In this talk, we will present methods that "directly" invert the system by exploiting structure in the matrix with a cost that grows linearly with the problem size. Such "fast direct methods" are more robust, versatile and stable than iterative schemes. They are also much faster for problems with multiple right-hand sides.
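As a one-dimensional stand-in for the hierarchical machinery the talk concerns (our own toy, not the talk's method), the tridiagonal system arising from a second-order discretization of a two-point boundary value problem can be factored and solved directly in O(N) operations by exploiting its banded structure:

```python
import numpy as np

def thomas(a, b, c, d):
    """Direct O(N) solve of a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    u = np.empty(n)
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# -u'' = f on (0, 1), u(0) = u(1) = 0, second-order central differences
N = 199
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
u = thomas(np.full(N, -1.0), np.full(N, 2.0), np.full(N, -1.0),
           h**2 * np.pi**2 * np.sin(np.pi * x))
```

Once the factorization is computed, each additional right-hand side costs only a forward and backward sweep, which is why direct methods shine for problems with many right-hand sides.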
"Fast recovery of far-field signals from gravitational perturbations"
Abstract: Time-domain simulation of black hole perturbations on a finite computational domain requires the introduction of a fictitious outer boundary. On this surface we seek to i) supply exact radiation boundary conditions and ii) recover the asymptotic (far-field) signal which would reach large distances, including infinity. Far-field signals are especially important as they encode information about the physical system. In this talk I show how both radiation boundary conditions and signal recovery are easily handled via a time-domain convolution of the solution recorded at the outer boundary with a small number of damped exponentials. In turn, each exponential's strength and damping rate arises from a sum-of-poles approximation of a Laplace frequency domain kernel. I will motivate the origin of the frequency domain kernel by analogy to the ordinary 3+1 wave equation, as well as the numerical techniques needed for its sum-of-poles approximation. The technique is used to study late-time decay tails at infinity, teleportation of a signal between two finite radial values, and signals from binary black hole systems.
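A key reason a sum-of-poles (equivalently, sum of damped exponentials) kernel makes the boundary convolution cheap is that convolution with a single damped exponential obeys a one-step recursion, so the full time history never needs to be stored. The sketch below (strengths and rates are made up for illustration) implements that recursion for a simple rectangle-rule discretization:

```python
import numpy as np

def exp_sum_convolution(f, dt, strengths, rates):
    """Discrete convolution of samples f with sum_j a_j * exp(-b_j t).

    For each mode the recursion I_n = exp(-b dt) I_{n-1} + dt f_n reproduces
    the direct sum I_n = sum_{k<=n} dt exp(-b (t_n - t_k)) f_k exactly,
    so the cost is O(N) per mode instead of O(N^2).
    """
    out = np.zeros(len(f))
    for a, b in zip(strengths, rates):
        decay, I = np.exp(-b * dt), 0.0
        for k, fk in enumerate(f):
            I = decay * I + dt * fk
            out[k] += a * I
    return out

rng = np.random.default_rng(2)
f = rng.standard_normal(50)
dt = 0.1
fast = exp_sum_convolution(f, dt, [1.0, 0.5], [2.0, 5.0])
```

The recursion agrees with the brute-force double sum to machine precision, which is an easy correctness check for more elaborate quadrature weights.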
"A short introduction to the devising of the HDG methods"
Abstract: The hybridizable discontinuous Galerkin (HDG) methods appeared a few years ago in the framework of steady-state diffusion problems. We show how to devise them, show why they can be efficiently implemented, and argue why they are more accurate than other DG methods. Finally, we establish a link between them and the classical mixed methods, the popular continuous Galerkin method and the finite volume methods. Applications to convection-diffusion, nonlinear elasticity and incompressible and compressible fluid flow will be briefly discussed.
"Fourier Continuation algorithms; fast transforms; and techniques for PDE Solution"
Abstract: Fourier Continuation (FC) algorithms allow for highly accurate approximation and convergence in a solver for Partial Differential Equations (PDEs) by allowing Fourier series to be applied to non-periodic functions. FC algorithms with FFT speed have been developed, and they additionally exhibit minimal pollution (i.e., growth in the global error with the scale of the problem), with spectral error decay away from the boundaries and, at minimum, high-order polynomial-interpolation error near the boundaries. Various methods for applying the FC methodology will be discussed along with demonstrations.
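FC constructs smooth periodic extensions via carefully blended polynomial continuations; as a much cruder stand-in that still shows why extension helps (our own toy, not the FC algorithm itself), an even reflection already removes the boundary jump of a non-periodic function and sharply reduces the error of a truncated Fourier interpolant:

```python
import numpy as np

def lowpass_error(samples, modes_kept):
    """Max error after truncating the Fourier interpolant of `samples`."""
    N = len(samples)
    c = np.fft.fft(samples)
    mask = np.zeros(N, dtype=bool)
    mask[: modes_kept // 2 + 1] = True     # modes 0 .. K/2
    mask[-(modes_kept // 2):] = True       # modes -K/2 .. -1
    approx = np.fft.ifft(np.where(mask, c, 0)).real
    return np.max(np.abs(approx - samples))

M, K = 2048, 33
x = np.arange(M) / M
f = np.exp(x)                              # non-periodic on [0, 1)
err_direct = lowpass_error(f, K)           # Gibbs: O(1) error at the boundary
g = np.concatenate([f, f[::-1]])           # even reflection: continuous, period 2
err_continued = lowpass_error(g, K)        # much smaller with the same mode count
```

The reflection only buys continuity (coefficients decaying like 1/k^2); FC's polynomial blending makes the extension smooth to high order, which is what yields the spectral accuracy claimed above.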
"To infinity and beyond! (And back again): A rigorously-defined path integral for supersymmetric quantum mechanics and an easy proof of an index theorem"
Abstract: The path integrals of quantum physics assume the idea of integration extends to allow integrations over the infinite-dimensional space of paths. Heuristic reasoning suggests that, in certain cases, these infinite-dimensional integrals will localize to finite-dimensional subspaces. If correct, this would imply the equivalence of quantities defined in purely finite-dimensional settings. In this talk, I will describe how to make Feynman's time-slicing approach work to give mathematically precise meaning to the path integrals of N=1 supersymmetric quantum mechanics, and show that the heuristic reasoning proves to be correct in this case. The above-mentioned equivalence is the Gauss-Bonnet-Chern theorem. The convergence of the time-slicing approximation, with bounds on the errors, makes this a nice test case for numerical estimates in lattice field theories with fermions.
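As a hedged numerical illustration of the time-slicing idea in ordinary (not supersymmetric) quantum mechanics: composing many short-time Euclidean kernels turns the path integral into a product of matrices, and the ground-state energy of the harmonic oscillator then emerges from the dominant eigenvalue of a single slice. All discretization choices below are our own.

```python
import numpy as np

# Imaginary-time slicing for H = p^2/2 + x^2/2 (hbar = m = omega = 1)
L, n, eps = 5.0, 200, 0.05                 # box half-width, grid size, slice width
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
V = 0.5 * x**2
X, Xp = np.meshgrid(x, x, indexing="ij")
# symmetric short-time kernel: free propagator times exp(-eps (V + V')/2)
K = np.exp(-(X - Xp)**2 / (2 * eps) - eps * (V[:, None] + V[None, :]) / 2)
K *= dx / np.sqrt(2 * np.pi * eps)         # quadrature weight and normalization
# after many slices the dominant eigenvalue exp(-eps * E0) wins out
lam = np.max(np.linalg.eigvalsh(K))
E0 = -np.log(lam) / eps                    # ground-state energy; exact value is 1/2
```

The error here is controlled by the slice width eps, the grid spacing, and the box truncation, so shrinking them in tandem gives exactly the kind of convergence-with-bounds test case the abstract mentions.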
"Towards optimal convergence of a compact scheme for fourth-order differential equations with application to the Navier-Stokes system"
Abstract: We construct a fourth-order compact scheme for the Navier-Stokes system. We analyze in detail the convergence of the scheme for a one-dimensional biharmonic problem and prove its optimal (fourth-order) convergence. We extend the convergence study to two related time-dependent problems. The fourth-order compact scheme may also be applied, with some extensions, to the pure streamfunction formulation of the Navier-Stokes system. Numerical results are shown for problems with exact solutions, which indicate the fourth-order accuracy of our scheme. We also show numerical results for the driven cavity problem.
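The flavor of compact differencing can be seen on a simpler model problem (the classical fourth-order compact scheme for u'' = f, not the paper's biharmonic scheme): a three-point stencil attains fourth order by also combining f at the neighboring nodes. A small sketch verifying the convergence order on a problem with exact solution:

```python
import numpy as np

def compact_poisson_error(N):
    """Fourth-order compact scheme for u'' = f on (0,1), u(0) = u(1) = 0."""
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1 - h, N)
    f = -np.pi**2 * np.sin(np.pi * x)      # manufactured so that u = sin(pi x)
    # left side: standard three-point second difference (u_{i-1} - 2u_i + u_{i+1})/h^2
    A = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    # right side: compact weighting (f_{i-1} + 10 f_i + f_{i+1})/12
    # (boundary terms f(0) = f(1) = 0 vanish for this f)
    B = (np.diag(np.full(N, 10.0)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / 12.0
    u = np.linalg.solve(A, B @ f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e_coarse = compact_poisson_error(32)
e_fine = compact_poisson_error(64)         # halving h should cut the error ~16x
```

The observed error ratio near 16 under mesh halving is the signature of the optimal fourth-order convergence the abstract refers to.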
Abstract: After a brief discussion of the importance of introducing computational science into the undergraduate curriculum, I will provide a number of examples of how computational algorithms can aid in providing physical insight, particularly about abstract concepts that students frequently have difficulty appreciating.