Center seminars are organized by the Mathematics Department and the College of Engineering. The Center gratefully acknowledges support from the UMassD Office of the Provost.

"Order Reduction in Runge-Kutta Time-Stepping for Initial Boundary Value Problems"

Abstract: When advancing a time-dependent PDE via Runge-Kutta methods, the numerical solution may exhibit a convergence order less than the actual order of the scheme. We demonstrate that this order reduction phenomenon is the norm rather than the exception; in fact, it can be observed in very simple problems. Geometrically, order reduction stems from boundary layers, produced by the fact that the stage values are too accurate near the boundary. A modal analysis reveals under which circumstances these boundary layers persist over many time steps. We then present two remedies for order reduction: (a) a property of Runge-Kutta schemes, called weak stage order, which is compatible with DIRK schemes; and (b) modified boundary conditions. Both methodologies render the numerical boundary layers as accurate as the order of the scheme itself.
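For context, a standard scalar model in which this kind of order reduction is studied (a common stiff test problem, not taken from this talk) is the Prothero-Robinson equation:

```latex
% Prothero--Robinson model problem; the exact solution is y(t) = \phi(t).
\[
  y'(t) = \lambda \bigl( y(t) - \phi(t) \bigr) + \phi'(t),
  \qquad y(0) = \phi(0).
\]
% For a Runge--Kutta method of classical order p and stage order q, the
% observed convergence order in the stiff regime |h\lambda| \gg 1
% typically drops to roughly \min(p, q+1) rather than p.
```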

"Scientific Computing Using Consumer Video-Gaming Hardware Devices"

Abstract: The performance of commodity video-gaming hardware (consoles, graphics cards, tablets, etc.) has been advancing at a rapid pace owing to strong consumer demand and stiff market competition. Gaming hardware devices are currently amongst the most powerful and cost-effective computational technologies available in quantity. In this article, we evaluate a sample of current-generation video-gaming hardware devices for scientific computing and compare their performance with specialized supercomputing general-purpose graphics processing units (GPGPUs). For this evaluation, we use the OpenCL SHOC benchmark suite, which measures hardware performance on a variety of scientific application kernels, along with Einstein@Home, a popular public distributed-computing application in gravitational physics. Full research paper available at: https://arxiv.org/abs/1607.05537

Abstract: In 1989 Robert Bartnik proposed a definition of quasilocal mass in general relativity. The Bartnik mass is known to enjoy several attractive properties but is not straightforward to evaluate. I will talk about a first-order estimate for a natural modification of Bartnik's definition applied to small perturbations of spheres in Euclidean space. In particular I will describe an application to the small-sphere limit in arbitrary time-symmetric slices.
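For context (not part of the abstract), a simpler quasilocal quantity often compared with the Bartnik mass in the time-symmetric setting is the Hawking mass of a closed surface $\Sigma$ with area $|\Sigma|$ and mean curvature $H$:

```latex
\[
  m_H(\Sigma) \;=\; \sqrt{\frac{|\Sigma|}{16\pi}}
  \left( 1 - \frac{1}{16\pi} \int_\Sigma H^2 \, dA \right).
\]
```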

Department of Mathematics and Statistics, University of New Mexico

2:00 PM

Textiles 105

"Sparse, multidomain, modal spectral methods for neutron stars"

Abstract: Spectral methods are often characterized as either nodal (collocation or pseudospectral, with point values as unknowns) or modal (with expansion coefficients as unknowns). We present a sparse, multidomain, modal spectral method for neutron star binaries (systems of two gravitationally interacting clumps of fluid). In particular, we focus on the modal treatment of non-spherical stellar surfaces. Two issues arise. The first is loss of regularity in the stellar density at the surface. The second, pertinent for the numerical construction of stars via iterative solvers, is convergence breakdown as a stellar surface distorts. We address both issues (although the first only partially). The talk will focus on Newtonian binaries, equilibrium configurations of the Euler equations coupled to Newtonian gravity (thereby avoiding any mention of relativistic physics).

Department of Mathematical Sciences, George Mason University

1:00 PM

Textiles 105

"Optimal control of fractional order PDEs"

Abstract: Diffusion is the tendency of a substance to spread evenly into an available space, and is one of the most common physical processes. The classical models of diffusion lead to well-known equations. In recent times, however, it has become evident that many of the assumptions involved in these models are not always satisfactory, or even realistic. Consequently, different models of diffusion have been proposed, fractional diffusion being one of them. The latter has received a great deal of attention recently, mostly fueled by applications in diverse areas such as finance, turbulence and quasi-geostrophic flow models, image processing, peridynamics, biophysics, and many others.

This talk will serve as an introduction to fractional diffusion equations, with fractional derivatives in both space and time. A novel PDE result of Caffarelli and Silvestre (2007) has led to innovative schemes for realizing fractional-order operators. We will discuss these numerical methods and their application to PDE-constrained optimization problems.
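For reference, the spectral fractional Laplacian and the Caffarelli-Silvestre extension mentioned above can be written as follows (standard formulas; $d_s$ is a normalization constant):

```latex
% Spectral definition on a bounded domain with Dirichlet eigenpairs
% (\lambda_k, \varphi_k):
\[
  (-\Delta)^s u \;=\; \sum_{k} \lambda_k^{\,s}\,(u,\varphi_k)\,\varphi_k,
  \qquad 0 < s < 1.
\]
% Caffarelli--Silvestre: (-\Delta)^s is realized as a
% Dirichlet-to-Neumann map for a degenerate local problem in one extra
% dimension y > 0:
\[
  \nabla \cdot \bigl( y^{1-2s} \nabla U \bigr) = 0, \qquad U(x,0) = u(x),
  \qquad
  (-\Delta)^s u(x) \;=\; -\,d_s \lim_{y \to 0^+} y^{1-2s}\,\partial_y U(x,y).
\]
```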

Department of Biostatistics and Center for Statistical Sciences, Brown University School of Public Health

3:00 PM

Textiles 105

"Risk Prediction Modeling in the National Lung Screening Trial"

Abstract: In the National Lung Screening Trial (NLST), a 20% relative reduction in lung cancer mortality was observed using reduced-dose helical computed tomography (CT) relative to chest X-ray screening in older smokers. This sub-study aims to determine how the observed nodules and their associated features influence lung cancer diagnosis. Methods: In 26,455 participants who underwent at least one CT screen, the sensitivity, specificity, positive predictive value, and negative predictive value for lung cancer were determined separately for different types of nodules. The relative risk of lung cancer was determined as the ratio of lung cancer incidence in the nodule-detected group to that in the non-nodule group. A two-stage modeling approach was then applied to determine how the observed nodules and their associated features influence lung cancer diagnosis. In Stage 1, a Cox proportional hazards model was fitted at the participant level to assess whether the presence of a nodule at baseline increases the hazard of developing lung cancer. The time-varying effects of nodule type and other clinical variables were also evaluated in this stage. In Stage 2, a generalized linear mixed model was fitted on the observed nodules to determine how the associated nodule features affect the probability of lung cancer diagnosis in the same lobe. Conclusions: Clinical and nodule features can be used to better stratify the risk of lung cancer diagnosis and improve CT screening performance.
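The screening-performance metrics named in the abstract (sensitivity, specificity, PPV, NPV) all come from a 2x2 table of counts; a minimal sketch with made-up counts (purely illustrative, not NLST data):

```python
# Screening-performance metrics from a 2x2 confusion table
# (toy counts, purely illustrative -- not NLST data).

def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from raw counts."""
    return {
        "sensitivity": tp / (tp + fn),  # P(test+ | cancer)
        "specificity": tn / (tn + fp),  # P(test- | no cancer)
        "ppv": tp / (tp + fp),          # P(cancer | test+)
        "npv": tn / (tn + fn),          # P(no cancer | test-)
    }

m = screening_metrics(tp=90, fp=910, fn=10, tn=8990)
print(m)
```

Note how a low prevalence drives the PPV down (here 9%) even with 90% sensitivity, which is why risk stratification by nodule features matters.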

April 13, 2016

Wednesday

Jane Hyo Jin Lee

Stonehill College

1:00 PM

Textiles 105

"Molecular-scale Mechanisms of Electroporation"

Abstract: Electroporation, also known as electropermeabilization, is a membrane phenomenon in which the integrity of the cell membrane is breached by an externally applied electric field. Although electroporation is used in many fields, such as biology, biotechnology, and medicine, the molecular mechanism of pore creation is not fully understood, and this lack of understanding limits our ability to improve electroporation protocols. In this talk, we propose a new theory that provides insight into the initiation of pore formation in biological cell membranes exposed to an external electric field. Using thorough structural and energetic analysis, we demonstrate that the formation of a pore is driven by the reorganization of the interfacial water molecules into energetically favorable structures.

Department of Mathematical Sciences and Data Science, WPI

3:00 PM

Textiles 105

"Conquering Big Data in Volatility Inference and Risk Management"

Abstract: The field of high-frequency finance has experienced rapid evolution over the past few decades. One focal point is volatility modeling and analysis in big-data settings, which plays a major role in finance and economics. In this talk, we focus on statistical inference for large volatility matrices using high-frequency financial data, and propose a methodology to tackle this problem under various settings. We illustrate the methodology with high-frequency price data on stocks traded on the New York Stock Exchange in 2013. The theory and numerical results show that our approach performs well, pooling the strengths of regularization and estimation from a high-frequency finance perspective. Bio: Jian Zou is an assistant professor of statistics at Worcester Polytechnic Institute. He obtained his PhD in statistics at the University of Connecticut, and worked at the National Institute of Statistical Sciences, Duke University, and Indiana University-Purdue University Indianapolis before joining WPI. His research interests include high-frequency financial time series, high-dimensional statistical inference, biosurveillance, spatial statistics, and Bayesian analysis for risk management.
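As a minimal illustration of the basic quantity behind high-frequency volatility estimation, realized variance sums squared intraday log returns; the sketch below uses simulated returns (not the NYSE 2013 data from the talk, and none of the talk's matrix-regularization machinery):

```python
import numpy as np

# Realized variance from high-frequency log returns: RV = sum of squared
# intraday returns.  Toy simulated data with a known constant volatility,
# so the estimate can be compared with the true integrated variance.

rng = np.random.default_rng(0)
n = 23400                        # ~one trading day of 1-second returns
sigma = 0.2                      # annualized volatility (assumed)
dt = 1.0 / (252 * n)             # year fraction per observation
log_returns = sigma * np.sqrt(dt) * rng.standard_normal(n)

rv = np.sum(log_returns**2)      # realized variance over the day
daily_true = sigma**2 / 252      # true integrated variance for one day
print(rv, daily_true)
```

With clean data the estimator is consistent as n grows; microstructure noise in real tick data is what forces the more sophisticated estimators the talk addresses.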

"A boundary integral equation method for quasi-periodic multilayered media"

Abstract: Many modern electronic and optical devices rely on waves; examples include solar cells, antennas, radar, and lasers. These devices are mostly built on patterned layered structures, and numerical simulations play a crucial role in optimizing and characterizing them. We developed a robust and fast computational method, based on boundary integral equations, for the Helmholtz equation in periodically patterned multilayered media. The new method uses a near- and far-field decomposition to avoid the quasi-periodic Green's function. By construction, the far-field contribution can be compressed via a Schur complement at minimal computational cost. The new method solved a scattering problem on a 1000-layer structure with 300,000 unknowns to 9-digit accuracy in 2.5 minutes on a workstation. In this talk, I will present the new formulation, the fast solver, and numerical results. This is joint work with Alex Barnett at Dartmouth College.

"Adaptive Mesh Refinement, High-Order Methods, and Time-Integration: on coalescing all these components for maximum efficiency"

Abstract: In this talk, I will describe the numerical methods that we have been developing for constructing accurate and efficient solutions to nonlinear systems of equations (e.g., the shallow water and compressible Navier-Stokes equations, and more recently the incompressible Navier-Stokes equations). We rely on local high-order methods for spatial accuracy as well as adaptive mesh refinement (AMR). We propose two types of AMR: h- and p-refinement. To evolve the equations forward in time we need time-integrators, but a question remains: how do we design time-integrators that maintain the high accuracy of our spatial discretization while being efficient enough to make these methods competitive with less-accurate methods? To this end, we rely on Implicit-Explicit (IMEX) methods and have been working recently on multi-rate methods. In this talk, I will show results on both accuracy (using simple tests or self-convergence studies) and efficiency (on two of the largest computers in the world). I will also show some sample simulations from our codes to motivate the need for this new modeling framework, which we call NUMA. NUMA is a computational fluid dynamics (CFD) model that uses either continuous or discontinuous Galerkin methods in space, contains a large suite of explicit, IMEX, and fully-implicit time-integrators, and uses AMR. NUMA has a two-dimensional version that runs serially on laptops, etc. (available for general distribution), as well as a three-dimensional version (with limited distribution) that is capable of running on both CPU-only computers and GPU-CPU computers.
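The IMEX idea can be sketched on a scalar toy problem (first-order IMEX Euler; the talk's schemes are higher order and applied to PDE systems): the stiff linear term is treated implicitly, the remaining term explicitly, so only a cheap implicit solve is needed each step while the step size is not limited by the stiff scale.

```python
import math

# First-order IMEX Euler on the toy stiff ODE  u' = -lam*u + sin(t),
# u(0) = 0: the stiff linear part -lam*u is implicit, the forcing sin(t)
# explicit.  (Illustrative sketch only, not the NUMA integrators.)

lam, dt, T = 1000.0, 1e-3, 1.0
u, t = 0.0, 0.0
while t < T - 1e-12:
    # solve (1 + dt*lam) * u_new = u + dt * sin(t)
    u = (u + dt * math.sin(t)) / (1.0 + dt * lam)
    t += dt

# Exact solution of u' + lam*u = sin(t), u(0) = 0, evaluated at t = T.
exact = (lam * math.sin(T) - math.cos(T) + math.exp(-lam * T)) / (lam**2 + 1)
print(u, exact)
```

An explicit method would need dt on the order of 1/lam for stability; here dt is a thousand times larger and the scheme remains stable.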

"Efficient numerical methods for gravitational wave data science"

Abstract: The recent gravitational wave discovery marks the beginning of an exciting new era in science. To achieve the full scientific potential of these experiments, however, important and challenging computational issues must be addressed with new data analysis, mathematical and computational modeling strategies. A typical parameter estimation simulation, for example, can require days to years (theoretically estimated) to complete due to the length, complexity and high parametric dimensionality of the inference problem. These already long runtimes constitute best case scenarios, which use fast but approximate phenomenological models. To model an expected signal from first principles, a single high fidelity numerical simulation can require weeks on a supercomputer. In this talk, I will describe two strategies to overcome these computational bottlenecks. To enable fast Bayesian inference, I describe an algorithm for generating application- and data-specific reduced order quadratures capable of accelerating likelihood computations by 1 to 2 orders of magnitude. For the direct simulation of expected signals, I describe a new numerical code based on the discontinuous Galerkin method (the first of its kind in astronomy) with a task-based parallelization strategy, a promising combination that will allow for accurate and efficient simulations on petascale and exascale machines. These mathematical and computational tools are general and largely applicable in other areas of data science and scientific computing.

Abstract: Uncertainty is inevitable in computer-based simulations. To provide more reliable predictions for complex systems, quantifying the impact of uncertainty in simulations becomes critical. In this talk, we will focus on two of the main aspects of uncertainty quantification (UQ): model form epistemic UQ (backward UQ) and parametric epistemic UQ (forward UQ). Specifically, for model form UQ, physical constraints are incorporated into model correction process to enforce the important physical properties of the underlying system. Consequently, the corrected model preserves the desired properties and the estimation of model parameters can be improved significantly. For parametric UQ, we discuss the use of fuzzy sets in representing epistemic uncertainty and propose a numerical strategy to quantify the uncertainty in model output propagated through the simulation model. The numerical strategy is highly efficient since the only computational cost lies in the construction of a standard surrogate.

"Numerical solution of large-scale nonlinear algebraic Hermitian eigenproblems with a variational characterization"

Abstract: We consider a nonlinear algebraic Hermitian eigenproblem with a variational characterization, which forms a most natural generalization of linear Hermitian eigenproblems. Our main focus is on the development of efficient numerical algorithms using preconditioners for solving this type of problems. To compute extreme eigenvalues, several variants of preconditioned conjugate gradient (PCG) methods are proposed, and the global and asymptotic convergence of a basic variant of CG is established. The framework of PCG is then extended to give rise to a preconditioned locally minimal residual (PLMR) method for solving interior eigenvalues, which are intrinsically more difficult to find than the extreme ones. Stabilization of preconditioners, local convergence rates, and block extensions of PLMR are studied. Numerical experiments are provided to demonstrate the competitiveness of PCG and PLMR methods. In particular, proper combination of the two algorithms can be used to compute a large number of successive eigenvalues efficiently and reliably.
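For the linear special case, SciPy's LOBPCG routine (locally optimal block preconditioned conjugate gradient) illustrates the preconditioned-CG style of eigenvalue iteration discussed above; the sketch below uses a toy diagonal problem where the exact answer is known, and assumes scipy is available (it is not the speaker's nonlinear solver):

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

# LOBPCG on a toy linear symmetric eigenproblem.  A is diagonal with
# entries 1..n, so its smallest eigenvalues are known exactly; M is a
# preconditioner (here simply the exact inverse of A).

n = 200
d = np.arange(1.0, n + 1)
A = np.diag(d)
M = np.diag(1.0 / d)             # preconditioner

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 3))  # block of 3 starting vectors
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-10, maxiter=100)
print(np.sort(vals))             # should be close to 1, 2, 3
```

In practice A and M would be sparse operators; the block size controls how many successive eigenvalues are computed at once, echoing the abstract's point about computing many eigenvalues reliably.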

"New algorithms for iterative regularization and low rank matrix approximation with applications to large scale problems in geotomography and imaging"

Abstract: Inverse problems in geotomography and imaging applications present two distinct challenges: the need for effective algorithms and penalization strategies to resolve multi-scale model features and overcome the effects of ill-conditioning; and the need to be able to apply these algorithms to very large data sets.

To address these two challenges, we will first discuss novel iterative regularization algorithms using different penalization schemes, which are applicable to sparse wavelet-based model representations, useful for multi-scale data recovery. Then we will discuss novel randomized algorithms for rapidly constructing various low rank approximations to a matrix, without prior knowledge of its rate of singular value decay. We will then go on to discuss the application of low rank approximations to the construction of approximate but accurate regularized solutions and discuss the ongoing development of high performance numerical libraries for the new algorithms.
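One well-known randomized low-rank algorithm of this kind is a Gaussian range finder followed by an SVD of the small projected matrix (in the style of Halko, Martinsson, and Tropp; offered as a generic sketch, not necessarily the speaker's new algorithms):

```python
import numpy as np

# Randomized range finder + SVD for a rank-k approximation.
# Toy data: an exactly rank-k matrix, so the approximation should
# recover it to near machine precision.

rng = np.random.default_rng(0)
m, n, k = 500, 400, 10
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

Omega = rng.standard_normal((n, k + 5))  # Gaussian sketch, oversampling 5
Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for range(A)
B = Q.T @ A                              # small (k+5) x n matrix
U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ U_hat                            # A ~= U @ diag(s) @ Vt

err = np.linalg.norm(A - (U[:, :k] * s[:k]) @ Vt[:k]) / np.linalg.norm(A)
print(err)
```

The appeal for large-scale problems is that A is touched only through matrix products, and the expensive dense factorizations act on small sketched matrices. Adaptive variants, as the abstract notes, avoid fixing the target rank in advance.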

"Compatible meshless discretization through $\ell_2$-optimization"

Abstract: Meshless methods provide an ideal framework for scalably simulating problems involving boundaries undergoing large deformation or interfaces between multiple materials. Discretization points may be moved in a Lagrangian fashion without the need for either costly mesh topology updates or diffuse Eulerian treatment of interfaces. Of the range of meshless discretizations available, there is a distinct lack of methods that maintain a sense of compatibility while simultaneously achieving high-order accuracy. In this talk, we present a new discretization that generalizes staggered primal/dual discretizations to unstructured point sets. Using only the epsilon-neighborhood graph of discretization points and solving inexpensive optimization problems, we construct divergence and gradient operators that mimic the algebraic structure of compatible mesh-based discretizations. When applied to a model div-grad diffusion problem, we obtain high-order convergence for smooth solutions and observe monotone fluxes for problems with discontinuous material properties. We then present a new mixed meshless discretization for the Stokes equations, using a divergence-free moving least squares method for velocity and staggered moving least squares for the pressure. This approach achieves equal order convergence for both velocity and pressure, making it ideal for simulating problems in dense suspension flows dominated by lubrication forces. We finally assemble the Stokes solver, a Poisson-Boltzmann solver based on the staggered scheme, and a 6-DOF solver for colloid dynamics together into a monolithic, fully implicit scheme that we use to study electrophoretic suspensions. By using auxiliary space algebraic multigrid preconditioning to solve the resulting system, we obtain an efficient, robust, and highly accurate new tool for studying these problems in complex geometries.

Abstract: B-series have their origin in the study of Runge-Kutta methods. However, they have an important application in the analysis of general linear methods (multistage-multivalue methods). Each input and each stage is represented by a B-series and these are inter-related in terms of the coefficient matrices which define a specific method. In the case of Runge-Kutta methods (one-value methods) this analysis gives the standard conditions in terms of rooted trees and the corresponding elementary weights and tree factorials.
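The order conditions mentioned above (each elementary weight must equal the reciprocal of the corresponding tree factorial) can be checked directly from a Butcher tableau; a sketch for the classical RK4 method, listing the conditions through order 4:

```python
import numpy as np

# Rooted-tree order conditions for the classical RK4 tableau: each
# elementary weight (left) must equal 1/(tree factorial) (right).

A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0
c = A.sum(axis=1)                    # abscissae: [0, 0.5, 0.5, 1]

conditions = [
    (b.sum(),           1.0),        # order 1
    (b @ c,             1 / 2),      # order 2
    (b @ c**2,          1 / 3),      # order 3, bushy tree
    (b @ (A @ c),       1 / 6),      # order 3, tall tree
    (b @ c**3,          1 / 4),      # order 4
    (b @ (c * (A @ c)), 1 / 8),
    (b @ (A @ c**2),    1 / 12),
    (b @ (A @ A @ c),   1 / 24),
]
for weight, target in conditions:
    print(f"{weight:.6f}  vs  {target:.6f}")
```

All eight conditions hold, confirming order 4; for general linear methods the same elementary weights appear, but inter-related across multiple inputs and stages as described in the abstract.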