College of Liberal Arts and Sciences

Department of Mathematics

 

MWNA Day 2010

Abstracts

All attendees, including graduate students, are invited to present a contributed talk. However, the deadline for submitting abstracts is past, and we have filled all the available time slots.

Download the program and abstracts as a PDF file. Last updated Monday, April 19, 2010.

 

Plenary Talks

Efficient, Reliable, and Robust A Posteriori Error Estimators of Recovery Type

Zhiqiang Cai
Purdue University
Department of Mathematics
150 N. University Street
West Lafayette, IN 47907

Adaptive mesh refinement (AMR) algorithms are among the necessary tools for grand-challenge problems in scientific computing. Efficient a posteriori error estimation is the key to the success of AMR algorithms. Since Babuska's pioneering work in 1976, a posteriori error estimation has been studied extensively, and impressive progress has been made during the past three decades. In this talk, I will first describe basic principles of existing error estimators for finite element approximations to elliptic partial differential equations. I will then introduce our recently developed a posteriori error estimators of recovery type. These efficient, reliable, and accurate estimators can be applied to nonlinear problems and to problems of practical interest such as interface singularities, discontinuities in the form of shock-like fronts and of interior or boundary layers, and oscillations.

 

Discontinuous Galerkin Methods for Solving the Radiative Transfer Equation and Variational Inequalities

Weimin Han
Department of Mathematics
University of Iowa
Iowa City, IA 52242

Discontinuous Galerkin (DG) methods differ from the standard finite element methods in that functions are allowed to be discontinuous across the element boundaries. Advantages of DG methods include ease of using polynomial functions of different order in different elements, more flexibility in mesh refinements, and the locality of the discretization. DG methods have been applied to a wide range of partial differential equations.

In this talk, we discuss DG methods for solving the radiative transfer equation and for solving variational inequalities. We introduce the methods, perform consistency and stability analysis, derive error estimates, and present numerical results to show the performance of the methods. The talk is based on recent joint work.

 

Asymptotic-preserving schemes for kinetic and hyperbolic problems with stiff sources

Shi Jin
Professor and Chair
Department of Mathematics
University of Wisconsin
Madison, WI 53706, USA

We propose a general time-discrete framework to design asymptotic-preserving schemes for the initial value problem for the Boltzmann kinetic equation and related equations. Numerically solving these equations is challenging due to the nonlinear stiff collision (source) terms induced by the small mean free path or relaxation time.

We propose to penalize the nonlinear collision term by a BGK-type relaxation term, which can be solved explicitly even if discretized implicitly in time. Moreover, the BGK-type relaxation operator helps to drive the density distribution toward the local Maxwellian, which naturally yields an asymptotic-preserving scheme in the Euler limit.

The scheme so designed does not need any nonlinear iterative solver or the use of Wild Sum. It is uniformly stable in terms of the (possibly small) Knudsen number, and can capture the macroscopic fluid dynamic (Euler) limit even if the small scale determined by the Knudsen number is not numerically resolved. It is also consistent with the compressible Navier-Stokes equations if the viscosity and heat conductivity are numerically resolved. The method is applicable to many other related problems, such as hyperbolic systems with stiff relaxation, and high order parabolic equations.
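
Schematically, for the Boltzmann equation $\partial_t f + v\cdot\nabla_x f = \frac{1}{\varepsilon}Q(f)$, a first-order version of the kind of penalization described above reads (a sketch with indicative notation, not the exact scheme of the talk)

\[ \frac{f^{n+1}-f^{n}}{\Delta t} + v\cdot\nabla_x f^{n} = \frac{Q(f^{n}) - \beta\big(M[f^{n}]-f^{n}\big)}{\varepsilon} + \frac{\beta\big(M[f^{n+1}]-f^{n+1}\big)}{\varepsilon}, \]

where $M[f]$ denotes the local Maxwellian and $\beta>0$ a penalization constant. Since both $Q$ and the BGK-type term conserve mass, momentum, and energy, the moments of $f^{n+1}$, and hence $M[f^{n+1}]$, are available from the explicitly transported data, so the update can be solved for $f^{n+1}$ pointwise without a nonlinear solver.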


Contributed Talks

Step-Flow During Deposition On Surfaces With Staircase Morphologies: Discrete 2D Deposition-Diffusion Equation Analysis

David M. Ackerman and J.W. Evans
Iowa State University
Ames, IA

We consider deposition on surfaces which exhibit a staircase morphology with single-atom-high steps separated by broad, atomically flat terraces. Deposited atoms diffuse across terraces and attach to steps, causing them to advance or flow. The traditional approach solves a continuum Burton-Cabrera-Frank (BCF)-type deposition-diffusion equation for each terrace with appropriate boundary conditions (BCs). The BCs incorporate "kinetic coefficients" reflecting the ease of attachment to steps and also possible detachment from steps.

We instead analyze discrete two-dimensional deposition-diffusion equations describing step flow for kinked steps at the atomic scale, with BCs capturing the physical details of attachment and detachment. We focus on cases of facile attachment either just to ascending steps or to both ascending and descending steps. Our analysis indicates shortcomings of the traditional BCF boundary conditions, in which the kinetic coefficient is infinite; instead we obtain finite values depending on kink separation, terrace width, etc.
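
For orientation, a one-dimensional continuum BCF-type model of the kind referred to above, on a terrace $0<x<L$ with deposition flux $F$, terrace diffusion coefficient $D$, and linear attachment kinetics at the bounding steps, reads (a schematic with indicative signs and notation, not the discrete 2D model analyzed in this talk)

\[ D\,\frac{d^{2}c}{dx^{2}} + F = 0 \ \text{ on } (0,L), \qquad D\,c'(0) = k_{+}\big(c(0)-c_{\rm eq}\big), \qquad -D\,c'(L) = k_{-}\big(c(L)-c_{\rm eq}\big), \]

where $k_{\pm}$ are the kinetic coefficients; the limit $k_{\pm}\to\infty$ corresponds to the facile-attachment boundary conditions whose validity is assessed above.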

 

Using Groebner Basis Techniques to Design Runge-Kutta Formulas

Roger Alexander
Iowa State University
Ames, IA

This talk describes the application of Groebner basis techniques to solve the polynomial equations that determine coefficients for certain Runge-Kutta formulas.

The objective of this work is to design diagonally implicit Runge-Kutta formula pairs, with an integrator of order 4 and an error estimator of order 5, both using the same six stages, such that the integrator is L-stable. These requirements lead to a system of 13 polynomial equations for 14 coefficients.

It is desirable to obtain solutions in explicit parametric form in order to optimize truncation errors and other measures of efficiency. To do this we use the technique of elimination term-ordering in the ring of polynomials in 14 variables. This technique makes it possible to eliminate groups of variables from the equations until all variables can be expressed as algebraic functions of a single one. The computations are done with the computer algebra system Singular, and can be completed in a few seconds.
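
As a toy illustration of elimination orderings (not the actual 13-equation system of this work), the following Python/SymPy snippet computes a lexicographic Groebner basis of a small symmetric system; the lex basis is triangular, so the last polynomial involves only the last variable and the others can be obtained by back-substitution.

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    # Toy system: elementary symmetric polynomials fixed to 3, 3, 1.
    polys = [x + y + z - 3, x*y + y*z + z*x - 3, x*y*z - 1]

    # 'lex' is an elimination ordering: the resulting basis is triangular,
    # with the last polynomial involving only z.
    G = sp.groebner(polys, x, y, z, order='lex')
    print(G)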

 

Matrix-Free Iterative Methods for Cone Complementarity Problems appearing in Differential Variational Inequalities

Mihai Anitescu
Argonne National Laboratory
Mathematics and Computer Science Division
Argonne, IL 60439

Computational models involving differential variational inequalities (DVIs) occur in all applications whose dynamics is coupled with inequalities on the state variables. Nonetheless, the changing structure of the system due to the time variation in the active sets of the inequalities has made successful simulation or prediction for large-scale instances of such systems a rare occurrence. Perhaps nowhere is this more evident than in the study of dense granular materials: a system of rigid particles at high volume fraction that move subject to contact, collision and frictional constraints. Such materials are essential for pebble bed nuclear reactors, fluidized bed reactors used in coal gasification and refinery processes, and breeding blankets in fusion reactors.

The most popular computational approach for DVI involves smoothing of the inequality constraints coupled with an explicit numerical integration approach, which needs to take very small time steps. An alternative approach, co-developed by the principal investigator, uses a time-stepping scheme coupled with an exact treatment of the inequality constraints, which has resulted in the ability to take much larger time steps and, if the convex subproblem is solved efficiently, to use much less computing time compared to smoothing approaches.

This talk will present computational issues in resolving the cone-constrained optimization problems that appear in time-stepping methods. We describe an extension of Gauss-Seidel-type and Gauss-Jacobi-type iterative algorithms for symmetric convex linear complementarity problems to the cone complementarity case. We discuss the parallelization on graphics processing units (GPUs) of the iterations and of the collision detection step that precedes them. We present numerical results of high-density granular flow simulations, and we demonstrate the good agreement between the simulation output from time-stepping methods and experimental measurements.
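
As a minimal sketch of the base iteration being extended, the following Python code applies projected Gauss-Seidel to a small symmetric linear complementarity problem; the cone complementarity problems discussed in the talk replace the componentwise projection onto the nonnegative orthant by projections onto friction cones, and the data here are purely illustrative.

    # Projected Gauss-Seidel for a symmetric LCP: find z >= 0 with
    # w = M z + q >= 0 and z^T w = 0.
    import numpy as np

    def projected_gauss_seidel(M, q, iters=200):
        n = len(q)
        z = np.zeros(n)
        for _ in range(iters):
            for i in range(n):
                r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual without the i-th term
                z[i] = max(0.0, -r / M[i, i])          # solve row i, project onto R_+
        return z

    # Tiny example with a symmetric positive definite M (illustrative data).
    M = np.array([[4.0, 1.0], [1.0, 3.0]])
    q = np.array([-1.0, -2.0])
    z = projected_gauss_seidel(M, q)
    print(z, M @ z + q)   # z >= 0, M z + q >= 0, and the two are complementary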

 

A Spectral Method for an Elliptic Eigenvalue Problem

Kendall Atkinson
University of Iowa
Iowa City, IA

Let $\Omega$ be an open, simply connected, and bounded region in $\R^d$, $d > 1$, and assume it has a smooth boundary. Consider solving the eigenvalue problem $Lu=\lambda u$ for an elliptic partial differential operator $L$ over $\Omega$ with zero values for either Dirichlet or Neumann boundary conditions. We propose, analyze, and illustrate a 'spectral method' for solving numerically such an eigenvalue problem. This is an extension of spectral methods developed in earlier papers for solving elliptic problems with Dirichlet and Neumann boundary conditions. The approximations are polynomials in the spatial variables.

 

Asymptotically Exact A Posteriori Error Estimates for a One-dimensional Linear Hyperbolic Problem

Mahboub Baccouch
Department of Mathematics
University of Nebraska at Omaha
Omaha, NE 68182

In this presentation we investigate the global convergence of the discontinuous Galerkin (DG) method applied to one-dimensional hyperbolic problems. We show that the leading term of the discretization error of the $p$-degree discontinuous finite element solution is $O(h^{p+1})$ and is proportional to a $(p+1)$-degree Radau polynomial. We show that the $p$-degree discontinuous finite element solution is $O(h^{p+2})$ superconvergent at the Radau points. We apply the superconvergence results to prove that the {\it a posteriori} DG error estimates for a scalar linear hyperbolic problem in one dimension converge to the true spatial errors at an $O(h^{p+5/4})$ rate, while computational results show higher $O(h^{p+3})$ convergence rates. We further prove that the effectivity indices converge to unity at an $O(h^{1/2})$ rate, while numerically they exhibit $O(h^{2})$ rates. In our analysis we establish these convergence results under mesh refinement, at a fixed time $t$, and with exact temporal integration.

 

Mesh Optimization based on Optimal Delaunay Triangulations

Long Chen
University of California
Irvine, CA

We shall derive several mesh optimization schemes based on Optimal Delaunay Triangulations (ODTs). Desirable meshes are obtained by minimizing the interpolation error in the weighted $L^1$ norm. Our schemes fall into two classes: local and global. Among the local schemes, several old and new schemes, known as mesh smoothing, are derived from our approach. For the global schemes, a graph Laplacian is used as a preconditioner in a modified Newton method to speed up the local smoothing approach. Our work lays down a mathematical foundation for many mesh smoothing schemes used in practice and leads to a new global mesh optimization scheme. Numerical experiments indicate that our methods produce well-shaped triangulations efficiently.

 

Extensions of diffeomorphisms of small distortion in Euclidean and Hilbert space and their applications to rigid data alignment and medical imaging

Steven Damelin
Department of Mathematical Sciences
Georgia Southern University
Statesboro, GA
and
School of Computational and Applied Mathematics
University of Witwatersrand, South Africa

The subject of this talk is motivated by a fundamental result of Johnson and Lindenstrauss which says roughly that any $k\geq 1$ point subset of Euclidean space may be embedded into $O(\log k/\varepsilon^2)$ dimensions without distorting the distances between any pair of points by more than a factor of $(1\pm \varepsilon)$, for any $0<\varepsilon<1$.

Besides being of mathematical interest, the result above has important uses in several applied computer vision subjects such as bi-Lipschitz embeddings of graphs into normed spaces, compressed sensing, manifold learning and dimensionality reduction. For example, data stored and manipulated on computers, including text and images, as is well known, may be represented as points in a high-dimensional space. However, the essential algorithms for working with such data tend to become bogged down very quickly as dimension increases. This phenomenon is often referred to as the "curse of dimensionality." It is therefore desirable to reduce the dimensionality of the data in a way that preserves its relevant structure.

Throughout this talk, we work in $\mathbb R^D$ where $D\geq 1$ is fixed. We are interested in the following converse question, which up until now has been wide open. Let $\varepsilon>0$ and suppose we are given two sets of $k$ distinct labeled points, $E:=\{y_1,\ldots,y_k\}$ and $E':=\{z_1,\ldots,z_k\}$, in $\mathbb R^{D}$. Assume that $|y_i|=1$ for all $i$ and that the points are distorted by an a priori fixed amount, i.e., for all $i,j$,

\[ (1-\varepsilon)\leq \frac{|y_i-y_j|}{|z_i-z_j|}\leq (1+\varepsilon). \]

Do there exist rigid motions $\Phi$ which approximately or exactly align the two sets of labeled points? We will show how, in recent work with Charles Fefferman, we solve this problem completely.
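
For background on rigid alignment, the following Python sketch recalls the classical least-squares answer for a fixed labeling (orthogonal Procrustes/Kabsch, via the SVD); it is included only for orientation and is not the construction of this talk.

    # Classical least-squares rigid alignment of two labeled point sets.
    import numpy as np

    def rigid_align(Y, Z):
        """Return rotation R and translation t minimizing sum ||R z_i + t - y_i||^2."""
        yc, zc = Y.mean(axis=0), Z.mean(axis=0)
        H = (Z - zc).T @ (Y - yc)                        # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.eye(H.shape[0])
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
        R = Vt.T @ D @ U.T
        t = yc - R @ zc
        return R, t

    # Illustrative data: Z is a rotated and shifted copy of Y.
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((5, 3))
    theta = 0.3
    R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    Z = (Y - 1.0) @ R0.T
    R, t = rigid_align(Y, Z)
    print(np.max(np.abs((Z @ R.T + t) - Y)))   # ~ 0 up to rounding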

We will also discuss how our results may be used in numerous applications in medical and hyperspectral imaging.

This is joint work with Charlie Fefferman (Princeton) and Willard Miller (UMN).

 

The Singular Value Decomposition for Orthogonally Decomposable Tensors

Ken Driessel
Iowa State University
Ames, IA

The singular value decomposition (SVD) has been used extensively in engineering and statistical applications. There is a need to generalize this decomposition to higher order tensors.

Let $\R^{n}$ denote the vector space of real $n$-tuples and let $E^{n}$ denote $\R^{n}$ with the standard inner product: $\<x,y\> := \sum x_{i} y_{i}$. For positive integers $l$, $m$ and $n$, let $E$ denote the tensor product of $E^{l}$, $E^{m}$ and $E^{n}$, that is, $E := E^{l} \otimes E^{m} \otimes E^{n}$. Recall that $E$ has a naturally defined inner product: $\< u \otimes v \otimes w, x \otimes y \otimes z \> := \< u,x\>\<v,y\>\<w,z\>$. Elements of $E$ are {\bf three-way tensors} or {\bf tensors with order three}.

Let $X$ be a tensor in $E$. Then $X$ is {\bf decomposable} if there exist vectors $x \in E^{l}$, $y \in E^{m}$ and $z \in E^{n}$, such that $X = x \otimes y \otimes z$. Let $U := u \otimes v \otimes w$ and $X := x \otimes y \otimes z$ be decomposable tensors. Then $U$ and $X$ are {\bf completely orthogonal} if $\<u,x\> =0$, $\<v,y\> =0$, and $\<w,z\> =0$.

Let $A$ be a tensor in $E$ and let $r$ be a positive integer. Then $A$ has a {\bf rank $r$ orthogonal decomposition} if $A$ is the sum of $r$ decomposable tensors that are pairwise completely orthogonal. A tensor is {\bf orthogonally decomposable} if it has a rank $r$ orthogonal decomposition for some $r$.

There are (many) tensors that are not orthogonally decomposable. Nevertheless, this class of tensors is an important one. For example, it is used for independent component analysis.

Let $U = u \otimes v \otimes w$ be a decomposable tensor. Then $U$ is {\bf completely normal} if $\|u\| =1$, $\|v\| =1$ and $\|w\| =1$. Let $U_{1}, \ldots{}, U_{r}$ be tensors in $E$. Then the set $\{ U_{1}, \ldots{}, U_{r}\}$ is {\bf completely orthonormal} if each $U_{i}$ is completely normal and, for all $i,j =1,\ldots{},r$, if $i \neq j$ then $U_{i}$ and $U_{j}$ are completely orthogonal.

Let $S^{k} := \{u \in E^{k} : \|u\| =1\}$ denote the unit sphere in $E^{k}$. The following result generalizes the SVD.

{\bf Theorem 1.} {\em Let $A$ be a tensor in $E$. If $A$ has a rank $r$ orthogonal decomposition, then there exist nonnegative scalars $\sigma_{1},\sigma_{2},\ldots{},\sigma_{r}$ and a completely orthonormal set $\{U_{1}, U_{2}, \ldots{}, U_{r}\}$ of tensors in $E$ such that

\[ A = \sigma_{1} U_{1} + \sigma_{2} U_{2} + \cdots + \sigma_{r} U_{r}. \]

The scalars $\sigma_{i}$ are uniquely determined by $A$. In fact, these scalars are the critical values of the function $f(u,v,w):= \<u \otimes v \otimes w,A\>$ defined on the set $S^{l} \times S^{m} \times S^{n}$.}

For simplicity, the results given above were stated for three-way tensors. However, these results easily generalize to higher order tensors.
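
A small numerical illustration of Theorem 1 (with illustrative data, not from the talk): build a rank-2 orthogonally decomposable three-way tensor in NumPy and read off the scalars $\sigma_i$ by evaluating $f$ at the decomposition's own unit vectors.

    import numpy as np

    def outer3(u, v, w):
        # The decomposable tensor u (x) v (x) w as a 3-way array.
        return np.einsum('i,j,k->ijk', u, v, w)

    # Completely orthonormal decomposable tensors U1, U2.
    u1, u2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
    v1, v2 = np.array([0, 1.0, 0]), np.array([0, 0, 1.0])
    w1, w2 = np.array([0, 0, 1.0]), np.array([1.0, 0, 0])

    sigma = [3.0, 1.5]
    A = sigma[0] * outer3(u1, v1, w1) + sigma[1] * outer3(u2, v2, w2)

    # f(u, v, w) = <u (x) v (x) w, A>; at the decomposition vectors it returns sigma_i.
    f = lambda u, v, w: np.einsum('ijk,i,j,k->', A, u, v, w)
    print(f(u1, v1, w1), f(u2, v2, w2))   # 3.0 1.5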

 

A Posteriori Error Estimates of Discontinuous Galerkin Methods for Obstacle Problems

Joe Eichholz
University of Iowa
Iowa City, IA

We present a posteriori error analysis of discontinuous Galerkin methods for solving the obstacle problem, which is a representative elliptic variational inequality of the first kind. Analysis indicates that the given residual type error estimators are reliable and efficient. Some numerical examples are reported.

 

An all-speed asymptotic-preserving scheme for the low Mach number limit of the isentropic Euler and Navier-Stokes equations

Jeffrey Haack
University of Wisconsin
Madison, WI

In this talk, I will present an asymptotic preserving (AP) numerical scheme for the low Mach number limit of the compressible isentropic Euler and Navier-Stokes equations that is uniformly stable and accurate for all Mach numbers. Numerical computation in this limit is challenging due to the expensive time and space resolution requirements needed to resolve the stiff acoustic waves in the system, which are not important in the limit. We propose splitting the system into two parts, one of which is a linear system that contains the stiff acoustic dynamics. This implicit system automatically turns into an incompressible projection step when the Mach number is small, capturing the correct incompressible dynamics. Numerical results in one and two dimensions supporting these conclusions will be presented.

 

Kinetic Flux Vector Splitting Schemes for Quantum Hydrodynamics

Jingwei Hu and Shi Jin
University of Wisconsin
Madison, WI, 53706

The KFVS scheme was first introduced by Pullin and Deshpande in the 1980s for the classical Euler equations. A direct generalization to the quantum Euler equations was given by Yang, Hsieh and Shi in 2007; it requires the integration of the quantum Maxwellian (Bose-Einstein and Fermi-Dirac distributions). To further obtain the macroscopic quantities (e.g., temperature and fugacity), a nonlinear 2-by-2 system needs to be inverted at every spatial point and every time step, which is computationally intensive and sometimes even impossible because of the singularity of the function. Here a simpler KFVS scheme is proposed for quantum hydrodynamics with a cost comparable to that of a classical KFVS scheme. The proposed schemes are tested on 1D shock tube problems for Bose and Fermi gases in both the classical and the nearly degenerate regimes.

 

Anisotropic Mesh Adaptation Method Based Upon a posteriori Error Estimates

Lennard Kamenski
University of Kansas
Lawrence, KS

An anisotropic mesh adaptation strategy for the finite element solution of elliptic differential equations is considered. The method generates anisotropic adaptive meshes as quasi-uniform ones in some metric space. The associated metric tensor is computed by means of a posteriori hierarchical error estimates. In this study, a global hierarchical error estimate is employed to obtain reliable directional information of the solution. Numerical examples are presented for the mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

 

Traveling Wave Solutions to Coupled Spatially Discrete Reaction Diffusion Equations

Charles Lamb
University of Kansas
Lawrence, KS

We will discuss the existence of traveling wave solutions to a system of implicitly defined mixed type functional differential equations. Traveling wave solutions to the one-dimensional discrete Nagumo equation are known to exist. By linearizing about known solutions, the implicit function theorem will be invoked to give the existence of a solution to the weakly coupled system. These solutions have been shown by others to be stable for small delay values. The associated linear theory will be discussed as necessary.

 

A Two-Scale Multiple Scattering Problem

Peijun Li
Purdue University
West Lafayette, IN

We consider the scattering of a time-harmonic plane wave incident on a heterogeneous medium consisting of isotropic point (small-scale) scatterers and an extended (wavelength-comparable) obstacle scatterer in three-dimensional space. To compute the scattered field from the interaction between the incident wave and the point scatterers only, the Foldy--Lax method provides an effective approach, while boundary integral equation methods play an important role for the scattering problem involving solely an extended obstacle scatterer. When both the point scatterers and the extended obstacle are present, we face a challenging two-scale multiple scattering problem. We developed a generalized Foldy--Lax method to fully account for the multiple scattering in the heterogeneous medium. The method can be viewed from two different formulations: a series solution and an integral equation. The series solution formulation is shown to be an efficient iterative scheme for the integral equation formulation. The convergence of the scattered fields and of the far-field patterns from the series solution formulation is characterized in terms of scattering coefficients. Numerical experiments will be presented to show the agreement and effectiveness of the two proposed approaches.

This is joint work with Kai Huang at Florida International University.
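
For orientation, the classical Foldy--Lax system for $m$ isotropic point scatterers located at $x_1,\dots,x_m$ with scattering coefficients $\sigma_j$ reads (schematic notation, not the generalized formulation of the talk)

\[ \psi_i = u^{\rm inc}(x_i) + \sum_{j\neq i} \sigma_j\, G(x_i,x_j)\,\psi_j, \qquad u(x) = u^{\rm inc}(x) + \sum_{j=1}^{m} \sigma_j\, G(x,x_j)\,\psi_j, \]

where $G(x,y)=e^{\mathrm{i}\kappa |x-y|}/(4\pi|x-y|)$ is the free-space Green's function of the Helmholtz equation; the generalized formulation additionally couples such a system with a boundary integral equation on the extended obstacle.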

 

An Anisotropic Mesh Adaptation Method for the Finite Element Solution of Heterogeneous Anisotropic Diffusion Problems

Xianping Li
University of Kansas
Lawrence, KS

Heterogeneous anisotropic diffusion problems arise in various areas of science and engineering, including plasma physics, petroleum engineering, and image processing. Standard numerical methods can produce spurious oscillations when they are used to solve those problems. A common strategy to avoid this difficulty is to employ a scheme satisfying the discrete counterpart of the maximum principle (DMP). Numerical schemes of this type can be obtained by utilizing a special mesh or a special discretization. For isotropic problems, the well-known non-obtuse angle condition on the mesh guarantees the validity of the DMP for the linear finite element solution. Much research has been done for anisotropic problems, where existing approaches can improve but cannot guarantee the validity of the DMP.

In this talk we are concerned with the linear finite element solution of heterogeneous anisotropic diffusion problems. A generalization of the non-obtuse angle condition is developed to guarantee the satisfaction of the DMP for anisotropic diffusion problems. Several variants of the new condition are obtained. Based on one of them, two metric tensors for use in anisotropic mesh generation are developed to account for DMP satisfaction and for the combination of DMP satisfaction and mesh adaptivity. Numerical examples are given to demonstrate the features of the method.

 

Hybrid Runge-Kutta and BFGS Algorithms

Darin Mohr
University of Iowa
Iowa City, IA

Given a starting point, finding a local minimizer in unconstrained nonlinear optimization and finding a fixed point of a gradient system of ordinary differential equations (ODEs) are two closely related problems. Quasi-Newton algorithms are widely used in unconstrained nonlinear optimization, while Runge-Kutta methods are widely used for the numerical integration of ODEs. In this work we consider hybrid algorithms combining low order implicit Runge-Kutta methods for gradient systems with quasi-Newton type updates of the Jacobian matrix, such as the BFGS and SR1 updates. We have extended these ideas to the limited memory BFGS algorithm and have examined the performance of the hybrid algorithms on a variety of problems.
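
For reference, the standard BFGS update of the Hessian approximation $B_k$ reads

\[ B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k}, \qquad s_k = x_{k+1}-x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k). \]

In the hybrid algorithms, updates of this type stand in for the Jacobian information needed in the implicit Runge-Kutta stage equations for the gradient system $\dot{x} = -\nabla f(x)$ (a schematic connection; see the talk for the precise formulation).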

 

Multi-level Algorithms for Infinite-dimensional Integration on $\R^N$

Ben Niu
Illinois Institute of Technology
Chicago, IL

Pricing a path-dependent financial derivative, such as an Asian option, requires the computation of $E(g(B))$, the expectation of a payoff function $g$ that depends on a Brownian motion $B$. Employing a standard series expansion of $B$, the latter problem is equivalent to the computation of the expectation of a function of the corresponding i.i.d. sequence of random coefficients. This motivates the construction and the analysis of algorithms for numerical integration with respect to a product probability measure on the sequence space $\R^{N}$. The class of integrands studied here is the unit ball in a reproducing kernel Hilbert space obtained by superposition of weighted tensor product spaces of functions of finitely many variables. Combining tractability results for high-dimensional integration with the multi-level technique, we obtain new algorithms for infinite-dimensional integration. These deterministic multi-level algorithms use variable subspace sampling, and they are superior to any deterministic algorithm based on fixed subspace sampling with respect to the respective worst case error. Numerical experiments will be presented.

This is joint work with Fred J. Hickernell, Thomas Müller-Gronbach and Klaus Ritter.
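
Schematically, writing $g_{d}$ for the integrand restricted to its first $d$ random coefficients, the multi-level construction replaces a single quadrature by a telescoping sum (a generic sketch of the technique, not the precise algorithm of the talk)

\[ E(g(B)) \approx Q_0\big[g_{d_0}\big] + \sum_{\ell=1}^{L} Q_\ell\big[g_{d_\ell}-g_{d_{\ell-1}}\big], \]

where the quadrature rules $Q_\ell$ spend many points on the cheap low-dimensional terms and only few points on the expensive high-dimensional differences; this is the variable subspace sampling referred to above.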

 

Quadrature Method of Moments for Gas-Particle Flows and their Extension to the Dense Limit

Alberto Passalacqua and Rodney O. Fox
Department of Chemical and Biological Engineering
Iowa State University, Ames

High-order quadrature-based moment methods represent an advanced and practical simulation tool for gas-particle flows, due to their capability of predicting non-equilibrium phenomena typically captured by Lagrangian models, but not correctly described by the Euler-Euler hydrodynamic approach. In addition, moment methods have a lower computational cost and are not affected by statistical noise of Lagrangian simulations.

The transport equations for the moments are hyperbolic in nature if the particle-phase volume fraction is lower than the particle packing limit, which represents the maximum particle concentration physically achievable due to the finite size of the particles. According to the Boltzmann-Enskog kinetic equation, the collision term gives rise to a collisional contribution to the moment spatial fluxes, which becomes responsible for enforcing this limit. When the phase volume fraction reaches its maximum value, however, the hyperbolicity of the set of equations is lost, causing the phase to become incompressible. A direct solution of the moment transport equations in such a situation leads to severe numerical instabilities or extremely restrictive conditions on the time integration.

In this talk, the extension of quadrature-based moment methods to the case of dense flows is discussed, defining the collisional contribution to the moment spatial fluxes, and an approach based on the solution of a Poisson equation in the dense limit is presented.

 

An Eulerian Surface Hopping Method for the Schrödinger Equation With Conical Crossings

Peng Qi
University of Wisconsin
Madison, WI

In nucleonic propagation through conical crossings of electronic energy levels, the codimension-two conical crossings are the simplest energy level crossings, which affect the Born-Oppenheimer approximation in the zeroth order term. We developed a surface hopping method for the Schrödinger equation with conical crossings in the Eulerian formulation. The approach is based on the semiclassical approximation governed by the Liouville equations, which are valid away from the conical crossing manifold. At the crossing manifold, electrons hop to another energy level with the probability determined by the Landau-Zener formula. This hopping mechanism is formulated as an interface condition, which is then built into the numerical flux.

 

Approximate Fourier Solutions of Nonlinear Stochastic Heat Equations With Q-Regular Additive Noise

Henri Schurz
Southern Illinois University
Carbondale, IL

The approximation of strong solutions of stochastic heat equations with nonlinear and random space-time perturbations is discussed. The existence of unique solutions with homogeneous boundary conditions and square integrable initial conditions is shown. For this purpose, we exploit the techniques of Fourier series solutions, Lyapunov functions, and monotone operators. The related Fourier coefficients are computed by nonstandard numerical methods to control the qualitative behavior of the associated energy functional in a consistent manner.

 

Backward Error Analysis for Gauss-Lobatto SPARK Methods Applied to Constrained Hamiltonian Systems

Scott Small
University of Iowa
Iowa City, IA

We consider the application of the family of Gauss-Lobatto specialized partitioned additive Runge-Kutta (SPARK) methods to Hamiltonian and Lagrangian systems with constraints. In particular we present a backward error analysis for these methods applied to Hamiltonian systems with holonomic constraints. Taking advantage of the preservation of the symplectic structure by these methods, we use the theory of generating functions to construct a modified Hamiltonian system. We also present numerical evidence suggesting the preservation of a modified Hamiltonian for systems with nonholonomic constraints.

 

Fast Algorithms for Kernel-based Scattered Data Approximation

Guohui Song
Illinois Institute of Technology
Chicago, IL

Scattered data approximation deals with the problem of reconstructing an unknown function from given scattered data. It has applications in a variety of fields such as surface construction, the numerical solution of partial differential equations, statistical learning, and parameter estimation. A popular approach in scattered data approximation is the kernel-based regularization method, which consists of calculating the inverse of a matrix generated by a kernel function and the given data. However, the computational cost of inverting this matrix is a major concern, especially for high-dimensional data. We introduce fast algorithms for calculating the inverse by approximating the kernel matrix with a related multilevel circulant matrix, so that the fast Fourier transform can be applied to reduce the computational cost. A superfast algorithm is also provided for high-dimensional tensor-product kernels.

This is joint work with Dr. Yuesheng Xu at Syracuse University.
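
As a toy illustration of why circulant structure reduces the cost (this is the generic FFT diagonalization of a circulant matrix, not the multilevel circulant approximation constructed in the talk): for a stationary kernel sampled on a uniform periodic grid, the kernel matrix is circulant, and the regularized system can be solved in $O(N\log N)$ operations.

    # Solve (K + lambda*I) c = y for a circulant kernel matrix K via the FFT,
    # since circulant matrices are diagonalized by the discrete Fourier transform.
    # Kernel, grid, and lambda are illustrative choices.
    import numpy as np

    N, lam = 256, 1e-2
    x = np.arange(N) / N                          # uniform periodic grid on [0,1)
    d = np.minimum(x, 1.0 - x)                    # periodic distance to x[0]
    col = np.exp(-(d / 0.05) ** 2)                # first column of the kernel matrix
    y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).standard_normal(N)

    eig = np.fft.fft(col)                         # eigenvalues of the circulant matrix
    c = np.real(np.fft.ifft(np.fft.fft(y) / (eig + lam)))

    # Check against the dense solve, feasible only for small N.
    K = np.array([[col[(i - j) % N] for j in range(N)] for i in range(N)])
    print(np.max(np.abs((K + lam * np.eye(N)) @ c - y)))   # ~ machine precision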

 

A Variation of the Direct Discontinuous Galerkin Method for Diffusion Problem with Symmetric/Antisymmetric Structure

Chad Vidden
Iowa State University
Ames, IA

Based on a novel numerical flux involving jumps of even order derivatives of the numerical solution, Liu and Yan derived in [1] a direct discontinuous Galerkin (DDG) method for diffusion problems. In [2] they showed that higher order ($k \geq 4$) derivatives in the numerical flux can be avoided if some interface corrections are included in the weak formulation of the DDG method; still, the jump of second order derivatives is shown to be important for the method to be efficient with a fixed penalty parameter for all $p^k$ elements. The refined DDG method with such numerical fluxes has the optimal $(k+1)$th order of accuracy. In this talk, I will present further studies of the method. We include more interface terms in the scheme formulation and obtain a new scheme with either symmetric or antisymmetric structure, which helps us to further study the $L^2$ error analysis of the DG solution.

[1] H. Liu and J. Yan, The Direct Discontinuous Galerkin (DDG) Method for Diffusion Problems, SIAM J. Numer. Anal. 47(1) (2009), 675--698.

[2] H. Liu and J. Yan, The Direct Discontinuous Galerkin (DDG) Method for Diffusion with Interface Corrections, Communications in Computational Physics. To appear.
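
For orientation, the simplest member of the DDG flux family of [1] for the heat equation $u_t = u_{xx}$ has the form (recalled schematically; see [1,2] for the admissibility conditions on the coefficients)

\[ \widehat{u_x} = \beta_0 \frac{[u]}{\Delta x} + \{u_x\} + \beta_1\, \Delta x\, [u_{xx}], \]

where $[\cdot]$ and $\{\cdot\}$ denote the jump and the average of the traces at a cell interface and $\beta_0$, $\beta_1$ are flux coefficients.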

 

A Level Set Approach for Dilute Fluid-Particle Flows

Zhongming Wang
University of California
San Diego , CA

Gas-particle and other dispersed-phase flows can be described by a kinetic equation containing terms for spatial transport, acceleration, and particle processes (such as evaporation or collisions). However, computing the dispersed velocity is a challenging task due to the large number of independent variables. A level set approach for computing dilute non-collisional fluid-particle flows is presented. We consider sprays governed by the Williams kinetic equation subject to initial distributions away from equilibrium of the form $\sum_{i=1}^N \rho_i(x)\delta(\xi-u_i(x))$. The dispersed velocity is described as the zero level set of a smooth function, which satisfies a transport equation. This, together with the density weight, recovers the particle distribution at any time. Moments of any desired order can be evaluated by a quadrature formula involving the level set function and the density weight. It is shown that the method can successfully handle highly non-equilibrium flows (e.g., impinging particle jets, jet crossing, particle rebound off walls, finite Stokes number flows).

This is joint work with Hailiang Liu and Rodney Fox.

 

A Level Set Method for the Semiclassical Limit of the Schrödinger Equation With Discontinuous Potentials

Dongming Wei
University of Wisconsin
Madison, WI

We propose a level set method for the semiclassical limit of the Schrödinger equation with discontinuous potentials. The discontinuities in the potential correspond to potential barriers, at which incoming waves can be partially transmitted and reflected. Previously, such a problem was handled by Jin and Wen using the Liouville equation--which arises as the semiclassical limit of the Schrödinger equation--with an interface condition to account for partial transmissions and reflections. However, the initial data are Dirac delta functions, which are difficult to approximate numerically with high accuracy. In this talk, we extend the level set method introduced by S. Jin, H. Liu, S. Osher and R. Tsai to this problem. Instead of directly discretizing the delta functions, our proposed method decomposes the initial data into finite sums of smooth functions that remain smooth in finite time along the phase flow, and hence can be solved much more easily using conventional high order discretization schemes.

 

A new superfast and stable Toeplitz solver

Jianlin Xia
Purdue University
West Lafayette, IN

Toeplitz matrices arise in many applications such as PDE solution, signal and image processing, and time series analysis. In this talk, we will discuss a new stable and superfast solver for Toeplitz linear systems. Using the displacement structure, our method solves the Cauchy-like system converted from the Toeplitz system in nearly O(N) flops with a relatively small constant. Fast strong rank-revealing LU factorizations are used in a hierarchical scheme to work on the Cauchy-like matrix. Two layers of structured representations are used: an outer HSS structure, and an inner Cauchy-like structure for each dense HSS generator. The method is significantly faster than some existing superfast Toeplitz solvers, such as a previous structured solver by the authors [SIMAX, 2007]. Numerical tests on various highly ill-conditioned examples demonstrate both the efficiency and the stability of the method.

This is joint work with Ming Gu at UC Berkeley.

 

Approximating Stable and Unstable Manifolds of 2D Stochastic Dynamical Systems via Reproducing Kernels

Qi Ye
Illinois Institute of Technology
Chicago, IL

Many people are interested in finding the manifolds that reduce dynamical systems. However, it is still a difficult task to compute the stable and unstable manifolds of a stochastic dynamical system. In this talk, we will introduce a new method to approximate the local stable and unstable manifolds of a two-dimensional stochastic dynamical system, i.e.,

\[
\begin{cases}
dU = b(U,V)\,dt + \sigma(U,V)\,dW_t^{(1)},\\
dV = r(U,V)\,dt + \delta(U,V)\,dW_t^{(2)},
\end{cases}
\]

where $W_t^{(1)}$ and $W_t^{(2)}$ are independent standard Brownian motions. The key point of this method is to apply techniques from the support vector machine community via reproducing kernels. Moreover, all the results can be extended to higher-dimensional spaces.

 

A Fast Accurate Boundary Integral Method for the Laplace Equation

Wenjun Ying
Michigan Technological University
Houghton, MI

Boundary value and interface problems for the Laplace equation are often solved by boundary integral methods due to the reduction of dimensionality and its flexibility in domain geometry. However, there are two well-known computational issues with the boundary integral method: (a) evaluation of boundary integrals at points close to domain boundaries usually has low order accuracy; (b) the method typically yields dense coefficient matrices in the resulting discrete systems, which makes the matrix vector multiplication very expensive when the size of the system is very large. In this talk, I will describe a correction method for high order accurate evaluation of the boundary integrals and a fast multipole method for the dense matrix vector multiplication. Numerical results demonstrating the efficiency and accuracy of the method will be presented.

 

On-Line Strategies for Control Architectures using Differential Variational Inequality Techniques

Victor M. Zavala and Mihai Anitescu
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne IL, 60439

The main objective of a control architecture is to drive and keep a system at operating points of maximum economic performance as disturbances and prices change in time. State-of-the-art control architectures use detailed physical models in order to forecast the effect of manipulated variables on the system performance. This enables the handling of complex nonlinear behaviors, multivariable interactions, and operational constraints in a systematic manner. Control architectures include a state estimation or data assimilation agent that reconstructs the state of the system based on limited measurements, an economic optimization agent that determines the economically optimal operating point based on current price information, and a control or tracking agent that drives and keeps the system at the optimal points. All these agents rely on the repeated solution of large-scale nonlinear programming (NLP) problems with hundreds of thousands to millions of variables and equality and inequality constraints. A challenge that arises in the implementation of advanced control architectures is that they require high-frequency solutions in order to obtain adequate closed-loop performance and stability. Because of this, off-the-shelf NLP solvers are limited to systems with slow dynamics or can only handle simplified models. In this work, we present NLP strategies targeted to control architectures that enable the use of detailed models at high frequencies.

A key property arising in control architectures is the fact that the structure of the NLPs remains fixed and depends on time-dependent data. Consequently, solutions obtained at subsequent time steps form a continuous and non-smooth manifold. Continuity arises under local invertibility of the first-order Karush-Kuhn-Tucker (KKT) conditions. This property is also known as strong regularity and generalizes the implicit function theorem for nonlinear equations. Non-smoothness arises due to changing active sets. We establish results for the problem of tracking this type of manifold by casting the first-order KKT conditions as a parametric generalized equation. In turn, this gives the problem a structure similar to that of Differential Variational Inequalities (DVIs) and allows one to approach the problem using the time-stepping approaches that have recently proved so successful for DVIs. Our results are divided into two parts. First, we demonstrate that if points along a solution manifold are consistently strongly regular, it is possible to track the manifold approximately by solving a single linear complementarity problem (LCP) per time step. We derive sufficient conditions that guarantee that the tracking error remains bounded to second order in the size of the time step, even if the LCP is solved only approximately. These results pave the way for a new family of NLP strategies. Second, we derive a particular strategy where the NLP is reformulated using an augmented Lagrangian penalty function. This permits the use of a matrix-free projected successive over-relaxation (PSOR) algorithm to solve the LCP at each time step. We demonstrate that PSOR is particularly attractive in control architectures because it can perform linear algebra and active-set identification tasks quickly and efficiently.

We demonstrate the developments through a simple numerical case study and discuss future applications including energy management in power plants, refineries, and buildings; voltage and power flow monitoring and control in transmission systems; and optimal start-up and shutdown of power plants.

 

Sequential Monte Carlo Method in Chemical Safety Assessment

Xiaoyan Zeng
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL

In practice, samples from a probability distribution conditioned on a large amount of historical observation data are required. Due to the high dimensionality and complexity of the distribution, sequential importance sampling with resampling (sequential Monte Carlo, SMC) methods are used. Rather than sampling from the target distribution all at once, SMC generates a sample step by step. In this talk, we will introduce the method and use it for the safety assessment of a chemical plant.
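
As a minimal sketch of sequential importance sampling with resampling, the following Python code implements a bootstrap particle filter for a toy scalar state-space model; the model, noise levels, and likelihood are illustrative stand-ins, not the chemical-plant application of the talk.

    # Bootstrap particle filter for the toy model
    #   x_t = 0.9 * x_{t-1} + process noise,   y_t = x_t + observation noise.
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 50, 1000                      # time steps, number of particles
    x_true = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t - 1] + rng.normal(scale=0.5)
        y[t] = x_true[t] + rng.normal(scale=0.3)

    particles = rng.normal(scale=1.0, size=N)
    estimates = []
    for t in range(1, T):
        # Propagate particles through the state model (importance sampling step).
        particles = 0.9 * particles + rng.normal(scale=0.5, size=N)
        # Weight by the observation likelihood p(y_t | x_t).
        w = np.exp(-0.5 * ((y[t] - particles) / 0.3) ** 2) + 1e-300
        w /= w.sum()
        # Resample to avoid weight degeneracy.
        particles = rng.choice(particles, size=N, replace=True, p=w)
        estimates.append(particles.mean())

    print(np.mean(np.abs(np.array(estimates) - x_true[1:])))   # mean filtering error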