SIAM Undergraduate Research Online

Volume 7


The Krein Matrix and an Interlacing Theorem

Published electronically January 16, 2014
DOI: 10.1137/13S01256X

Authors: Shamuel Auyeung, Eric Yu, (Calvin College, Grand Rapids, MI)
Sponsor: Todd Kapitula (Calvin College, Grand Rapids, MI)

Abstract: Consider the linear general eigenvalue problem Ay = λBy, where A and B are both invertible and Hermitian N × N matrices. In this paper we construct a set of meromorphic functions, the Krein eigenvalues, whose zeros correspond to the real eigenvalues of the general eigenvalue problem. The Krein eigenvalues are generated by the Krein matrix, which is constructed through projections on the positive and negative eigenspaces of B. The number of Krein eigenvalues depends on the number of negative eigenvalues of B. These constructions allow us not only to determine solutions to the general eigenvalue problem, but also to determine the Krein signature for each real eigenvalue. Furthermore, by applying our formulation to the simplest case of the general eigenvalue problem (where B has one negative eigenvalue), we are able to show an interlacing theorem between the eigenvalues of the general problem and the eigenvalues of A.
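As a rough illustration of the generalized eigenvalue problem the abstract describes (not the authors' Krein-matrix construction), a minimal sketch for a hypothetical 2 × 2 example in the paper's simplest case, where B has exactly one negative eigenvalue, might look like:

```python
import math

# Hypothetical illustrative matrices: A is Hermitian and B = diag(1, -1)
# has one negative eigenvalue, the simplest case treated in the paper.
A = [[2.0, 1.0],
     [1.0, 3.0]]
B = [[1.0, 0.0],
     [0.0, -1.0]]

# Real eigenvalues of A y = lambda B y solve det(A - lambda*B) = 0.
# For these matrices the determinant expands to
# (2 - l)(3 + l) - 1 = -(l^2 + l - 5), so we solve l^2 + l - 5 = 0.
a, b, c = 1.0, 1.0, -5.0
disc = b * b - 4 * a * c
eigs = sorted([(-b - math.sqrt(disc)) / (2 * a),
               (-b + math.sqrt(disc)) / (2 * a)])

def det_shift(l):
    """det(A - l*B) for the 2x2 example above."""
    m = [[A[i][j] - l * B[i][j] for j in range(2)] for i in range(2)]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]
```

Here the indefiniteness of B splits the spectrum into one negative and one positive real eigenvalue, the setting in which the paper's interlacing result applies.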

A Stochastic Version of the Pedersen-Sherman Insulin Secretion Model

Published electronically January 29, 2014
DOI: 10.1137/12S011994

Author: Ankit Dwivedi, Indian Institute of Science Education and Research - Pune (IISER- Pune)
Sponsor: Pranay Goel, Indian Institute of Science Education and Research - Pune (IISER- Pune)

Abstract: Pedersen and Sherman have recently developed a multi-pool model of insulin vesicle secretion from pancreatic beta-cells [12]. In the Pedersen-Sherman model, pool sizes are reported as concentrations; however, concentrations of the different pools vary by as much as seven orders of magnitude. Very low concentrations indicate there could be discrete numbers of vesicles in some of the pools, leading naturally to questions regarding stochasticity in the signalling pathway. We therefore simulate a discrete counterpart of the deterministic model with a (hybrid) Gillespie-style stochastic simulation algorithm. The stochastic simulations are calibrated to the deterministic model in the mean. We estimate the variances in pool sizes numerically and show that they closely track the mean, indicating a Poisson-like result. Our numerical results demonstrate that the mean behavior indicated in the Pedersen-Sherman model is evident with just one islet.
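To give a flavor of a Gillespie-style simulation of a discrete vesicle pool (a single birth-death pool with illustrative rates, not the Pedersen-Sherman model itself):

```python
import math
import random

def gillespie_birth_death(k_in, k_out, n0, t_end, rng):
    """Exact stochastic simulation of a single vesicle pool:
    production at constant rate k_in, removal at rate k_out * n.
    Returns the discrete copy number at time t_end."""
    t, n = 0.0, n0
    while True:
        rates = [k_in, k_out * n]
        total = sum(rates)
        if total == 0.0:
            return n
        # Exponential waiting time until the next reaction fires.
        t += -math.log(rng.random()) / total
        if t > t_end:
            return n
        # Choose which reaction fires, with probability proportional to rate.
        if rng.random() * total < rates[0]:
            n += 1
        else:
            n -= 1

rng = random.Random(0)
final = gillespie_birth_death(k_in=5.0, k_out=0.1, n0=0, t_end=100.0, rng=rng)
# At stationarity the copy number is Poisson with mean k_in / k_out = 50,
# so the variance tracks the mean, as the abstract reports.
```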

Locally Chosen Grad-Div Stabilization Parameters for Finite Element Discretizations of Incompressible Flow Problems

Published electronically March 6, 2014
DOI: 10.1137/14S012786

Author: Nathan D Heavner (Clemson University)
Sponsor: Leo G Rebholz (Clemson University)

Abstract: Grad-div stabilization has recently been found to be an important tool for finite element method simulations of incompressible flow problems, improving mass conservation in solutions and reducing the effect of the pressure error on the velocity error. Typically, the associated stabilization parameter is chosen globally, but herein we consider local choices. We show that, both for an analytic test problem and for lift and drag calculations of flow around a cylinder, local choices of the grad-div stabilization parameter can provide more accurate solutions.

A High Order Method for Three Phase Flow in Homogeneous Porous Media

Published electronically March 18, 2014
DOI: 10.1137/13S012637

Author: Justin Dong (Rice University)
Sponsor: Beatrice Riviere (Rice University)

Abstract: The modeling of three-phase fluid flow has many important applications in reservoir simulation. In this paper, we introduce a high order method for sequentially solving the phase pressure-saturation formulation of three-phase flow. The sequential approach, while less stable than a fully coupled approach, is more computationally efficient. We present a discontinuous Galerkin method in space, suitable for the sequential approach due to its high-order accuracy. We consider coarser meshes with high degree polynomials to minimize the dimension of the system while maximizing accuracy. Numerical results are given for homogeneous media.

Analysis and Simulation of the Three-Component Model of HIV Dynamics

Published electronically March 25, 2014
DOI: 10.1137/13S012698

Authors: Eric Jones and Peter Roemer (Colorado School of Mines, Golden, CO)
Sponsors: Stephen Pankavich and Mrinal Raghupathi (Colorado School of Mines, Golden, CO)

Abstract: Mathematical modeling of biological systems is crucial to effectively and efficiently developing treatments for medical conditions that plague humanity. Often, systems of ordinary differential equations are a traditional tool used to describe the spread of disease within the body. We consider the dynamics of the Human Immunodeficiency Virus (HIV) in vivo during the initial stages of infection. In particular, we examine the well-known three-component model and prove the existence, uniqueness, and boundedness of solutions. Furthermore, we prove that solutions remain biologically meaningful, i.e., are positivity preserving, and perform a thorough, local stability analysis for the equilibrium states of the system. Finally, we incorporate random coefficients into the model, selected from uniform, triangular, and truncated normal probability distributions, and obtain numerical results to predict the probability of infection given the transmission of the virus to a new individual.
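A minimal numerical sketch of the widely studied three-component in-host model (target cells T, infected cells I, free virus V); the parameter values below are illustrative stand-ins, not taken from the paper:

```python
def hiv_step(state, dt, s=10.0, d=0.1, beta=2e-3, delta=0.5, p=10.0, c=3.0):
    """One forward-Euler step of the three-component model
    T' = s - d*T - beta*V*T,  I' = beta*V*T - delta*I,  V' = p*I - c*V."""
    T, I, V = state
    return (T + dt * (s - d * T - beta * V * T),
            I + dt * (beta * V * T - delta * I),
            V + dt * (p * I - c * V))

# Integrate from a small initial viral load to t = 50.
state = (100.0, 0.0, 1.0)
dt = 0.01
trajectory = [state]
for _ in range(5000):
    state = hiv_step(state, dt)
    trajectory.append(state)
# With a sufficiently small step, the numerical solution stays nonnegative,
# mirroring the positivity-preservation property proved in the paper.
```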

Determining Critical Locations in a Road Network

Published electronically April 2, 2014
DOI: 10.1137/13S012595

Authors: Louis Joslyn, Casey Croson, and Sara Reed (Simpson College)
Sponsor: Debra Czarneski (Simpson College)

Abstract: Critical locations in infrastructure are roads that, if damaged, would cause a large disruption in the ability of vehicles to navigate a city. In this paper, we consider critical locations in the road network of Indianola, Iowa. The presence of cut vertices and values of betweenness for a given road segment are used in determining the importance of that given road segment. We present a model that uses these critical factors to order the importance of separate road segments. Finally, we explore models that focus on betweenness to improve accuracy when discovering critical locations.
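Cut-vertex detection, one of the two criteria the abstract mentions, can be sketched with Tarjan's DFS low-link method on a toy road graph (the graph below is illustrative, not the Indianola network):

```python
def articulation_points(adj):
    """Return the cut vertices of an undirected graph given as an
    adjacency dict {node: [neighbors]}, using Tarjan's DFS algorithm."""
    disc, low, cuts = {}, {}, set()
    counter = [0]

    def dfs(u, parent):
        disc[u] = low[u] = counter[0]
        counter[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # No back edge from v's subtree climbs above u,
                # so removing u disconnects that subtree.
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        # A DFS root is a cut vertex iff it has two or more DFS children.
        if parent is None and children > 1:
            cuts.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return cuts

# Toy network: two road loops joined only at intersection "c",
# so damage at "c" would split the network in two.
roads = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d", "e"],
         "d": ["c", "e"], "e": ["c", "d"]}
```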

A Comparison of Common Methods for Optimal Well Placement

Published electronically May 2, 2014
DOI: 10.1137/13S012510

Author: Jeremy J. Minton (University of Auckland, New Zealand)
Sponsor: Rosalind Archer (University of Auckland, New Zealand)

Abstract: The well placement problem is challenging due to the non-linear, discrete and often multi-modal objective function. This is complicated by computationally expensive function evaluations from reservoir simulation, typically producing no gradient information. Heuristics for automated optimisation have been proposed over the past 20 years with minimal comparison or benchmarking of performance. This paper presents a comparison of common methods, including genetic algorithms, simulated annealing, particle swarm optimisation and variants of the hill climbing algorithms. These algorithms were tested in the context of locating a single production or injection well in two reservoir cases. For this class of problem, the genetic algorithm produced good results after 100 evaluations. The particle swarm method performed only slightly worse but was able to improve considerably with the use of ‘educated guesses’ seeding its initialisation. For this reason the particle swarm optimiser is arguably the better method for industrial implementation as some idea of optimality would exist through intuition and experience. A recommendation for further development is the investigation of objective function approximations for initialisation seeding, as well as sequentially combining algorithms, such as particle swarm and simulated annealing, to find a combination of explorative then exploitative searching. The validity of these results is limited, however, by the small sample size and the restriction to single-well problems. In fact, some findings were contradictory to previously published guidelines, which illustrates the need for this type of comparison: understanding an algorithm's performance under given conditions is necessary to leverage its benefits, particularly with problems as demanding as well placement.
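The simplest family compared in the paper, discrete hill climbing, can be sketched on a gridded "reservoir" with a stand-in objective (a toy surface, not a reservoir simulator; the 100-evaluation budget echoes the paper's comparison point):

```python
def hill_climb(objective, start, grid_size, max_evals=100):
    """Greedy discrete hill climbing: move to the best of the four
    neighboring grid cells until no neighbor improves, mimicking a
    well-placement search where each evaluation is one simulation run."""
    x, y = start
    best = objective(x, y)
    evals = 1
    while evals < max_evals:
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < grid_size and 0 <= y + dy < grid_size]
        scores = [(objective(nx, ny), (nx, ny)) for nx, ny in neighbors]
        evals += len(scores)
        top, pos = max(scores)
        if top <= best:
            return (x, y), best, evals   # local optimum reached
        (x, y), best = pos, top
    return (x, y), best, evals

# Stand-in "production" surface with a single peak at cell (12, 7).
def production(x, y):
    return -((x - 12) ** 2 + (y - 7) ** 2)

location, value, used = hill_climb(production, start=(0, 0), grid_size=20)
```

On this unimodal toy surface the greedy climb reaches the global optimum; the multi-modal objectives described in the abstract are exactly where such a method stalls in local optima and the population-based heuristics earn their keep.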

Topological Entropy Bounds for Hyperbolic Plateaus of the Hénon Map

Published electronically May 16, 2014
DOI: 10.1137/13S012716

Author: Rafael M. Frongillo  (University of California at Berkeley)
Sponsor: John Smillie (Cornell University)

Abstract: Combining two existing rigorous computational methods, for verifying hyperbolicity (due to Arai) and for computing topological entropy bounds (due to Day et al.), we prove lower bounds on topological entropy for 43 hyperbolic plateaus of the Hénon map. We also examine the 16 area-preserving plateaus studied by Arai and compare our results with related work. Along the way, we augment the algorithms of Day et al. with routines to optimize the algorithmic parameters and simplify the resulting semi-conjugate subshift.

Epidemiology of MdSGHV in Musca domestica

Published electronically July 9, 2014
DOI: 10.1137/13S012376

Author: Celeste Vallejo (University of Florida)
Sponsor: James Keesling (University of Florida)

Abstract: Musca domestica salivary gland hypertrophy virus (MdSGHV) infects Musca domestica (the common house fly), enlarging the salivary glands and causing infertility in female house flies. An infected female fly will no longer produce or lay eggs and will refuse a male's mating attempts. Collaborating with the University of Florida's Entomology and Nematology Department, we have hypothesized that the virus is primarily transmitted through male-male interactions. The constant interaction between males causes damage to the flies' cuticle. When the virus is present on the cuticle, this damage allows the virus entry into the hemocoel, where it subsequently infects the internal organs, in particular the salivary glands. The virus adheres to the fly cuticle at contaminated resting sites and contaminated feeding sites. Infection per os is another possible mode of transmission but is insufficient to sustain the virus in the fly population. For females, MdSGHV seems to be transmitted largely per os, with random occurrences of cuticular damage that take place at the contaminated food and resting sites. There is no evidence that females exhibit aggressive behavior, or that mating causes cuticular damage. Vertical transmission does not appear to occur. We developed a system of differential equations to model the transmission of MdSGHV for a population of Musca domestica at Florida dairy farms. The model is based on the above understanding of the transmission of the virus together with some other simplifying assumptions. A subsequent experiment verified our assumed mode of transmission. This paper illustrates the utility of mathematics in solving problems in other disciplines.

Spatial-Temporal Modeling of Active Layer Thickness

Published electronically July 16, 2014
DOI: 10.1137/14S012798

Author: Qian Chen (The George Washington University)
Sponsor: Tatiyana Apanasovich (The George Washington University)

Abstract: The objective of this study is to provide the methodology to model and estimate spatial-temporal variation in the active layer thickness (ALT) at the U1 Barrow site of the Circumpolar Active Layer Monitoring network, and to demonstrate its use in spatial-temporal interpolation. Specifically, we use 18 years of data (1995-2012) collected on an 11 by 11 square grid of locations separated by 100 meters to build the model. Then, we use the data collected in 2013 to demonstrate the validity and predictive power of our methodology. In our study, we propose two models that provide a realistic description of space-time variability in ALT while remaining feasible for efficient estimation of model parameters from available data. Specifically, we adopt a linear modeling approach. The main modeling difficulties lie in defining a deterministic trend that represents the large-scale spatial and temporal variation, and a realistic stochastic model that characterizes the space-time dependency of the residuals. Formulations that take into account interactions between spatial and temporal components are also developed. Fitting the space-time geostatistical model can be computationally demanding since the number of observations is large. Hence, we use a composite likelihood approach, a criterion function based on the likelihood of marginal events. In our data analysis, we demonstrate that our models resemble the empirical patterns. Moreover, we compare our models to a naive one, which does not take the spatial and temporal correlation in the residuals into consideration. The root mean squared error is reduced by 27 percent when our approach is taken.

Lunch Crunch: Can Nutritious Be Affordable and Delicious?

Published electronically July 22, 2014
DOI: 10.1137/14S013123
M3 Challenge Introduction

Authors: Anne Lee, Irwin Li, Steven Liao, Zack Polizzi, and Jennifer Wu (NC School of Science and Mathematics, Durham, NC)
Sponsor: Daniel Teague (NC School of Science and Mathematics, Durham, NC)

Abstract: Along with educating children on proper nutrition, schools are also responsible for providing healthy daily lunches to students. Eating properly has been shown not only to improve health but also to enhance classroom performance and reduce incidences of disease. Recent legislative acts such as the Healthy, Hunger-Free Kids Act of 2010 have been instrumental in pushing schools to offer more nutritious menu options. However, there have been many obstacles in the implementation and success of the program. Students have shied away from new school lunches, arguing that the healthier options are not as tasty. School districts also struggle to cover the rising expense of more nutritious foods with stretched budgets.

Our job is to develop a mathematical model to optimize the affordability, taste, and nutrition of school meals. The first step in this process is to develop a method to calculate the calories a student needs at lunch. Current calorie calculators give recommended values for an entire day, while our method returns a result that can be specified to lunchtime needs based on attributes such as amount of sleep, whether or not breakfast is eaten, time spent exercising in the morning and afternoon, lunch and dinner times, frequency of snacking, and weight. Based on these parameters, we are able to calculate a person's rate of calorie burn throughout various periods of a day and find the number of calories they would need to consume at lunchtime.

We then sought to determine what percentage of high school students are satisfied by standard school lunches given the calorie requirements calculated in the previous step. We ran our model on data from a sample of 11,458 high school students across the country in order to determine the distribution of lunchtime calorie needs. Given that a standard school lunch contains 850 calories, we then calculated the frequency at which a student's caloric needs fell within a fixed range of values above or below 850 calories. We found that, currently, only 37.9% of students would be satisfied. By analyzing the distribution our model returned, we found that a 1,050-calorie meal was a better solution that would satisfy a greater number of students (49%).

Finally, we established an ideal lunch meal for schools to offer to students based upon the recommendations of USDA’s MyPlate and what we observed to be the optimal number of calories to consume in order to satisfy the maximum percentage of students. Using data on foods available to a school and survey results showing how well students accept various foods, we were able to create an algorithm that generates meals that meet the USDA MyPlate requirement, have the necessary number of calories, and fit the available school budget. We then ranked these meals based on student acceptance to find the best meal choices, which would be well received by students while still meeting dietary and budget requirements. Using localized survey results allows our model to be customized for schools in different geographical and socioeconomic situations.

Resolving the One-dimensional Autonomous Flow-free Explosion Problem

Published electronically July 25, 2014
DOI: 10.1137/14S013196

Author: James Thomas Murphy III (Carnegie Mellon University)
Sponsor: Gautam Iyer (Carnegie Mellon University) 

Abstract: We introduce the explosion problem for semilinear elliptic and parabolic PDEs. The explosion problem is a question of existence of nonnegative solutions to −Lφ = λg(x, φ) and nonnegative global solutions to ∂tφ − Lφ = λg(x, t, φ). Under certain conditions there is a λ* > 0, called the explosion threshold, for which λ < λ* implies a solution exists, and λ > λ* implies no solution exists. In this paper we focus on the one-dimensional elliptic case. Our main result is the explicit calculation of the explosion threshold λ* for the one-dimensional autonomous flow-free problem −φ″ = λg(φ).
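The threshold structure described in the abstract can be made concrete in a classical special case not taken from the paper, the Gelfand-Bratu problem with g(φ) = e^φ, for which the explosion threshold is well known:

```latex
% Classical special case for comparison (Gelfand--Bratu problem),
% not the paper's general result:
\[ -\phi'' = \lambda e^{\phi}, \qquad \phi(0) = \phi(1) = 0. \]
% Two solutions exist for $0 < \lambda < \lambda^*$, exactly one at
% $\lambda = \lambda^*$, and none for $\lambda > \lambda^*$, with the
% threshold $\lambda^* \approx 3.5138$.
```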

Competitive Algorithms for an Online Rent or Buy Problem with Variable Demand

Published electronically September 8, 2014
DOI: 10.1137/14S013032

Author: Rohan Kodialam (High Technology High School, Marlboro, NJ)
Sponsor: T. V. Lakshman (Bell Labs, Alcatel-Lucent, Holmdel, NJ)

Abstract: We consider a generalization of the classical Ski Rental Problem motivated by applications in cloud computing. We develop deterministic and probabilistic online algorithms for rent/buy decision problems with time-varying demand. We show that these algorithms have competitive ratios of 2 and 1.582, respectively. We further establish the optimality of these algorithms.
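For background, the classical Ski Rental Problem (constant demand, not the paper's time-varying generalization) admits a deterministic break-even rule whose competitive ratio approaches the 2 quoted in the abstract:

```python
def ski_rental_cost(num_days, buy_price, rent_price=1):
    """Classical break-even rule: rent until total rent would reach the
    purchase price, then buy. Achieves competitive ratio 2 - 1/buy_price."""
    rental_days = min(num_days, buy_price - 1)   # rent for the first B-1 days
    cost = rental_days * rent_price
    if num_days >= buy_price:                    # then buy on day B
        cost += buy_price
    return cost

def optimal_cost(num_days, buy_price, rent_price=1):
    """Offline optimum with hindsight: buy up front or rent throughout."""
    return min(buy_price, num_days * rent_price)

# The ratio of online cost to offline cost never exceeds 2 - 1/B.
B = 10
ratios = [ski_rental_cost(n, B) / optimal_cost(n, B) for n in range(1, 200)]
```

Against any horizon, the rule pays at most (B - 1) + B = 2B - 1 while the optimum pays at least B, which is the source of the factor-2 guarantee the paper extends to variable demand.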

Water, Water Everywhere: Meeting the Demands of Saudi Arabia’s Water Needs

Published electronically September 15, 2014
DOI: 10.1137/14S012865

Authors: Aradhya Sood, Namgyal Angmo, and Yukiko Iwasaki (Colorado College, Colorado Springs, CO)
Sponsor: Andrea Bruder (Colorado College, Colorado Springs, CO)

Abstract: The objective of this paper is to introduce models to determine an effective, feasible, and cost-efficient strategy for Saudi Arabia's water supply system to meet its projected demand in 2025. This paper uses cost minimizing and production maximizing approaches to build the models. The water management system is divided into three processes: desalination, distribution, and wastewater treatment. For desalination and wastewater treatment aspects of the water supply, we use a Cobb-Douglas production maximization model. The model determines the optimal levels of different inputs that maximize the production of usable water, given the Saudi Arabia government's budget constraints. For the water distribution model, a cost function is minimized with the 2025 water demand constraint and is used to determine the optimal diameter of the pipes and hydraulic head. Using the desalination process model, the paper found that the optimal level of input of electricity is 4.9 billion kWh and the maximized water output is estimated at 3.3 billion m3. In addition, using the wastewater treatment model, the paper found that the optimal level of electricity input is 26.5 billion kWh annually and the maximized level of water production is 7.7 billion m3. The water distribution model estimates that, given the 2025 water demand in Saudi Arabia, the set-up and operating cost of the water distribution grid is approximately $68.88 million. The model also estimates that a minimum pipe diameter of 5.37 m and hydraulic head of 1186.14 m is required to meet the demand of three sectors: agriculture, industry, and domestic.

A Rotation Scheme for Accurately Computing Meteoroid Flux

Published electronically September 23, 2014
DOI: 10.1137/13S012686

Author: Anita Thomas (Illinois Institute of Technology, Chicago, IL)
Sponsors: Martin Ratliff (NASA Jet Propulsion Laboratory, Caltech, Pasadena, CA) and Shuwang Li (Illinois Institute of Technology, Chicago, IL)

Abstract: Accurately estimating meteoroid flux helps analyze the risk of collisions while spacecraft move in interplanetary space. Currently, NASA uses the Meteoroid Engineering Model (MEM) to estimate meteoroid flux and the Bumper-II code to evaluate damage to a spacecraft structure. To help the Bumper algorithm evaluate potential damage, we develop a rotation scheme that converts meteoroid flux information from the reference frame used in the Meteoroid Engineering Model to Bumper's reference frame as a spacecraft rotates in space. Using randomly generated data, we show our average accuracy can be at least 98.6%.
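The core operation the abstract describes, re-expressing a direction vector in a rotated reference frame, can be sketched with a single rotation matrix (an illustrative z-axis rotation, not the paper's MEM-to-Bumper scheme):

```python
import math

def rot_z(angle):
    """3x3 rotation matrix about the z-axis (right-handed, radians)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def apply(matrix, vec):
    """Matrix-vector product for 3-vectors."""
    return [sum(matrix[i][j] * vec[j] for j in range(3)) for i in range(3)]

# A meteoroid arrival direction expressed in one frame, re-expressed in a
# frame rotated 90 degrees about z; rotation preserves the flux magnitude.
v = [1.0, 0.0, 0.0]
w = apply(rot_z(math.pi / 2), v)
```

Chaining such matrices for each axis of spacecraft rotation gives the frame conversion; the accuracy question the paper addresses arises because flux is binned over directions, not stored as exact vectors.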

A New Method for Approximating Logarithms with k-th Order Convergence

Published electronically October 21, 2014
DOI: 10.1137/14S013251

Author: William Y. X. Zou (Pierre Elliott Trudeau High School, Markham, Ontario, Canada)
Sponsor: Mahmood Pournasrola (Pierre Elliott Trudeau High School, Markham, Ontario, Canada)

Abstract: We provide an algorithm to approximate logarithms with k-th order convergence. In addition to fast convergence, an upper bound for the error is available within pre-defined levels.
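For flavor, here is not the authors' k-th order method but the classic second-order (k = 2) approach: Newton's iteration on f(y) = exp(y) − x, whose error roughly squares at each step:

```python
import math

def log_newton(x, tol=1e-14, max_iter=60):
    """Approximate ln(x) by Newton's method on f(y) = exp(y) - x,
    which gives the second-order iteration y <- y - 1 + x*exp(-y)."""
    if x <= 0:
        raise ValueError("log is defined for positive x only")
    # Rough starting guess: x - 1 for small x, else bit length times ln 2.
    y = x - 1.0 if x < 2 else (len(bin(int(x))) - 2) * 0.693
    for _ in range(max_iter):
        step = x * math.exp(-y) - 1.0
        y += step
        if abs(step) < tol:
            break
    return y
```

The step size itself serves as a computable error estimate, loosely analogous to the pre-defined error bound the abstract mentions.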

Quantifying and Comparing Centrality Measures for Network Individuals as Applied to the Enron Corpus

Published electronically October 27, 2014
DOI: 10.1137/14S013202

Authors: Timothy Kaye, David Khatami, Daniel Metz, and Emily Proulx (Pomona College, Claremont, CA)
Sponsor: Jo Hardin (Pomona College, Claremont, CA)

Abstract: The ever increasing body of social networks creates an opportunity for extensive network analysis and investigations of communications, cliques, and network contributions. In this study, we focus our attention on the Enron email corpus and the corresponding network of employees, attempting to gather information from the email communications. Methods of data reduction on the email corpus were used to create a weighted adjacency matrix in which each (i, j)-entry corresponds to a weighted count of correspondences from employee i to employee j. While there are many ways to measure importance within a corporate network, of which job title constitutes one such measure, our study focuses on five primary measures: eigenvector centrality, row-sums of a topological overlap matrix, closeness, betweenness, and the Opsahl metric. These network analysis metrics were applied to the weighted adjacency matrix to calculate the centrality measures for each individual employee, which were subsequently compiled into ordinally ranked lists of employees for each centrality measure based on decreasing importance. Additionally, the centrality data was visualized using the Data-Driven Documents (D3) JavaScript library, allowing for network visualization in terms of department, job title, and number of emails sent.

In applying the centrality measures to network data, we explore the differences inherent in each measure and work to compare them as well as the corresponding employee importance rankings for each. The metrics in our analysis determined individual importance of employees by applying significant weight to various aspects of the employees’ network roles. By identifying employees that are connected to a large number of individuals and simultaneously have extensive correspondences with those individuals, the Opsahl score combines the other measures, proving to be the most useful metric in exploring Enron’s inner-corporate structure.
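One of the five measures, eigenvector centrality, reduces to a short power iteration on the weighted adjacency matrix; the matrix below is a toy "email" network, not Enron data:

```python
def eigenvector_centrality(W, iters=200):
    """Power iteration on a weighted adjacency matrix W (list of lists).
    The identity shift (applying W + I) keeps the iteration from
    oscillating on bipartite graphs without changing the dominant
    eigenvector; scores are normalized so the top node gets 1.0."""
    n = len(W)
    x = [1.0] * n
    for _ in range(iters):
        y = [x[i] + sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]
    return x

# Toy network: employee 0 corresponds with everyone (weights = email
# counts); employees 1-3 correspond only with employee 0.
W = [[0, 5, 3, 2],
     [5, 0, 0, 0],
     [3, 0, 0, 0],
     [2, 0, 0, 0]]
scores = eigenvector_centrality(W)
```

As expected, the hub receives the top score, and the leaf scores inherit the ordering of their edge weights, illustrating how a node's importance reflects the importance of its correspondents.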

Coherent Structures in Scalar Feed-Forward Chains

Published electronically November 12, 2014
DOI: 10.1137/14S013263

Authors: Christopher Browne (Rensselaer Polytechnic Institute, Troy, NY) and Andrew L. Dickerson (University of Wisconsin-Madison, Madison, WI)
Sponsors: Arnd Scheel and Gregory Faye (University of Minnesota, Minneapolis, MN)

Abstract: We study semi-infinite and bi-infinite scalar feed-forward networks. We find that the temporal dynamics of these systems is closely linked to the spatial dynamics of an associated interval map and show how this interval map may be used to describe stationary interfaces. Beyond stationary structures, we show that the onset of instabilities in finite networks is intimately related to the emergence of frustrated invasion fronts. These concepts are then applied to several toy models, whose intracellular dynamics mimic the normal form of elementary steady-state bifurcations.

Fourfun: A New System for Automatic Computations using Fourier Expansions

Published electronically December 15, 2014
DOI: 10.1137/14S013238

Author: Kristyn McLeod (Arizona State University, Tempe, AZ)
Sponsor: Rodrigo Platte (Arizona State University, Tempe, AZ)

Abstract: Using object-oriented programming in MATLAB, a collection of functions, named Fourfun, has been created to allow quick and accurate approximations of periodic functions with Fourier expansions. To increase efficiency and reduce the number of computations of the Fourier transform, Fourfun automatically determines the number of nodes necessary for representations that are accurate to close to machine precision. Common MATLAB functions have been overloaded to keep the syntax of the Fourfun class as consistent as possible with the general MATLAB syntax. We show that the system can be used to efficiently solve differential equations. Comparisons with Chebfun, a similar system based on Chebyshev polynomial approximations, are provided.
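The adaptive idea at the heart of the abstract, doubling the number of nodes until the trailing Fourier coefficients are negligible, can be sketched outside MATLAB (a naive O(N^2) Python transform for clarity; Fourfun itself uses the FFT):

```python
import cmath
import math

def fourier_coeffs(f, N):
    """Coefficients c_k (-N/2 <= k < N/2) of the trigonometric interpolant
    of a 2*pi-periodic f on N equispaced nodes (naive DFT for clarity)."""
    xs = [2 * math.pi * j / N for j in range(N)]
    vals = [f(x) for x in xs]
    return {k: sum(vals[j] * cmath.exp(-1j * k * xs[j]) for j in range(N)) / N
            for k in range(-(N // 2), N // 2)}

def auto_coeffs(f, tol=1e-12, N=8, N_max=4096):
    """Fourfun-style adaptivity: double the node count until the
    highest-frequency coefficients fall below tol, i.e. f is resolved."""
    while N <= N_max:
        c = fourier_coeffs(f, N)
        tail = max(abs(c[k]) for k in c if abs(k) >= N // 2 - 2)
        if tail < tol:
            return c
        N *= 2
    raise RuntimeError("function not resolved; is it smooth and periodic?")

def evaluate(c, x):
    """Evaluate the Fourier series sum_k c_k exp(i*k*x) at a point."""
    return sum(ck * cmath.exp(1j * k * x) for k, ck in c.items()).real

# Smooth periodic test function: coefficients decay superexponentially.
c = auto_coeffs(lambda x: math.exp(math.sin(x)))
```

For smooth periodic functions the coefficients decay geometrically or faster, so the doubling loop terminates with an approximation accurate to near machine precision, the behavior the abstract claims for Fourfun.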
