SIAM Undergraduate Research Online

Volume 11
An Analysis of the Impact of Self-Driving Cars on Traffic Conditions

Published electronically January 19, 2018
DOI: 10.1137/17S015768

Authors: Vinit Ranjan, Junmo Ryang, and Kelly Zhang (Duke University)
Sponsor: David Kraines (Duke University)

Abstract: The goal of this paper was to assess the effect of self-driving cars on traffic conditions in the Greater Seattle Area using data from the Consortium for Mathematics and Its Applications in the 2017 Mathematical Contest in Modeling. We built a computational, agent-based mathematical model in which we could vary parameters such as the number of lanes and the capacity of cars. After polling a sufficient sample space, we ran our model and used curve-fitting techniques to create functions describing the system. We used the model to calculate average speeds for various highways in Seattle. We then adapted the computational model to include self-driving cars and measured the resulting change in average car speed under various conditions. Our results quantify the increase in average speed as the percentage of self-driving cars grows. The advantages of our model are its agent-based aspects, which allow us to observe and model the system's behavior. Using these data to interpolate surfaces also allows for more analytic techniques. The computational model is flexible enough to poll data for different traffic conditions in other cities.
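The paper's agent-based traffic model is not reproduced in this listing, but the general idea of measuring average speed in a lane-based car-following simulation can be sketched with a standard Nagel-Schreckenberg-style cellular automaton. This is a stand-in, not the authors' model; the road length, speed limit, and slowdown probability below are illustrative.

```python
import random

ROAD_LENGTH = 100  # cells on a circular one-lane road

def step(cars, vmax=5, p_slow=0.3):
    """One parallel update of a Nagel-Schreckenberg-style automaton.
    `cars` maps cell position -> current speed."""
    positions = sorted(cars)
    new = {}
    for i, pos in enumerate(positions):
        v = cars[pos]
        ahead = positions[(i + 1) % len(positions)]
        gap = (ahead - pos - 1) % ROAD_LENGTH
        v = min(v + 1, vmax, gap)                 # accelerate, but keep headway
        if v > 0 and random.random() < p_slow:    # random slowdown
            v -= 1
        new[(pos + v) % ROAD_LENGTH] = v
    return new

def average_speed(density=0.2, steps=200, seed=0):
    """Mean car speed after `steps` updates at the given car density."""
    random.seed(seed)
    n = max(1, int(density * ROAD_LENGTH))
    cars = {p: 0 for p in random.sample(range(ROAD_LENGTH), n)}
    for _ in range(steps):
        cars = step(cars)
    return sum(cars.values()) / len(cars)
```

Sweeping `density` (or, in the paper's setting, the fraction of self-driving cars) and recording `average_speed` is the kind of polled data one would then fit curves to.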

A Generalization of the Minisum and Minimax Voting Methods

Published electronically January 23, 2018
DOI: 10.1137/16S014870

Author: Shankar Sivarajan (Indian Institute of Science)
Sponsor: Y. Narahari (Indian Institute of Science)

Abstract: In this paper, we propose a family of approval voting schemes for electing committees based on the preferences of voters. In our schemes, we calculate the vector of distances of the possible committees from each of the ballots and, for a given p-norm, choose the committee that minimizes the magnitude of the distance vector under that norm. The minisum and minimax methods suggested by previous authors and analyzed extensively in the literature naturally appear as special cases corresponding to p = 1 and p = ∞, respectively. Supported by examples, we suggest that using a small value of p, such as 2 or 3, provides a good compromise between the minisum and minimax voting methods with regard to the weight given to approvals and disapprovals. For large but finite p, our method reduces to finding the committee that covers the maximum number of voters, and this is far superior to the minimax method, which is prone to ties. We also discuss extensions of our methods to ternary voting.
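The p-norm family described in the abstract is easy to prototype by brute force over all k-member committees. A minimal sketch, assuming approval ballots represented as sets and symmetric-difference (Hamming) distance; this is an illustration of the idea, not the paper's implementation:

```python
from itertools import combinations

def hamming(committee, ballot):
    """Symmetric-difference distance between a committee and an approval ballot."""
    return len(committee ^ ballot)

def elect(ballots, candidates, k, p):
    """Return the k-member committee minimizing the p-norm of the
    vector of distances to the ballots (p = 1 recovers minisum;
    large p approaches minimax)."""
    best, best_score = None, float("inf")
    for combo in combinations(sorted(candidates), k):
        committee = set(combo)
        score = sum(hamming(committee, b) ** p for b in ballots)
        if score < best_score:
            best, best_score = combo, score
    return best
```

For example, with two voters approving {a, b} and one approving {c, d}, p = 1 elects (a, b) outright, while p = 3 prefers a compromise committee that partially covers every voter.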

Rank and Score Aggregation Methods in Competitive Climbing

Published electronically January 24, 2018
DOI: 10.1137/17S01594X

Author: Kira Parker (University of Utah)
Sponsor: Braxton Osting (University of Utah)

Abstract: Ranking methods are used in all aspects of life, from Google searches to sports tournaments. Because all ranking methods necessarily have advantages and disadvantages, USA Climbing, the organizer of national climbing competitions in the United States, changed their ranking method three times between 2009 and 2016. The combined rank method employed in 2015 marked a drastic step away from the previous two in that it failed to meet the independence of irrelevant alternatives (IIA) criterion and was almost impossible for spectators to use to calculate ranks on their own. We compare this more recent rank aggregation method with older USA Climbing score aggregation methods as well as other methods from the literature. Three particularly important methods we consider are (i) the combined rank method, (ii) a combination of the previous two USA Climbing score aggregation methods (the merged method), and (iii) a linear programming (LP)-based rank aggregation method from the literature. Using data from the 2016 Bouldering Youth National Championships, we perform leave-one-out cross validation and the Friedman hypothesis test to conclude that at the 99% confidence level, the LP-based rank aggregation method has significantly more predictive power than the other two methods, while there was insufficient evidence to distinguish between the predictive power of the combined rank method and the merged method. However, due to the desirable properties, such as the IIA criterion, satisfied by the merged method, we recommend this method for use in competitive climbing.

An Alternating Minimization Method to Train Neural Network Models for Brain Wave Classification

Published electronically February 21, 2018
DOI: 10.1137/17S016439

Author: Grant Sheen (Sage Hill High School)
Sponsor: Knut Solna (UC Irvine)

Abstract: An alternating minimization (AM) method, which updates variables one by one while fixing the rest, is developed to train a neural network with low-rank weights for brain wave classification. The training involves minimizing a non-smooth, non-convex cross-entropy loss function. The neural network model performs a projection inside a hidden layer for low-dimensional feature extraction. The sub-problem for each variable is shown to be either convex or piecewise convex with a finite number of minima. The sub-problems are solved using the bisection method, with bisection intervals derived analytically. The overall iterative AM method is descending and convergent and, unlike standard gradient descent, requires no step size (learning parameter). Experiments consist of noninvasive multiple-electrode recordings and the classification of brain wave data. The AM method significantly outperforms the standard neural network model trained by gradient descent in classifying four thoughts for both normal and Alzheimer's subjects.

Modeling Tsunami Run-Up and Draw-Down on the Beach

Published electronically February 21, 2018
DOI: 10.1137/17S015793

Authors: William Noland (North Central College), Dylan Smith (University of Connecticut), and Seth Selken (Iowa State University)
Sponsor: Sergei Fomin (California State University, Chico)

Abstract: Previous literature has investigated the run-up and draw-down of tsunami waves on a one-dimensional, constant-sloped beach, but the existing solutions are complex and computationally unwieldy. Our research aims to establish a simpler model while still obtaining accurate results. We do so by using a quasi-linear theory derived from the nonlinear shallow-water wave equations. These equations are considered over a linear beach with properly imposed initial and boundary conditions. The main difficulty in solving this problem is the moving boundary associated with the shoreline motion. To eliminate this difficulty, we apply an appropriate substitution to the spatial variable, and thus replace the moving boundary of the computational domain with a stationary boundary. A key feature of our tsunami problem is the presence of the small parameter ε = η0/h0, where η0 is the characteristic amplitude of the wave and h0 is the characteristic depth of the ocean. Due to the presence of this small parameter, the problem can be essentially linearized using the method of perturbations and then solved analytically via an integral transformation. Our explicit solution enables us to swiftly predict the behavior of the wave using an essentially linear model. We test the accuracy of our model against the numerical solution obtained using Mathematica, and find minimal discrepancies. Finally, we extend our results to a modified beach configuration that more accurately reflects real-world shoreline topography.

Exploring Blood Flow Dynamics Underlying Pathogenesis of Splenic Artery Aneurysms

Published electronically March 5, 2018
DOI: 10.1137/17S015926

Author: Kirsten Giesbrecht (Centre College)
Sponsor: Ellen Swanson (Centre College)

Abstract: If an expectant mother develops a splenic artery aneurysm that ruptures, there is a 90% fetal mortality rate. Splenic artery aneurysms account for 46-60% of visceral aneurysms in the human abdomen. The cause for the propensity of aneurysms to develop and rupture along the splenic artery is unknown. A distinguishing characteristic of this artery is its tortuous shape. We investigate how the unique geometries of the artery may lead to unusual patterns in blood flow. Dramatic changes in blood flow properties such as blood pressure, velocity and wall shear are conducive for aneurysm formation, development, and rupture. Using Ansys Fluent computational fluid dynamics software, the influence of these elements of blood flow through both straight and curved arteries is explored. Under identical initial and boundary conditions, the curved artery reaches both higher dynamic pressure and wall shear stress values than the straight artery. The curved artery also has a greater range of dynamic pressure and wall shear stress values compared to the range of values in the straight artery. Additionally, blood flow changes associated with pregnancy, including increased peak velocity, are shown to increase the risk of aneurysm development and rupture. These results indicate that the curved geometry of the splenic artery predisposes it to promote aneurysm rupture and growth.

L1 Regularization for Compact Support

Published electronically March 14, 2018
DOI: 10.1137/17S015859

Author: Timothy Li (Carnegie Mellon University)
Sponsor: Giovanni Leoni (Carnegie Mellon University)

Abstract: In this paper, we solve a conjecture of Osher and Yin that L1 regularization of certain minimization problems ensures compact support of minimizers. In particular, we consider a class of minimization problems and show that adding an L1 term forces compact support of solutions under certain sufficient conditions, namely that the given forcing function decays sufficiently fast toward -∞ and ∞. The condition found turns out to be sharp.

A Two-Stage Vehicle Routing Algorithm Applied to Disaster Relief Logistics after the 2015 Nepal Earthquake

Published electronically March 15, 2018
DOI: 10.1137/17S016385

Author: Stephanie Allen (SUNY Geneseo)
Sponsor: Caroline Haddad (SUNY Geneseo)

Abstract: After the April 2015 Nepal Earthquake, the Himalayan Disaster Relief Volunteer Group distributed supplies to affected areas. We model the organization’s delivery of supplies as a vehicle routing problem using Fisher and Jaikumar’s two-stage method, which allocates locations to vehicles via an integer program and then uses heuristics to route the vehicles. In the allocation stage, we use the assignment problem formulation to assign locations to vehicles. In the routing stage, we implement the greedy, sub-tour reversal, and simulated annealing heuristics for the sake of comparison. Our results illustrate the necessity of heuristics and the power that comes from using multiple heuristics for the vehicle routing problem.
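The routing-stage heuristics compared in the abstract can be illustrated in miniature: a greedy (nearest-neighbor) construction followed by sub-tour reversal (2-opt). This is a sketch of the general technique on random planar points, not the paper's implementation; the depot index and Euclidean distances are assumptions.

```python
import math
import random

def route_length(points, order):
    """Total length of a route that starts and ends at the depot (index 0)."""
    tour = [0] + order + [0]
    return sum(math.dist(points[tour[i]], points[tour[i + 1]])
               for i in range(len(tour) - 1))

def greedy_route(points):
    """Nearest-neighbor construction from the depot."""
    unvisited = set(range(1, len(points)))
    order, cur = [], 0
    while unvisited:
        cur = min(unvisited, key=lambda j: math.dist(points[cur], points[j]))
        order.append(cur)
        unvisited.remove(cur)
    return order

def two_opt(points, order):
    """Sub-tour reversal: keep reversing segments while the route shortens."""
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            for j in range(i + 1, len(order)):
                new = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if route_length(points, new) < route_length(points, order):
                    order, improved = new, True
    return order
```

Running `two_opt` on the greedy route never lengthens it, which is the basic comparison the paper makes before adding simulated annealing.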

Calibration of the Ross Recovery Theorem to Real-world Data, and Tests of its Practical Value

Published electronically March 19, 2018
DOI: 10.1137/17S016440

Authors: Ling Lan and Zhengxu Li (Courant Institute of Mathematical Sciences)
Sponsor: Robert Kohn (Courant Institute of Mathematical Sciences)

Abstract: The Ross Recovery Theorem [1] challenges the commonly held thought that derivative prices do not contain useful predictive information about the distribution of financial variables (such as an equity index). The theorem recovers, under certain hypotheses on the character of the market, the subjective probability distribution of an equity index from current derivative prices. In this paper, building on the method of Backwell [2] for extracting state prices from option prices, we develop a strategy for combining option data with the Recovery Theorem to estimate the subjective distribution. Using real-world data, we then investigate whether the Recovery Theorem yields predictive information and has practical value, concluding that the answer might be no.

Increasing Efficiency for United Way's Free Tax Campaign

Published electronically March 23, 2018
DOI: 10.1137/17S015811

Authors: Irena Chen, Jessica Fay, and Melissa Stadt (University of Washington)
Sponsor: Sara Billey (University of Washington)

Abstract: United Way of King County's Free Tax Campaign offers free tax return preparation for low-income residents in the greater Seattle area. For many years, United Way has been using the same workflow model, but is now considering switching to a new strategy that could potentially serve more clients. In order to evaluate which workflow would be more effective, we created Monte Carlo simulations to predict the maximum number of clients a given tax site could serve under varying levels of staffing. By analyzing results from both simulations, we concluded it would be beneficial for United Way to change to the proposed new service model. As the Free Tax Campaign grows, the new service model will continue to outperform the previous model. These simulations can also be used to assist in allocating staff across tax sites, in order to maximize the number of people the Free Tax Campaign can serve.
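The kind of Monte Carlo capacity simulation the abstract describes can be sketched as a single-queue, multi-server day: Poisson arrivals, exponential preparation times, and a count of clients finished before closing. The arrival rate, service time, and hours below are invented for illustration, not United Way's figures.

```python
import heapq
import random

def simulate_day(n_preparers, arrival_rate=6.0, mean_service=0.8,
                 hours=8.0, seed=None):
    """Count clients fully served in one day at a tax site with
    `n_preparers` working in parallel (FIFO queue)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:                                  # Poisson arrival process
        t += rng.expovariate(arrival_rate)
        if t > hours:
            break
        arrivals.append(t)
    free_at = [0.0] * n_preparers                # heap of preparer free times
    heapq.heapify(free_at)
    served = 0
    for a in arrivals:
        start = max(a, free_at[0])               # wait for earliest free preparer
        finish = start + rng.expovariate(1.0 / mean_service)
        if finish <= hours:
            served += 1
        heapq.heapreplace(free_at, finish)
    return served

def expected_served(n_preparers, trials=500):
    """Monte Carlo estimate of a site's daily capacity."""
    return sum(simulate_day(n_preparers, seed=i) for i in range(trials)) / trials
```

Comparing `expected_served` across staffing levels (or across two workflow variants of `simulate_day`) is the comparison the paper's simulations make at full scale.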

Linear Extensions of a Partial Order Subject to Algebraic Constraints

Published electronically March 28, 2018
DOI: 10.1137/17S015872

Authors: Zane Huttinga (Montana State University)
Sponsor: Bree Cummins and Tomáš Gedeon (Montana State University) 

Abstract: We present a novel combinatorial problem that arises from mathematical biology. In order to understand the dynamics of models of gene regulatory networks over a parameter space, a problem of constructing linear extensions of a partial order with algebraic constraints arises naturally. We formulate the problem for a class of algebraic constraints related to the form of nonlinearities in the gene regulation model. We provide an algorithm that partially solves the problem. We formulate a conjecture on the special role of additive constraints in the class of all considered constraints. We present several examples where we show that the number of solutions is much smaller than the number of unconstrained linear extensions.

Investigating the Gerchberg-Saxton Phase Retrieval Algorithm

Published electronically May 2, 2018
DOI: 10.1137/17S016610

Authors: Lily Wittle (University of Miami) and Theresa Thimons (Saint Vincent College)
Sponsor: Comlan de Souza (California State University, Fresno)

Abstract: Phase information of light and electron waves cannot be physically measured, but it can be estimated. Physicists R. W. Gerchberg and W. O. Saxton wrote an algorithm that estimates the phases of these waves when only their amplitudes are known. We sought to improve this algorithm. We observed the algorithm's performance experimentally using a numerical implementation that we wrote in MATLAB. We found that functions of the form f × g, where g is a Gaussian function, have better success than the corresponding f. We also found that a constant initial phase estimate produces more consistent and efficient results for non-centrosymmetric input than the random initial phase estimate used in the original algorithm. Our paper includes a proof of error convergence, a description of the implementation of a low-frequency filter, and full MATLAB scripts.
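The core Gerchberg-Saxton iteration is short enough to sketch: alternate between the object and Fourier domains, imposing the known amplitude in each while keeping the current phase estimate. This NumPy sketch (the paper's scripts are in MATLAB) uses the random initial phase of the original algorithm; the paper's suggested improvement would replace it with a constant.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=200, seed=0):
    """Estimate a phase such that the field has amplitude `source_amp`
    and its 2-D Fourier transform has amplitude `target_amp`."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2 * np.pi, source_amp.shape)  # random start
    f = source_amp * np.exp(1j * phase)
    for _ in range(iters):
        F = np.fft.fft2(f)
        F = target_amp * np.exp(1j * np.angle(F))   # impose Fourier amplitude
        f = np.fft.ifft2(F)
        f = source_amp * np.exp(1j * np.angle(f))   # impose object amplitude
    return np.angle(f)
```

The Fourier-domain amplitude error of the iterate is non-increasing, which is the error-convergence property the paper proves.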

Wronskian representations of hypergeometric integrals

Published electronically June 7, 2018
DOI: 10.1137/17S016634

Author: Anthony Zuccolo (Indiana University Northwest)
Sponsor: Axel Schulze-Halberg (Indiana University Northwest)

Abstract: We construct new representations of hypergeometric integrals in terms of Kampé de Fériet functions. Our method is based on recently developed integral formulas that allow certain integrals to be expressed through Wronskians.

Implementation of Gibbs Sampling within Bayesian Inference and its Applications in Actuarial Science

Published electronically June 13, 2018
DOI: 10.1137/17S016609

Author: Colton Gearhart (Northern Kentucky University)
Sponsor: Dhanuja Kasturiratna (Northern Kentucky University)

Abstract: This paper discusses how a variety of actuarial models can be implemented and analyzed with a Bayesian approach using Gibbs sampling, a Markov chain Monte Carlo method. This approach allows a practitioner to analyze complicated actuarial models by reducing them to simpler, more manageable models. Furthermore, general properties of Gibbs sampling are discussed through a simulation approach.
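Gibbs sampling as described here draws each variable in turn from its full conditional distribution. A minimal textbook sketch, not tied to any particular actuarial model in the paper: for a standard bivariate normal with correlation rho, each conditional is N(rho · other, 1 − rho²).

```python
import random

def gibbs_bivariate_normal(rho, n_samples=5000, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each coordinate is redrawn from its full conditional,
    N(rho * other, 1 - rho**2), holding the other fixed."""
    rng = random.Random(seed)
    sd = (1.0 - rho ** 2) ** 0.5
    x, y = 0.0, 0.0
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)   # draw x | y
        y = rng.gauss(rho * x, sd)   # draw y | x
        if i >= burn_in:             # discard burn-in draws
            samples.append((x, y))
    return samples
```

The empirical correlation of the retained draws approaches rho, which is the "reduce a joint model to simpler conditional pieces" idea the abstract refers to.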

Resilient Jammed Packing: A Novel Feature of a Classic Geometry Problem

Published electronically June 26, 2018
DOI: 10.1137/18S016667

Author: Steven Ma (Greenwich High School)
Sponsor: Qiang Du (Columbia University)

Abstract: This paper introduces a novel packing feature called resiliency. A sphere packing is considered resilient if the packing remains jammed even after several spheres are removed. Resilient jammed packings have various applications, such as in shipping, where a resilient jammed packing ensures the safety of the item being shipped even if some of the packing material breaks. In 2D, we prove that (1) a minimum of two disks must be removed to unjam the hexagonal packing, and (2) any other lattice packing can be unjammed after one disk is removed. In 3D, we prove that (1) a minimum of three spheres must be removed to unjam the face-centered cubic packing, and (2) any other lattice packing can be unjammed by removing at most two spheres. These results imply that the hexagonal packing is the most resilient 2D lattice packing and the face-centered cubic packing is the most resilient 3D lattice packing.

A Robust Optimization Approach for Selecting Urban Fire Engine Company Locations

Published electronically August 20, 2018
DOI: 10.1137/17S016397

Author: Vivek Gopalan (Pennsylvania State University)
Sponsor: Murali Kodialam (Nokia Bell Labs)

Abstract: When a fire breaks out in a city, an emergency call to 911 is made and a fire engine company responds to the incident. The average response time is an important system metric that must be kept under a suitable threshold. The goal of this project is to study the relationship between the number and locations of fire engine companies and their response times, to help cities optimize resources. Since the number and locations of fires are not known a priori, any selected set of engine company locations must be robust across a wide spectrum of fire incidents. Furthermore, if too much emphasis is placed on optimizing resources and system costs, some engine companies could bear a disproportionate fraction of the workload, leading to dissatisfaction or fatigue among firefighters. This project therefore also considers the incremental cost of ensuring equitable engine company workloads. A methodology combining integer programming techniques, ensemble learning, and local search using genetic algorithms is proposed to determine robust locations for the fire engine companies. Tested with a data set for the city of Philadelphia, the results support the hypothesis that optimizing fire engine company locations can result in significant savings for resource-strapped cities.

Toxoplasma gondii: A mathematical model of its transfer between cats and the environment

Published electronically August 29, 2018
DOI: 10.1137/17S016580

Author: Emily Kelting (University of Central Oklahoma)
Sponsor: Brittany Bannish (University of Central Oklahoma)

Abstract: We discuss the transmission of Toxoplasma gondii and its effects in a cat population. Due to the severity of T. gondii spillover infections in pregnant women and monk seals, understanding its dynamics in cats is key to unlocking preventative measures against this parasite. Taking into account susceptible, acutely infectious, and chronically infectious cats, and the surrounding environment, we build a differential equations model of T. gondii transmission in cats. We present our model and the results, identifying conditions under which infection persists. Specifically, we find that higher rates of infection and larger shedding rates facilitate endemic infection. We conclude by discussing how this information can be used to minimize risks to other species.

A Multi-Dam System Design for Zambezi River

Published electronically September 6, 2018
DOI: 10.1137/18S016680

Authors: Yue Sun, Qian Xu, and Qizhi Cai (Nanjing University)
Sponsor: Hongjun Fan (Nanjing University)

Abstract: The Kariba Dam on the Zambezi River currently faces foundation erosion and limited control capacity, so a strategy for water management on the Zambezi River is of great value for the whole Zambezi basin. Based on the detailed data provided by the World Bank, we assess three options: repairing the dam, rebuilding the dam, or replacing it with several smaller dams along the river. Based on this assessment, we strongly recommend replacing the dam with several smaller dams, and we provide a reasonable and detailed model for constructing the new dam system together with emergency strategies. In the modeling, a Hydrodynamic Model (HM) of the Zambezi River is established to estimate the flow and depth of the river by numerical simulation, which serves as a reference for selecting the sites and number of the new dams. A Difference Model (DM) of the Zambezi River with a dam system is then established to provide sufficient detail for determining the dam construction and modulating strategies. Considering the loss of energy in collisions, tributaries, and the variation of runoff with rainfall at the headstream and tributaries, the DM is corrected to match reality, so that the type of each dam can be determined and strategies can be designed to handle emergency water-flow situations such as flooding and drought. We validate that the new dam system can handle most extreme conditions within reasonable error.

Using Data Assimilation to Better Predict Contaminant Transport in Fluids

Published electronically September 20, 2018
DOI: 10.1137/18S017259

Authors: Jacob Honeycutt, Hannah Johnson, and Sarah Kelly (Clemson University)
Sponsor: Leo Rebholz (Clemson University)

Abstract: We study data assimilation in fluid transport models for contaminant transport, using the nudging technique proposed by Titi et al. in 2014 [2]. We prove, under the assumption of a sufficiently fine mesh and an appropriately chosen nudging parameter, that for any initial condition, the data-assimilated model solution is long-time stable and converges to the true solution exponentially fast in time. We prove this first at the PDE level, and then for a backward Euler finite element discretization. Results of a numerical test are also given that illustrate our theory and show the effectiveness of the proposed method.
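The nudging idea is simple to demonstrate on a toy ODE rather than the paper's transport PDE: run a copy of the model from the wrong initial condition, but add a relaxation term pulling it toward observations of the true solution. The equation, nudging parameter mu, and step size below are illustrative assumptions.

```python
import math

def nudge_demo(mu=10.0, dt=1e-3, T=10.0):
    """Continuous data assimilation by nudging for the toy ODE
    u' = -u + sin(t). The assimilated copy v starts from a wrong
    initial condition and obeys v' = -v + sin(t) - mu * (v - u_obs).
    Returns |u - v| at time T."""
    u, v = 1.0, -5.0          # true vs. wrong initial condition
    t = 0.0
    while t < T:
        f = math.sin(t)
        u_new = u + dt * (-u + f)                    # true trajectory
        v_new = v + dt * (-v + f - mu * (v - u))     # nudged trajectory
        u, v, t = u_new, v_new, t + dt
    return abs(u - v)
```

The difference e = v − u satisfies e' = −(1 + mu)e, so the assimilated solution converges to the truth exponentially fast, and faster for larger mu — the toy analogue of the theorem in the paper.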

Better ATE Than Never: A Mathematical Analysis of Food Waste Production and Redistribution

Published electronically September 27, 2018
DOI: 10.1137/18S01720X
M3 Challenge Introduction

Authors: Michael Vronsky, Ryan Huang, Daniel Wang, Justin Ju, and Joanne Yuan (Los Altos High School, Los Altos, CA)
Sponsor: Carol Evans (Los Altos High School, Los Altos, CA)

Abstract: Food insecurity affects millions of individuals in the United States. We develop a model to address food insecurity by repurposing food waste and apply this methodology to Texas. We also develop models to analyze the food waste habits of consumers and optimal distribution strategies for food banks. We first consider whether a state wastes enough food to feed its food-insecure population, regardless of distribution methods. We determine that there is not enough food waste to feed the food-insecure population of Texas, assuming each person needs 2,000 kcal per day. To establish a food consumption baseline for different demographics in the United States, we use government data to find the average number of calories needed per day by age, gender, and activity level. Then, to determine how income affects what food types households eat, we use a nonlinear model fit to predict the proportion of income spent on given food types based on annual income. This allows us to calculate how many pounds of food are wasted for any given household. Finally, we analyze three potential food distribution strategies, including fixed and mobile distribution centers. A key feature of our model is its extensibility and the use of computer simulation to model consumers as rational agents. Of the models tested, we find a model with multiple fixed distribution centers to be the most effective in the long run (after 4.8 years). The paper was originally a submission to the MathWorks Math Modeling Challenge in 2018.

Forecasting Algal Bloom Lags and Stability in a Watershed

Published electronically October 10, 2018
DOI: 10.1137/18S016643

Authors: Mackenzie Jones (The University of Akron), Alison Frideres (Simpson College), and Bailey Kramer (University of Wisconsin-Stout)
Sponsor: Keith Wojciechowski (University of Wisconsin-Stout)

Abstract: Near Menomonie, Wisconsin the lakes suffer from algae blooms during the warm summer months. Mathematical models describing the cyanobacteria population dynamics are studied with the intent of analyzing the conditions under which the populations grow and stabilize. Two models are considered, one for forecasting the population as the lake turns toxic from excess biomass after a flushing event occurs, and the other for establishing an algal bloom stability condition. These models are proposed for consideration to test the effectiveness of solutions put forth to ameliorate the algal bloom problem.

Convergence of the Randomized Block Gauss-Seidel Method

Published electronically November 8, 2018
DOI: 10.1137/17S015860

Author: Wei (Maggie) Wu (Scripps College)
Sponsor: Deanna Needell (University of California, Los Angeles)

Abstract: The Randomized Gauss-Seidel Method (RGS) is an iterative algorithm that solves large-scale systems of linear equations Ax = b. This paper studies a block version of the RGS method, the Randomized Block Gauss-Seidel Method (RBGS). At each step, the algorithm greedily minimizes the objective function L(x) = ||Ax - b||_2^2 with respect to a subset of coordinates. This paper describes the RBGS method, which uses a randomized control method to choose a subset of columns of A at each step. We show this method exhibits an expected linear convergence rate that can be described by the properties of the matrix A and its column submatrices. The analysis demonstrates that convergence of RBGS improves upon RGS when given an appropriate column-paving of the matrix, a partition of the columns into well-conditioned blocks. The main result yields an RBGS method that is more efficient than the classical RGS method, and bridges existing theory on the related block Kaczmarz method and the convergence analysis of classical RGS.
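A minimal sketch of the block iteration, assuming a consistent system and a given column paving (here just a hand-picked partition, not a paving constructed as in the paper): at each step a block of coordinates is chosen at random and the residual is minimized exactly over that block.

```python
import numpy as np

def rbgs(A, b, blocks, iters=200, seed=0):
    """Randomized Block Gauss-Seidel sketch for min ||Ax - b||_2.
    `blocks` is a list of disjoint column-index lists (a column paving)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        J = blocks[rng.integers(len(blocks))]   # pick a block uniformly
        r = b - A @ x                           # current residual
        # exact least-squares minimization over the chosen coordinate block
        x[J] += np.linalg.lstsq(A[:, J], r, rcond=None)[0]
    return x
```

Taking every block to be a single column recovers the classical RGS update; larger well-conditioned blocks give the improved per-iteration progress the paper analyzes.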

Comparing Language Use and Network Structure Using Twitter

Published electronically November 26, 2018
DOI: 10.1137/18S016655

Authors: Caitlin Shener, Brian Oceguera, and Sally Lee (UCLA)
Sponsor: Mason Porter (UCLA)

Abstract: We develop an approach for exploring questions regarding language use and connections between people in social networks. In particular, we investigate community structure and language usage in the network composed of the ninety-nine most-followed users on Twitter. Our goal is to measure the relation between a community of users and the words employed by those users. We accomplish the investigation using k-means clustering to group users based on word choice, and we use modularity maximization and InfoMap clustering to find communities based on network connections. Our study illustrates how to mathematically analyze and interpret language use within social network structure.

Optimal Net Energy and Foraging Behavior of Grus americana in Aransas National Wildlife Refuge

Published electronically November 30, 2018
DOI: 10.1137/18S017181

Authors: Kristen Lawler (Marist College) and Christian Schmidt (The College of Saint Scholastica)
Sponsor: John Alford (Sam Houston State University)

Abstract: Whooping cranes (Grus americana) became endangered in the late 1800s when population numbers dwindled to as low as only 15 wild birds. Today, the wild population has significantly rebounded, but remains threatened. The last surviving migratory flock of whooping cranes travels to Texas every year for the winter. When the cranes arrive they primarily forage for wolfberries, but as the berry population decreases the cranes' diet eventually switches to one of mainly blue crabs. Understanding how abiotic conditions such as drought and water level fluctuations affect these food resources is important to wildlife managers who are tasked with the protection of the cranes. In this paper, we formulate a dynamic model that predicts the net energy intake of the whooping crane with parameters that may be used to control both the abiotic and biotic characteristics of the ecosystem inhabited during winter. We optimize the model to establish the maximum net energy intake of the crane over one season in the Aransas National Wildlife Refuge and determine the point at which a crane will switch from foraging for berries to foraging for blue crab. This switching point gives insight into the cranes' foraging behavior, allowing us to have a better idea of what steps will be most effective moving forward if human action is needed for conservation of the species. Furthermore, the complete development and analysis within this paper will aid in determining whether or not sufficient resources exist in a crane's territory for it to sustain the winter and prepare for a successful flight back to breeding grounds.

Using a Density Dependent Population Model to Examine Sperm Whale Population Dynamics in the Gulf of Mexico Before and After an Environmental Disturbance

Published electronically November 30, 2018
DOI: 10.1137/18S01717X

Author: Mark Dibbs (University of Louisiana-Lafayette)
Sponsor: Amy Veprauskas (University of Louisiana-Lafayette)

Abstract: In this paper, we develop a density dependent model to describe sperm whale population dynamics in the Gulf of Mexico. For this model, we consider the stability of the extinction equilibrium and prove the existence and uniqueness of a positive equilibrium. We then examine the stability of the positive equilibrium and substantiate the results with numerical simulations using Matlab. Next, we consider the effect of a disturbance, such as the Deepwater Horizon oil spill, on the sperm whale population in the Gulf of Mexico. Describing a disturbance as an event that results in reductions in survival rates for a certain period of time, we examine the recovery time of a sperm whale population following a disturbance, which we define to be the length of time it takes the population to return to a certain percentage of its asymptotic equilibrium value. Through testing various recovery threshold values, reductions in survival rates, and lengths of time over which the survival rates are reduced, we find that the recovery time is sensitive to each of these variables; depending on the values of these three quantities, the recovery time can last anywhere between a few years and many centuries.
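The recovery-time notion defined in the abstract can be illustrated with a much simpler stand-in than the paper's density-dependent model: a logistic population knocked down by a disturbance, timed until it returns to a threshold fraction of its equilibrium. All parameter values here are illustrative, not estimates for sperm whales.

```python
def recovery_time(r=0.04, K=1.0, drop=0.5, threshold=0.9, dt=0.1):
    """Years until a logistic population n' = r*n*(1 - n/K), reduced by a
    disturbance to drop*K, recovers to threshold*K (forward Euler)."""
    n, t = drop * K, 0.0
    while n < threshold * K:
        n += dt * r * n * (1.0 - n / K)   # logistic growth step
        t += dt
    return t
```

Even this toy shows the sensitivity the paper reports: a slower growth rate or a deeper post-disturbance drop lengthens the recovery time substantially.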

A spectral element method for meshes with skinny elements

Published electronically December 6, 2018
DOI: 10.1137/18S017053

Author: Aaron Yeiser (MIT)
Sponsor: Alex Townsend (Cornell University)

Abstract: When numerically solving partial differential equations (PDEs), the first step is often to discretize the geometry using a mesh and to solve a corresponding discretization of the PDE. Standard finite and spectral element methods require that the underlying mesh have no skinny elements for numerical stability. Here, we present a novel spectral element method that is numerically stable on meshes containing skinny quadrilateral elements while also allowing high-degree polynomials on each element. Our method is particularly useful for PDEs for which anisotropic mesh elements are beneficial, and we demonstrate it with a Navier-Stokes simulation. The method is presented without analysis and demonstrated with Laplace-type operators. Code for our method can be found at https://github.com/ay2718/spectral-pde-solver.

Median Filtering by Threshold Decomposition: Induction Proof

Published electronically December 19, 2018
DOI: 10.1137/18S017120

Authors: Connor Bramham, Brandon Mayle, Maura Gallagher, Douglas Magruder, and Cooper Reep (Socrates Prep)
Sponsor: Neal Gallagher (Socrates Prep)

Abstract: In building a robot for the FTC competition, our team needed to remove motor noise from our sensor signals. We settled on a median filter because of the median's superior removal of impulsive noise. For us, however, the foundational publications that describe these filters were challenging to understand. Having learned the concept of proof by induction from the MIT OpenCourseWare course “Mathematics for Computer Science” (MIT Course Number 6.042J / 18.062J), we developed an original proof of the principle of median filter threshold decomposition in order to better understand these filters' operation. The induction is over the number of quantized threshold levels for the sequence of input values, as applied to both the standard and recursive median filter.
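The threshold-decomposition principle the team proved is easy to check numerically: slice an M-level signal into binary signals at each threshold, median-filter each slice, and the elementwise sum equals the median filter of the original. A sketch for the standard (non-recursive) filter, assuming replicated-edge padding:

```python
from statistics import median

def median_filter(x, w=3):
    """Standard median filter with odd window w; edges replicated."""
    r = w // 2
    padded = [x[0]] * r + list(x) + [x[-1]] * r
    return [median(padded[i:i + w]) for i in range(len(x))]

def threshold_decompose(x, levels):
    """Binary slice of x at each threshold m = 1, ..., levels - 1."""
    return [[1 if v >= m else 0 for v in x] for m in range(1, levels)]
```

Because the median is an order statistic, filtering commutes with thresholding, so summing the filtered binary slices reconstructs the filtered signal exactly — the base fact behind the induction over quantization levels.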