SIAM Undergraduate Research Online (SIURO)
Undergraduate Student Research
Estimation of Unmodeled Gravitational Wave Transients with Spline Regression and Particle Swarm Optimization
Published electronically January 26, 2015
Author: Calvin Leung (Harvey Mudd College, Claremont, CA)
Sponsor: Soumya Mohanty (University of Texas, Brownsville, Brownsville, TX)
Abstract: Detecting and estimating unmodeled transient gravitational wave (GW) signals in noisy data is a major challenge in GW data analysis. This paper explores a solution that combines spline-based regression with Particle Swarm Optimization for knot placement and directional parameter estimation. First, the estimation of binary black hole merger signals in data from a single GW detector is used as a testbed problem to quantify the relative performance of several algorithmic design choices. The method resulting from this study is then adapted to the case of data from a network of geographically distributed GW detectors. Simulation results show fairly good directional estimates for black hole mergers, with reasonable fidelity in the reconstruction of both GW polarization waveforms, at a signal-to-noise ratio capped at 15 for any single detector in the network. This promising performance suggests that the method should be developed further and applied to other types of GW transients.
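The generic Particle Swarm Optimization step the paper relies on can be sketched in a few lines (an illustrative, minimal PSO on a toy quadratic objective; the inertia and attraction weights and the test function are our own choices, not the paper's GW-specific setup):

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal Particle Swarm Optimization: each particle tracks its personal
    best, and the swarm shares a global best (generic sketch, not the
    paper's GW-specific objective)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

# Smoke test: minimize a simple quadratic bowl.
best, val = pso(lambda x: sum(t * t for t in x), dim=3)
```

In the paper this optimizer searches over spline knot placements and directional parameters rather than a quadratic bowl.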
Published electronically January 30, 2015
Authors: Melissa Jay, Venkatasai Ganesh Karapakula, and Emma Krakoff (Colorado College, Colorado Springs, CO)
Sponsor: Amelia Taylor (Colorado College, Colorado Springs, CO)
Abstract: We develop a mathematical model that determines the "best all-time college coach(es)" of the past century in a given sport. We propose ranking college coaches through Markov chain-based aggregation of ranked lists using holistic criteria. Our model synthesizes four full or partial ranked lists based on win percentages, victories, career durations, and effort levels to produce the final comprehensive rankings. As a demonstration, we determine that Ron Mason, Augie Garrido, and Gus Donoghue are the top all-time college coaches of the past century in NCAA Division I men's ice hockey, baseball, and men's soccer, respectively. Our general model is applicable not only across all possible sports but also to both male and female coaches. Additionally, it accounts for differences among coaches in their coaching time-periods.
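Markov chain-based aggregation of ranked lists can be illustrated with a small sketch in the style of the MC4 chain of Dwork et al.: from item i the chain jumps to item j if a majority of lists rank j above i, and the stationary distribution scores the items. The paper's exact chain and criteria may differ; the damping constant and the toy lists below are our own:

```python
import numpy as np

def mc4_aggregate(ranked_lists, items, damping=0.85, iters=200):
    """Illustrative Markov-chain rank aggregation (MC4-style; the paper's
    exact chain may differ). From item i, jump to item j if a majority of
    lists rank j above i; the stationary distribution scores the items."""
    n = len(items)
    idx = {it: k for k, it in enumerate(items)}
    pos = [{it: r for r, it in enumerate(lst)} for lst in ranked_lists]
    P = np.zeros((n, n))
    for i in items:
        for j in items:
            if i == j:
                continue
            # Count lists preferring j over i (unranked items count as last).
            wins = sum(1 for p in pos if p.get(j, n) < p.get(i, n))
            if wins > len(ranked_lists) / 2:
                P[idx[i], idx[j]] = 1.0 / n
        P[idx[i], idx[i]] = 1.0 - P[idx[i]].sum()   # otherwise stay put
    P = damping * P + (1.0 - damping) / n           # damping ensures ergodicity
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                          # power iteration
        pi = pi @ P
    return sorted(items, key=lambda it: -pi[idx[it]])

order = mc4_aggregate([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]],
                      ["A", "B", "C"])
```

Here "A" tops two of the three lists and wins every pairwise majority, so it collects the most stationary mass.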
Published electronically February 2, 2015
Author: George Hou (Arcadia High School, Arcadia, CA)
Sponsor: Jack Xin (University of California at Irvine, Irvine, CA)
Abstract: In this paper, we propose and analyze a class of blind source separation (BSS) methods to recover mixed signals in a noisy environment. Blind source separation aims at recovering source signals from their mixtures without detailed knowledge of the mixing process. Motivated by prior work on this problem, we propose a new optimization method based on second order statistics that considers the impact of Gaussian noise. By treating the Gaussian noise as a separate source signal and using an extra measurement of the mixed signals, we formulate the source separation problem as a global optimization problem that minimizes the cross-correlation of the recovered signals. In the case when the cross-correlation of the source signals is exactly zero, we give precise solvability conditions and prove that our global optimization method gives an exact recovery of the original source signals up to a scaling and permutation. In the case when the cross-correlation is small but nonzero, we perform stability and error analysis to show that the global optimization method still gives an accurate recovery with a small error. We also analyze the solvability for the two-signal case when the mixing matrix is degenerate. To the best of our knowledge, this is the first error analysis of BSS methods. The numerical results using realistic signals confirm our theoretical findings and demonstrate the robustness and accuracy of our methods.
Published electronically February 2, 2015
Author: Rebekah Coggin (Calvin College, Grand Rapids, MI)
Sponsor: Todd Kapitula (Calvin College, Grand Rapids, MI)
Abstract: This paper presents a method of numerically computing zeros of an analytic function for the specific application of computing eigenvalues of the Sturm-Liouville problem. The Sturm-Liouville problem is an infinite dimensional eigenvalue problem that often arises in solving partial differential equations, including the heat and wave equations. To compute eigenvalues of the Sturm-Liouville problem, we construct the Evans function, whose zeros correspond to eigenvalues of the Sturm-Liouville problem. Our method requires defining a contour integral based on a rough approximation of the zero. To apply this method to find zeros of the Evans function, we make rough approximations of the zeros via a finite difference calculation for eigenvalues of the Sturm-Liouville problem. For cases where the exact zeros are known, we do a comparison to find that the numerical method in this paper has an error as small as O(10^-16).
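The contour-integral idea can be sketched as follows: by the argument principle, a single simple zero z0 of an analytic function f inside a closed contour satisfies z0 = (1/2πi) ∮ z f'(z)/f(z) dz. A minimal numerical version, using our own toy example f(z) = z² + 1 rather than the Evans function:

```python
import numpy as np

def zero_inside(f, df, center, radius, n=2000):
    """Locate a single simple zero of analytic f inside a circular contour
    via the argument principle: z0 = (1/2*pi*i) * contour integral of
    z * f'(z)/f(z).  Illustrative sketch, not the paper's full method."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)       # points on the contour
    dz = 1j * radius * np.exp(1j * t)          # dz/dt along the contour
    integrand = z * df(z) / f(z) * dz
    # Trapezoid rule on a periodic integrand: 2*pi*mean, then divide by 2*pi*i.
    return integrand.mean() / 1j

f = lambda z: z * z + 1.0                      # zeros at +i and -i
df = lambda z: 2.0 * z
z0 = zero_inside(f, df, center=1j, radius=0.5)
```

The trapezoid rule converges spectrally fast on periodic integrands, so even a modest number of contour points recovers the zero to near machine precision.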
A New Look at the St. Petersburg Paradox
Published electronically February 3, 2015
Author: Eden Foley (Northern Kentucky University, Highland Heights, KY)
Sponsor: Dhanuja Kasturiratna (Northern Kentucky University, Highland Heights, KY)
Abstract: The infinite expected value of the St. Petersburg Paradox has been a source of contention within probability theory since its inception in the early 18th century. This work avoids supposition and instead focuses on empirical evidence generated through simulation. Armed with sufficient evidence, this work models the sampling distribution of the St. Petersburg Paradox's mean. This model allows a prospective gambler or casino owner to know whether to partake in the game at a given price. In addition, the resulting model proves highly adaptable to other similar distributions. A potential application of this work to earthquake magnitudes is also discussed.
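The simulation at the heart of the paper is easy to reproduce in miniature (an illustrative sketch; the paper's sample sizes and modeling details differ). The pot starts at $2 and doubles on every tail before the first head, so the expected payout diverges, yet sample means of finite runs stay modest:

```python
import random

def st_petersburg_payout(rng):
    """One play: the pot starts at 2 and doubles for every tail that
    precedes the first head."""
    pot = 2
    while rng.random() < 0.5:   # tail: double the pot and flip again
        pot *= 2
    return pot

def sample_mean(n_games, seed=0):
    rng = random.Random(seed)
    return sum(st_petersburg_payout(rng) for _ in range(n_games)) / n_games

# Despite the infinite expectation, the sample mean of n games grows only
# on the order of log2(n).
m = sample_mean(100_000)
```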
Published electronically March 18, 2015
Author: Matthew McCurdy (Centre College, Danville, KY)
Sponsor: Ellen Swanson (Centre College, Danville, KY)
Abstract: In this paper, we explore fluid flow caused by the presence of an insoluble surfactant on a thin, incompressible power-law fluid over a horizontal substrate. The gradient in surface tension caused by the surfactant results in fluid flow away from the region where the surfactant was deposited. Work has been conducted with Newtonian fluids and surfactants; however, the effect of surfactants on non-Newtonian fluids has not been studied as thoroughly. Using the lubrication approximation, we derive a system of coupled nonlinear partial differential equations (PDE) governing the evolution of the height of the fluid and the spreading of the surfactant. We also numerically simulate our system with a finite difference method and vary the power-law index to explore differences in profiles of shear-thickening and shear-thinning fluids. Next, we find significant agreement between our results and previous studies involving Newtonian fluids with power-law relations. Finally, we determine similarity scalings and solutions around the leading edge of the surfactant, which describe the behavior of the fluid and surfactant towards the region of the fluid where the surfactant ends.
Published electronically March 25, 2015
Author: Jiechen Chen (University at Buffalo, State University of New York, Buffalo, NY)
Sponsor: Gino Biondini (University at Buffalo, State University of New York, Buffalo, NY)
Abstract: Infectious diseases that are spread through human contact can progress very rapidly in a population. One of the key factors in the spreading of contagion, and a main concern in attempting to stop the spread of illness, is the particular configuration of links among individuals in local communities within the larger population. This study uses a detailed individual-based, three-partite model comprising about 245,000 individuals located in an urban area in the Northeastern United States. Interactions among individuals are divided into family, workplace and pastime (service places, shopping, etc.), each occurring during a separate time period (daytime, pastime, and nighttime). Thus, the network allows one to model the spatial and temporal heterogeneity in the transmission of communicable diseases and to capture the differences between various individuals' vulnerability to infection. We performed Monte-Carlo simulations of the spreading of influenza through this network. Simulation results correspond well to the reported epidemic information. Results also demonstrate a temporal and population threshold which, if exceeded, results in the long-term spread of infection. We expect that the findings will offer a valuable platform to devise spatially and temporally oriented control and intervention strategies for communicable diseases.
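A drastically simplified Monte-Carlo sketch of contagion through random contacts conveys the flavor of such simulations. The paper's three-partite, 245,000-individual network with family/workplace/pastime structure is far richer; every parameter below is invented for illustration:

```python
import random

def simulate_sir(n, contacts_per_day, p_transmit, p_recover, days, seed=0):
    """Toy Monte-Carlo SIR epidemic with uniformly random daily contacts.
    A stand-in for the paper's structured three-partite contact network."""
    rng = random.Random(seed)
    state = ["S"] * n          # Susceptible / Infected / Recovered
    state[0] = "I"             # one index case
    history = []
    for _ in range(days):
        infected = [i for i, s in enumerate(state) if s == "I"]
        for i in infected:
            for _ in range(contacts_per_day):
                j = rng.randrange(n)
                if state[j] == "S" and rng.random() < p_transmit:
                    state[j] = "I"
            if rng.random() < p_recover:
                state[i] = "R"
        history.append(sum(s == "I" for s in state))
    return history

curve = simulate_sir(n=1000, contacts_per_day=5, p_transmit=0.05,
                     p_recover=0.2, days=60)
```

Varying the transmission and recovery parameters around the threshold value contacts_per_day * p_transmit / p_recover = 1 reproduces the qualitative take-off/die-out dichotomy the abstract describes.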
Published electronically April 14, 2015
Author: Tahseen Rabbani (University of Virginia, Charlottesville, VA)
Sponsor: James Davis (University of Richmond, Richmond, VA)
Abstract: In 2011, Samsung Electronics Co. filed a complaint against Apple Inc. for alleged infringement of patents described in US 7706348, which details several embodiments of a TFCI (Transport Format Combination Indicator) encoder for mobile communication systems. One of the primary embodiments in question was a [30, 10, 10] non-cyclic code which was implemented in many devices communicating on the 3G network, including several Apple products. However, the derivation of the basis for this code is left rather vague in the patent documentation. In this paper, the explicit construction of a [30, 10, 10] cyclic code is detailed using methods described by F.J. MacWilliams and N.J.A. Sloane in their well-known text, "The Theory of Error-Correcting Codes." We also give a construction of an optimal [30, 10, 11] non-cyclic code, which is distinct from the conventional and well-known construction involving manipulations of an extended BCH code.
Numerical Computation of Wave Breaking Times
Published electronically April 22, 2015
Author: Ravi Shankar (California State University, Chico, CA)
Sponsor: Sergei Fomin (California State University, Chico, CA)
Abstract: The time of nonlinear wave collapse is computed numerically for the Hopf equation. Previous numerical criteria for locating the time of wave collapse are either computationally prohibitive to implement or give erroneous results. A new criterion for this purpose is developed analytically using asymptotic analysis of the wave shock development shortly after breaking. The criterion defines the wave breaking time as the onset of energy dissipation. This onset results in a singularity in the third derivative of the energy function, the location of which yields the breaking time. A numerical criterion is formulated from this analytical result and tested against the exact analytical value of the breaking time. This is done by first solving the differential equation with a finite difference method. Then, to compensate for numerical error, a moving average method is developed to refine the energy data. The obtained results give visible convergence to the analytical breaking time as the numerical mesh is refined.
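For context, the Hopf equation u_t + u u_x = 0 admits an exact breaking time via characteristics, t_b = -1/min_x u0'(x), which is the analytical value numerical criteria are tested against. A minimal check for the initial profile u0(x) = sin(x), where t_b = 1 (our own toy example):

```python
import numpy as np

# For the Hopf equation u_t + u*u_x = 0, characteristics cross at
# t_b = -1 / min_x u0'(x).  For u0(x) = sin(x), min u0' = -1 at x = pi,
# so the wave breaks at t_b = 1.
x = np.linspace(0.0, 2.0 * np.pi, 10_001)
u0 = np.sin(x)
du0 = np.gradient(u0, x)          # finite-difference slope of the initial data
t_break = -1.0 / du0.min()
```

The second-order finite-difference slope reproduces the exact breaking time to within the O(h²) truncation error of the grid.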
Published electronically June 9, 2015
Authors: Oluwapelumi Adenikinju (University of Maryland, Baltimore County), Julian Gilyard (Wake Forest University), Joshua Massey (University of Maryland, Baltimore County), Thomas Stitt (Pennsylvania State University)
Sponsor: Matthias K. Gobbert (University of Maryland, Baltimore County)
Abstract: We investigate parallel solutions to linear systems, with an application focus on the global illumination problem in computer graphics. An existing CPU serial implementation using the radiosity method is given as the performance baseline, where a scene and corresponding form-factor coefficients are provided. The initial computational radiosity solver uses the basic Jacobi method with a fixed iteration count as an iterative approach to solving the radiosity linear system. We add the option of using the modern BiCG-STAB method with the aim of reduced runtime for complex problems. It is found that for the test scenes used, the problem complexity was not great enough to take advantage of mathematical reformulation through BiCG-STAB. Single-node parallelization techniques are implemented through OpenMP-based multi-threading, GPU offloading using CUDA, and hybrid multi-threading/GPU offloading. In general, OpenMP proves optimal because it requires no expensive memory transfers. Finally, we investigate two storage schemes of the system to determine whether storage through arrays of structures or structures of arrays results in better performance. We find that the usage of arrays of structures in conjunction with OpenMP results in the best performance except for small scene sizes, where CUDA shows the minimal runtime.
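The baseline solver described above is the classic fixed-iteration Jacobi method. A minimal serial sketch (illustrative only, applied to a small diagonally dominant system rather than a radiosity matrix):

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Basic fixed-iteration Jacobi solve of A x = b: split A into its
    diagonal D and remainder R, then iterate x <- D^{-1}(b - R x).
    Serial illustrative version of the baseline solver described above."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Small diagonally dominant test system (Jacobi converges for such systems).
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

Each Jacobi sweep is an independent update per row, which is what makes the method straightforward to parallelize with OpenMP threads or a CUDA kernel.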
Published electronically June 29, 2015
Author: David Wolfe (St. Francis University, Loretto, PA)
Sponsor: Ying Li (St. Francis University, Loretto, PA)
Abstract: Acid mine drainage (AMD) is the outflow of acidic water from metal mines or coal mines. When exposed to air and water, metal sulfides from the deposits of the mines are oxidized and produce acid, metal ions and sulfate, which lower the pH value of the water. An open limestone channel (OLC) is a passive and low cost way to neutralize AMD. A mathematical model has been created to numerically determine the change in pH of the water and the concentrations of species from the dissolution of calcium on the surface of the limestone into the acidic water. The model is used to predict the conditions in which an OLC would be an effective solution for AMD. Effective ranges are determined for the concentrations of calcium and iron, as well as the temperature and velocity of the water.
Quantifying Option Implications
Published electronically July 1, 2015
Authors: Michael Bauer (Clarion University of Pennsylvania), Xiaowen Chang (University of Illinois at Urbana-Champaign), and Michael Conway (Emory University)
Sponsor: Qin Lu (Lafayette College)
Abstract: We introduce relevant financial concepts and describe how mathematical tools can be used to extract information about the market’s expectations and risk preferences from daily, observable options market prices on the S&P 500 index. This information takes the form of a probability density function, known as the Risk-Neutral Density (RND). Assuming no prior knowledge, we introduce our major tools, including splines and the Generalized Extreme Value (GEV) Distributions, and show how they can be used in a financial context. Finally, we illustrate some of the applications of the RND.
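One standard route from observed call prices to an RND is the Breeden-Litzenberger relation q(K) = e^{rT} ∂²C/∂K²; the paper builds on this foundation with splines and GEV tails. A sketch using Black-Scholes prices as stand-ins for market quotes (all parameter values here are our own, purely for illustration):

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price, standing in for observed market quotes."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, T, r, sigma = 100.0, 0.5, 0.01, 0.2
K = np.linspace(60.0, 160.0, 501)
C = np.array([bs_call(S, k, T, r, sigma) for k in K])
dK = K[1] - K[0]
# Breeden-Litzenberger: RND = discounted second strike-derivative of price.
q = np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)
# The density should integrate to about 1 over a wide strike range.
mass = float((0.5 * (q[1:] + q[:-1]) * dK).sum())
```

With noisy real quotes the finite-difference second derivative is unstable, which is why smoothing splines (and GEV tails beyond the quoted strikes) are used in practice.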
M3 Challenge Introduction
M3 Challenge Problem 2015
Published electronically July 7, 2015
Authors: Michael An, Guy Blanc, Evan Liang, Sandeep Silwal, and Jenny Wang (North Carolina School of Science and Mathematics, Durham, NC)
Sponsor: Daniel Teague (North Carolina School of Science and Mathematics, Durham, NC)
Summary: Senior year is often the most anticipated year for high school students nationwide. The independence and freedom that come with graduation lead into an exciting time for change. Perhaps the most important decision is deciding whether or not to apply for college, which has not only a growing price tag but also a hefty opportunity cost – the forgone income from getting a job straight out of high school. For many families, college is a burden, especially because college sticker prices can be misleading. In addition to the question of whether attending college is worth it or not, students are forced to consider the pros and cons of different career fields. STEM industries are growing rapidly and are touted by the media to have greater financial return and higher job stability. How can high school students make the right decisions that will ensure that they reach their targeted quality of life in the future?
Our job was to develop a mathematical model that can be used as a tool to help students evaluate different higher education choices, such as STEM vs. non-STEM majors and 4-year degrees vs. 2-year associate degrees. The first step in this process is to give a more accurate summary of how much attending college would cost. The current method of determining this value is by using the EFC (Expected Family Contribution) value. However, the EFC does not account for the amount of loans one would be expected to pay back or the yearly increases in the cost of college. Our college cost metric accounts for both of these factors, as well as different kinds of higher education plans and forgone working time. Surprisingly, for most students, this loss in the form of monetary wages, approximately $15,750 per year, is the largest cost of attending college. Based on this information, a student and his/her family can decide whether the student should pursue a degree, and if so, what type of degree.
We then sought to create a model that evaluates the costs and rewards of pursuing a STEM degree as compared to other higher education choices. We created a simulation that measured the amount of money that students with different degrees would earn, taking into account factors such as unemployment and inflation. We observed that STEM degrees generally yield higher returns than non-STEM and associate degrees, and all three tend to earn more money than a high school diploma.
Finally, we devised a tool that could help students determine what field of higher education they should enter, if they do decide to enroll in post-secondary education. The tool considers not only a student's personal career field preference but also job satisfaction factors such as level of responsibility, opportunity of advancement, location and contribution to society. Therefore, this important life decision will be based not only on personal interests or monetary compensation, but also on other important but oft-forgotten factors that might affect the quality of life.
Our model accurately evaluates various objective criteria and provides means by which a student can incorporate personal preference for degree options; however, higher education is ultimately a very personal decision, and some students may opt out no matter how financially beneficial enrolling would be.
Published electronically July 14, 2015
Author: Hayley Tomkins (Dalhousie University)
Sponsor: Theodore Kolokolnikov (Dalhousie University)
Abstract: We consider a particle predator-swarm model introduced in earlier work. In the continuum limit of many prey particles, we develop a numerical method which tracks the boundary of the swarm. We use this method to explore the variety and complexity of swarm shapes. We also consider a special limiting case where the predator is moving inside an infinite sea of prey. Two subcases are studied: one where the predator is moving along a straight line and another where the predator is moving in a circle. We observe various topological changes in the swarm shape as the predator speed increases, such as the appearance of an infinite tail for a predator moving in a straight line when its speed is large enough.
Published electronically August 26, 2015
Author: Thomas Wester (United States Naval Academy)
Sponsor: Sonia Garcia (United States Naval Academy)
Abstract: Ebola is known to evade detection by the immune system during infection. In this paper, we use mathematical modeling as a tool to investigate and analyze the immune system dynamics in the presence of Ebola virus infection. The resulting model is a system of non-linear ordinary differential equations derived from known biological dynamics and a few biologically reasonable assumptions. In this paper, we prove existence and uniqueness as well as positivity and boundedness of the solutions to the differential equations. In addition, we derive the viral and immune reproduction numbers, and analyze the local asymptotic stability of the differential equation model. Furthermore, we run numerical simulations to illustrate the impact the variation of the parameters has on the behavior of the system. The analysis we develop provides thresholds for both determining the persistence and elimination of Ebola virus from the immune system, and represents the known biological dynamics of Ebola virus infection.
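A minimal within-host viral-dynamics system (target cells T, infected cells I, virus V) integrated with forward Euler conveys the flavor of such models. The paper's system includes immune-response compartments and differs in detail; all parameter values below are invented for illustration:

```python
# Toy target-cell-limited viral dynamics (not the paper's Ebola model):
#   T' = -beta*T*V,  I' = beta*T*V - delta*I,  V' = p*I - c*V
beta, delta, p, c = 1e-6, 1.0, 100.0, 5.0
T0 = 1e5
T, I, V = T0, 0.0, 10.0

# Viral reproduction number for this toy model; R0 > 1 means the
# infection takes off, R0 < 1 means it is cleared.
R0 = beta * p * T0 / (delta * c)

dt, steps = 0.001, 30_000          # forward Euler over 30 days
peak_V = V
for _ in range(steps):
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    peak_V = max(peak_V, V)
```

With these parameters R0 = 2, so the viral load grows, peaks as target cells are depleted, and then declines, mirroring the persistence/elimination threshold behavior the abstract describes.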
Transitions in a Metastable Neuronal Network
Published electronically November 3, 2015
Author: Anthony Trubiano (Rensselaer Polytechnic Institute)
Sponsor: Peter Kramer (Rensselaer Polytechnic Institute)
Abstract: Recent experimentation has found that while wake bout durations in newborn rats follow an exponential distribution, those of older rats follow a power law distribution. Understanding possible explanations for this phenomenon requires understanding how transitions occur in metastable systems. Here we review some useful mathematical results and terminology in relation to networks and explore how several different network structures affect the dynamics of this sleep-wake system. We also employ several methods to identify possible transition mechanisms to help uncover biological and mathematical reasons for the change in bout distributions. Studying each mechanism, we relate the most prevalent transition mechanism for a system with a given network architecture to the type of bout distribution observed, and correlate power law behavior to a gradual degradation mechanism and exponential behavior to a mechanism involving the firing of well-connected nodes.
Insights into the Computation for π
Published electronically November 10, 2015
Authors: Joshua C. Fair, Neal C. Gallagher III, Maura Gallagher, Nicholas M. Fair, Jason Cannon-Silber, Brandon W. Mayle, Samuel J. Konkol (Socrates Preparatory School)
Sponsor: Neal C. Gallagher (Socrates Preparatory School)
Abstract: Searching for new ways to compute the number π leads us to an analysis of the geometric approach of Archimedes and the recursive Calculus approach of Newton-Raphson. Archimedes estimates π by inscribing a regular polygon in a circle and approximating the circumference of the circle using the perimeter of the polygon. Unlike Archimedes, we propose a method using irregular polygons. When the computations are performed with an equal number of sides, the irregular polygon method produces a more accurate value for π than does the regular polygon method. The Newton-Raphson analysis leads to several interesting results. First, we find a rapidly converging recursive formula that computes π to eight decimal places after only three iterations. Second, the analysis leads to a Fourier series expansion for g(x) = arcsin(sin(x)). This Fourier series results in several numerical series that can be used to compute π.
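Archimedes' regular-polygon scheme is only a few lines of code: for a polygon inscribed in a unit circle, doubling the number of sides transforms the side length by s_{2n} = √(2 − √(4 − s_n²)), and half the perimeter, n·s/2, approaches π from below. Starting from a hexagon of side 1:

```python
from math import sqrt

# Inscribed regular polygon in a unit circle: start from a hexagon
# (6 sides of length 1) and double the side count repeatedly using
#   s_{2n} = sqrt(2 - sqrt(4 - s_n^2)).
# Half the perimeter, n*s/2, approaches pi from below.
s, n = 1.0, 6
for _ in range(10):            # 6 -> 6144 sides
    s = sqrt(2.0 - sqrt(4.0 - s * s))
    n *= 2
pi_estimate = n * s / 2.0
```

Ten doublings already agree with π to about seven decimal places; many more doublings eventually suffer cancellation in the nested square roots, which is one motivation for algebraically rearranged variants.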
A Reliability Analysis of Personnel Protection Systems at the Spallation Neutron Source
Published electronically December 3, 2015
Author: Adam Spannaus (University of Tennessee)
Sponsor: Kelly Mahoney (Oak Ridge National Laboratory)
Abstract: In this paper we present and analyze field gathered system reliability data from the Oxygen monitoring and Access Control systems managed by the Protection Systems team at the Spallation Neutron Source at the Oak Ridge National Laboratory. The time span of the data varies from roughly fourteen years for systems in place as the linear accelerator was being built and tested, to a few months for the newest instrument beam lines. We analyzed the time to fail for the above systems, accounting for censored data, and developed a nonparametric probability model of the overall reliability for each system. From this probabilistic model, we were able to estimate the survivor and hazard functions, and find good correlation between these estimates and their empirical counterparts. Moreover, we used our model to find whether or not each system’s life distribution is New Better than Used.
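A standard nonparametric estimate of the survivor function from right-censored failure data is the Kaplan-Meier product-limit estimator; the paper's nonparametric model is in this spirit, though its exact construction may differ. A small sketch with invented failure times:

```python
def kaplan_meier(times, observed):
    """Kaplan-Meier product-limit estimate of the survivor function S(t)
    from possibly right-censored failure times.  observed[i] is False when
    unit i was withdrawn (censored) while still working."""
    event_times = sorted(set(t for t, d in zip(times, observed) if d))
    S, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for u in times if u >= t)
        deaths = sum(1 for u, d in zip(times, observed) if d and u == t)
        S *= 1.0 - deaths / at_risk       # product-limit update
        curve.append((t, S))
    return curve

# Invented data: failures at 2, 3, 5, 10; censored withdrawals at 3 and 8.
curve = kaplan_meier([2, 3, 3, 5, 8, 10],
                     [True, True, False, True, False, True])
```

Censored units still count in the at-risk set up to their withdrawal time, which is how the estimator uses partial information from systems that never failed during observation.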
Published electronically December 10, 2015
Authors: Weronika J. Swiechowicz and Yuanfang Xiang (Illinois Institute of Technology)
Sponsor: Sonja Petrovic (Illinois Institute of Technology)
Abstract: Given observed data, the fundamental task of statistical inference is to understand the underlying data-generating mechanism. This task usually entails several steps, including determining a good family of probability distributions that could have given rise to the observed data, and identifying the specific distribution from that family that best fits the data. The second step is usually called parameter estimation, where the parameters are what determines the specific distribution. In many instances, however, estimating parameters of a statistical model poses a significant challenge for statistical inference. Currently, there are many standard optimization methods used for estimating parameters, including numerical approximations such as the Newton-Raphson method. However, they may fail to find a correct set of maximum values of the function and draw incorrect conclusions, since their performance depends on both the geometry of the function and location of the starting point for the approximation. An alternative approach, used in the field of algebraic statistics, involves numerical approximations of the roots of the critical equations by the method of numerical algebraic geometry. This method is used to find all critical points of a function, before choosing the maximum value(s). In this paper, we focus on estimating correlation coefficients for multivariate normal random vectors when the mean is known. The bivariate case was solved in 2000 by Small, Wang and Yang, who emphasize the problem of multiple critical points of the likelihood function. The goal of this paper is to consider the first generalization of their work to the trivariate case, and offer a computational study using both numerical approaches to find the global maximum value of the likelihood function.
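The pitfall described above — Newton-Raphson stopping at a local or spurious critical point depending on the starting guess — is easy to demonstrate on a toy "likelihood" with several critical points. The function and starting points below are our own illustration, not the paper's trivariate model:

```python
def newton(g, dg, x0, tol=1e-12, maxit=100):
    """Newton-Raphson on g(x) = 0 (here g is a score equation f'(x) = 0)."""
    x = x0
    for _ in range(maxit):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy "likelihood" f with three critical points: two local maxima and a
# local minimum.  A single Newton run can land on any of them.
f  = lambda x: -(x * x - 1.0) ** 2 + 0.5 * x
g  = lambda x: -4.0 * x * (x * x - 1.0) + 0.5        # f'
dg = lambda x: -12.0 * x * x + 4.0                   # f''
roots = {round(newton(g, dg, x0), 6) for x0 in (-2.0, -0.2, 0.3, 2.0)}
best = max(roots, key=f)                             # pick the global maximum
```

Multi-start Newton only finds the critical points its starts happen to reach; the numerical-algebraic-geometry approach mentioned above instead certifies that all solutions of the critical equations have been found before the maximum is chosen.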