Hello! Thanks for visiting my webpage! I am a PhD student in the College of Optical Sciences (OSC) at the University of Arizona.
I completed my undergraduate degrees in physics and computer science at the University of California, Berkeley.
I am most excited by projects in computational imaging and computer graphics that aim to produce new functionalities, modalities, and theories!
More generally, I love learning about topics that are highly visual, geometric, and mathematical.
I am currently conducting research in the Intelligent Imaging and Sensing Lab led by Professor Amit Ashok. My work integrates compressive sensing and quantum parameter estimation to achieve generalized super-resolution imaging of incoherent scenes.
Within my graduate college, I am co-president of the OSC Journal Club (check out the archive of past presentations!) - an organization that seeks to reinforce peer-to-peer pedagogy while informing students about the most exciting developments in optics.
I am also an NSF GRFP fellow funded by grant No. DGE-2137419. Outside of academics you will probably find me reading fantasy, making music, rock climbing, playing soccer, or exploring mountain roads on my motorcycle.
PORTFOLIO
RESEARCH
My past research experiences span a range of fields including quantum optics, computational imaging, compressive sensing, magnetic levitation, and fluid dynamics. The project descriptions provide a general overview of the motivations and central results of each study. If you find your interest piqued, click on the image headings to access a publication on the project!
Discovering exoplanets in orbit around distant stars via direct imaging is fundamentally impeded by the high dynamic range between the star and the planet. Coronagraphs strive to increase the signal-to-noise ratio of exoplanet signatures by optically rejecting light from the host star while leaving light from the exoplanet mostly unaltered. However, it is unclear whether coronagraphs are an optimal strategy for attaining the fundamental limits relevant to exoplanet discovery. In this work, we report the quantum information limits of exoplanet detection and localization, specified by the Quantum Chernoff Exponent (QCE) and the Quantum Fisher Information Matrix (QFIM), respectively. In view of these quantum limits, we assess and compare several high-performance coronagraph designs that theoretically achieve total rejection of an on-axis point source. We find that systems which exclusively eliminate the fundamental mode of the telescope without attenuating higher-order orthogonal modes are quantum-optimal in the regime of high star-planet contrasts. Importantly, the QFIM is shown to persist well below the diffraction limit of the telescope, suggesting that quantum-optimal coronagraphs may further expand the domain of accessible exoplanets.
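For reference, the standard definitions of these two quantities (general quantum detection and estimation theory, not expressions specific to this work) are:

```latex
% Quantum Chernoff Exponent for discriminating hypothesis states rho_0 (no planet) and rho_1 (planet present):
\xi_{\mathrm{QCE}} = -\log \min_{0 \le s \le 1} \mathrm{Tr}\!\left[\rho_0^{\,s}\,\rho_1^{\,1-s}\right]

% Quantum Fisher Information Matrix for parameters (theta_1, ..., theta_k), with symmetric
% logarithmic derivatives L_i defined by  \partial_i \rho = (L_i \rho + \rho L_i)/2:
[\mathcal{K}(\boldsymbol{\theta})]_{ij} = \mathrm{Re}\,\mathrm{Tr}\!\left[\rho\, L_i L_j\right]
```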
We explore using optical mode-sorting measurements to estimate the state of an oscillating optomechanical membrane. The estimation parameters of interest are the time-dependent amplitude coefficients of the membrane's transverse vibrational modes. We also consider the effect of measurement back-action induced by radiation pressure in our parameter estimates by postulating a quantum Hamiltonian for the system. In principle, this work could find applications in imaging-based (cavity-free) optomechanical metrology and active membrane cooling using feedback and structured illumination.
We implement a direct imaging coronagraph that rejects all light from an on-axis star using a double-pass spatial mode sorter. Our experimental setup can precisely localize exoplanets below the diffraction limit at 1000:1 star-planet contrast.
Multi-aperture systems have been used to enhance the resolution of classical imaging systems while circumventing the engineering challenges associated with making large contiguous optical components. Here we demonstrate the advantage of quantum modal imaging applied to multi-aperture systems for estimating the locations of point sources in a sub-Rayleigh constellation.
Interestingly, the relative performance advantage of modal imaging over direct detection improves as the constellation separation distances become smaller and the system becomes more photon-starved.
Mask-based computational cameras are uniquely capable of recovering 3D information from a single multiplexed 2D sensor measurement. This research seeks to augment the 3D reconstruction volumes of mask-based cameras by employing multi-sensor arrays to synthesize an enlarged sensing area.
Under this objective, my work has primarily addressed theory, simulation, and prototyping. I derived a guiding mathematical model motivating our exploration of large-sensor platforms within a ray-optics framework. Improvements predicted by our model include enhanced voxel resolution and extended depth-of-field for 3D reconstruction volumes. I also wrote data pipelines that interface Matlab and Python with Zemax OpticStudio to explore the design space for sensor arrays. These pipelines characterized design performance by comparing simulated 3D scenes and measurements with the corresponding reconstructions.
Informed by these simulated heuristics, we developed a prototype camera featuring a 2x2 sensor array which successfully recovers wide-field images with merely 8.6% of the pixels required for a conventional lensed system! To achieve this, I adapted existing DiffuserCam reconstruction algorithms and calibration methods to address structured erasures inherent in multi-sensor measurements.
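For intuition (this is not the DiffuserCam codebase), a minimal sketch of the forward model and a reconstruction that accounts for structured erasures might look like the following; the PSF `psf`, the sensor-gap mask `mask`, the step size, and the iteration count are all illustrative placeholders.

```python
import numpy as np

def fft_convolve(x, psf):
    # Circular convolution of the scene with the caustic PSF via FFTs.
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf)))

def forward(x, psf, mask):
    # Lensless forward model: convolve with the PSF, then apply a binary
    # mask encoding the dead regions (gaps) between sensors in the array.
    return mask * fft_convolve(x, psf)

def adjoint(r, psf, mask):
    # Adjoint of the forward model: mask, then circularly correlate with the PSF.
    return np.real(np.fft.ifft2(np.fft.fft2(mask * r) * np.conj(np.fft.fft2(psf))))

def reconstruct(y, psf, mask, n_iters=500, step=1e-3):
    # Projected gradient descent on ||mask * (psf (*) x) - y||^2 with a
    # non-negativity constraint; a toy stand-in for the full solver.
    x = np.zeros_like(y)
    for _ in range(n_iters):
        residual = forward(x, psf, mask) - y
        x = np.maximum(x - step * adjoint(residual, psf, mask), 0.0)
    return x
```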
Magnetic levitation (MagLev) of biological samples in a paramagnetic solution has been used to perform density-based characterizations of living cells and to conduct medical diagnostics. In this work, we developed a cost-effective MagLev instrument for measuring densities across a broad collection of assay types in a high-throughput fashion. Two neodymium permanent magnets oriented in an anti-Helmholtz configuration produce a strong quasi-uniform magnetic field in which a sample is placed.
Consequently, buoyant, magnetic, and gravitational forces stratify the sample along the direction of this magnetic field based on density. By tuning the paramagnetic solution, the range of measurable densities or the precision of the measurement can be adjusted. The video above shows this phenomenon in action: a collection of microbeads stratifies vertically, with vertical displacement encoding density. We can refine the density uniformity of the microbead population by extracting only those beads within a certain height range, as shown.
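A hedged sketch of the underlying force balance (textbook MagLev physics; the symbols are chosen here for illustration): a sample of density ρ_s and magnetic susceptibility χ_s suspended in a paramagnetic medium of density ρ_m and susceptibility χ_m settles at the height where the magnetic force balances the net gravitational and buoyant force,

```latex
(\rho_s - \rho_m)\, g \;=\; \frac{\chi_s - \chi_m}{\mu_0}\, B \,\frac{dB}{dz},
```

so, for a known field profile B(z), the equilibrium height maps monotonically onto the sample density.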
The high-throughput instrument we present in this study is compatible with standardized assay equipment (a 96-well plate) and consumer-level flatbed scanners.
Using CAD modelling in SolidWorks and rapid prototyping technology (i.e. 3D printers and laser cutters), I designed the optical hardware for our device. Specifically, I created the optical assembly as a grid of collinear relay lenses and planar mirrors. This component transmits an orthogonal, in-focus view of the levitated samples to the surface of a flatbed scanner, enabling data acquisition. In addition, I helped write image processing programs in MATLAB for analyzing levitation data to discern sample densities. In practice, this affordable and simple platform could bring point-of-care medical diagnostics to regions facing serious epidemics.
Jumping is a unique form of locomotion that spans aerial and terrestrial domains. In this research, we investigated the aeromechanics governing stable insect jump trajectories, specifically for spider crickets. These insects jump impressively long distances in proportion to their size yet consistently land upright.
Using 3-dimensional videogrammetry methods, we tracked and plotted the cricket's center of mass (CoM) and limb positions over the course of its jump trajectory. We then developed MATLAB programs to compute, from the jump data, the drag force vectors acting on each component of the insect as well as the net torque acting about its CoM. These analyses indicated that the cricket's aerial posture generates restoring torques when perturbed. The stabilizing posture prominent in the descent period of the jump trajectory could be vital to successfully replicating this form of locomotion in robotic systems.
COURSEWORK
A TA of mine once proclaimed that the greatest hack mathematicians ever discovered was practice. This fact has not been overlooked in my coursework, which consists predominantly of problem sets - an unreasonably effective approach to learning technical material. Unfortunately, problem sets do not ordinarily make for good showcase material. Instead, I have decided to feature the final projects associated with some of my courses. These final projects offer an amalgam of the material learned throughout each course.
In this work we adapt a light transport pipeline to render the chromatic dynamics of bubbles. Physically, those mesmerizing swirls of color visible on the surface of soapy films are created by small differences in the thickness of the film.
Thin-film interference explains why certain colors appear more prominently than others at different thicknesses. When light interacts with the top surface of the thin film, some light is reflected and some is transmitted to the bottom surface. Upon striking the bottom surface, the light is again partially reflected and partially transmitted. However, the two reflected components each pick up a different phase related to the thickness of the film. Consequently, they interfere constructively or destructively at different wavelengths, which is why we see some colors and not others.
In this work, we pair the physics of thin-film interference with the physics of fluids to accurately characterize the evolution of the colors in time. We numerically integrate the 2D Navier-Stokes equations defined over the surface of a sphere to propagate a density flow field. Interpreting the density term as a thickness, we update the radiance of each ray intersecting a thin-film surface based on its wavelength and the thickness of the film at the intersection point.
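To make the shading step concrete, here is a minimal, hedged sketch (not the actual renderer code) of two-beam thin-film interference used to weight a ray's RGB radiance by the local film thickness; the refractive index, reflection coefficient, and representative RGB wavelengths are illustrative placeholders.

```python
import numpy as np

def thin_film_reflectance(wavelength_nm, thickness_nm, n_film=1.33, cos_theta=1.0):
    # Two-beam model: reflections from the top and bottom surfaces interfere with an
    # optical path difference of 2 * n * d * cos(theta), plus a pi phase flip at the
    # first (air -> film) interface.
    delta = 4.0 * np.pi * n_film * thickness_nm * cos_theta / wavelength_nm + np.pi
    r = 0.2  # illustrative amplitude reflection coefficient per surface
    return 2.0 * r**2 * (1.0 + np.cos(delta))

def film_color(thickness_nm):
    # Approximate an RGB response by sampling three representative wavelengths.
    rgb_wavelengths = np.array([610.0, 550.0, 465.0])  # nm, placeholders
    return thin_film_reflectance(rgb_wavelengths, thickness_nm)

# Example: the perceived color swings as the film thins.
for d in (1000.0, 600.0, 300.0, 100.0):
    print(d, np.round(film_color(d), 3))
```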
A new method for measuring the speed of wave propagation in soapy membranes (i.e. bubbles) is presented.
The experimental configuration uses only a speaker, an LED, and a photodiode. These components operate in unison as follows:
The speaker supplies a transverse acoustic drive to a circular soap membrane. Simultaneously, an LED strobes the membrane at the same frequency as the acoustic drive while a photodiode collects the light reflected off the membrane.
When the membrane is driven at a resonance, it oscillates at the drive frequency (with a phase shift). Under strobed illumination, the profile of the normal mode appears stationary, so the photodiode registers a constant signal.
The resonant frequencies of the membrane can be related directly to the propagation speed of waves in the medium. After subjecting the membrane to an acoustic frequency scan, we employ a search algorithm over the photodiode signal to identify the resonant frequencies of the membrane.
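A hedged sketch of the relationship in question (standard circular-membrane acoustics; the symbols are defined here for illustration): for a membrane of radius a clamped at its rim, the resonant frequencies are

```latex
f_{mn} = \frac{v\, j_{mn}}{2\pi a},
```

where j_{mn} is the n-th zero of the Bessel function J_m and v is the transverse wave speed. Identifying even a single resonance, e.g. the fundamental with j_{01} ≈ 2.405, therefore yields v = 2π a f_{01} / j_{01}.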
This perceptually realistic cloth simulator is driven by equations of motion from classical mechanics. Conceptually, the cloth is modelled as a 2D grid of point masses connected by an isometric spring lattice. The motion of each point mass on the grid is governed by the cumulative effect of the spring forces (obeying Hooke's law) and external forces. The simulation propagates the position of each point mass forward in time using Verlet integration. While this technique introduces some energy loss, the simulator retains stability.
The simulator also handles collisions with objects in the scene as well as self-collisions, which prevent the cloth from passing through itself. Different cloth materials can be simulated by tuning physical constants like the spring constant and the cloth density. The program uses shaders for visual rendering, which avoids the computational cost of ray tracing (it runs near real-time on my laptop). I found myself deeply drawn to the underlying physical model enabling these simulations. I was surprised by how perceptually convincing the simulator looked despite some unphysical conditions imposed on the propagator, which were required for stability. After careful scrutiny, I found that these seemingly unphysical constraints are actually loose approximations of real-world phenomena.
For instance, the propagator caps the spring extension/contraction to 10% of the rest length. Initially, this seems like a highly unphysical constraint, as it implies that the potential energy as a function of displacement from the rest length looks like an infinite well - clearly a strong deviation from Hooke's law. After some investigation, though, I found that many materials have an effective spring constant that grows exponentially as they are stretched. An infinite well is therefore not all too far from a potential that steepens exponentially with displacement from the rest length.
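As an illustration of the propagation step described above (a minimal sketch, not the course codebase; the masses, time step, and 10% strain cap are placeholders matching the description), a position-Verlet update with the deformation cap might look like:

```python
import numpy as np

def verlet_step(pos, prev_pos, forces, mass, dt):
    # Position Verlet: x_{t+dt} = 2 * x_t - x_{t-dt} + (F/m) * dt^2
    new_pos = 2.0 * pos - prev_pos + (forces / mass) * dt**2
    return new_pos, pos  # the current position becomes the previous one

def cap_spring_deformation(p_a, p_b, rest_length, max_strain=0.10):
    # Clamp the spring length to within +/- 10% of its rest length by moving
    # both endpoints symmetrically along the spring axis.
    delta = p_b - p_a
    length = np.linalg.norm(delta)
    low, high = (1 - max_strain) * rest_length, (1 + max_strain) * rest_length
    clamped = np.clip(length, low, high)
    if length > 0.0 and clamped != length:
        correction = 0.5 * (length - clamped) * delta / length
        p_a, p_b = p_a + correction, p_b - correction
    return p_a, p_b
```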
This study explores the relationship between the resonance frequency of a Helmholtz resonator and two of its geometric parameters: the cavity volume and the neck length. Spectrograms of the audio signal generated by different resonator geometries were collected and used in a spectral analysis of the resonant frequencies. Power-law fits of the resonance frequencies measured for different cavity volumes and different effective neck lengths were computed. Theoretical predictions are in greater agreement with the power-law relationship found for the cavity volume than for the neck length. An analysis of the measurement error and the damping (Q factor) of the Helmholtz resonator is also provided.
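For context, the textbook prediction we compare against (the symbols here are the usual ones, not lifted from the report): a resonator with neck cross-sectional area A, effective neck length L_eff, and cavity volume V behaves like a mass-spring system with resonance

```latex
f_0 = \frac{c}{2\pi}\sqrt{\frac{A}{V\, L_{\mathrm{eff}}}},
```

i.e. power laws f_0 ∝ V^{-1/2} and f_0 ∝ L_eff^{-1/2}.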
This work is an exploration in non-linear dynamics and chaos. Our system of focus is the PN junction - a driven oscillatory circuit containing a semiconducting diode with a non-linear response. Of principal interest is the relationship between the chaotic parameters and the dynamical variables. For the PN junction these are, respectively, the amplitude of a tunable voltage source and the electrical current of the circuit. The dynamics of this system are analyzed in simulation. In particular, we numerically integrate the dynamical equations of the circuit and explore the periodicity of state-space orbits. Bifurcation diagrams plotted over a progression of the chaotic parameter demonstrate sequential period doubling and cycling between chaotic and stable bands. Informed by these visualizations, we identify a strange attractor for the circuit. We also explore delay-time embeddings of the current in two and three dimensions. Finally, we compute the largest Lyapunov exponent for different values of the chaotic parameter.
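The general workflow behind the bifurcation diagrams is easy to sketch. The snippet below uses a generic driven nonlinear oscillator as a stand-in for the actual diode-circuit equations in the report (the model, parameter values, and sweep range are placeholders): integrate the dynamics at each drive amplitude, discard the transient, and strobe the state once per drive period.

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA = 1.0  # drive angular frequency (placeholder)

def toy_driven_oscillator(t, state, amplitude):
    # Stand-in nonlinear oscillator (Duffing-like), NOT the diode-circuit model.
    x, v = state
    return [v, -0.2 * v - x - x**3 + amplitude * np.cos(OMEGA * t)]

def strobe_points(amplitude, n_periods=300, n_transient=200):
    # Integrate, then sample the state once per drive period after transients decay.
    period = 2 * np.pi / OMEGA
    t_samples = period * np.arange(n_transient, n_periods)
    sol = solve_ivp(toy_driven_oscillator, (0, n_periods * period), [0.0, 0.0],
                    args=(amplitude,), t_eval=t_samples, rtol=1e-8, atol=1e-10)
    return sol.y[0]  # strobed "current-like" variable

# Sweep the chaotic parameter (drive amplitude) to build the bifurcation diagram.
amplitudes = np.linspace(5.0, 30.0, 120)
diagram = [(a, x) for a in amplitudes for x in strobe_points(a)]
```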
In this work we explore the influence of temperature on the electrical properties of an aluminum-doped germanium semiconductor. Specifically, we determine the free charge carrier density, the resistivity, and the charge mobility of the semiconductor as a function of temperature by measuring the Hall coefficients using the van der Pauw technique.
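The extraction step is standard Hall-effect analysis (symbols defined here for illustration): with the resistivity ρ and the Hall coefficient measured via van der Pauw, the carrier density n and mobility μ follow from

```latex
R_H = \frac{V_H\, t}{I\, B}, \qquad n = \frac{1}{e\,|R_H|}, \qquad \mu = \frac{|R_H|}{\rho},
```

where t is the sample thickness, V_H the Hall voltage, I the drive current, and B the applied magnetic field.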
We perform image classification for objects smaller than the Rayleigh resolution limit using diffractive optical neural networks. We also compare the performance of this technique to that of a digital classifier trained on spatial mode sorting measurements and a classifier trained on direct detection measurements.
The apparatus depicted above measures the density of biological samples in a high-throughput fashion by performing two functions:
(1) Using permanent magnets configured to produce a uniform field over the sample, the apparatus spatially stratifies biological samples suspended in a paramagnetic fluid based on their densities. The vertical positions of the stratified samples encode their densities. For those interested, a more detailed description of this phenomenon is provided in the research section under the project titled MagLev Density Measurement.
(2) Using an array of 2F optical systems (relay lenses) and mirrors, an image of each stratified sample is projected to the plane of a household flatbed scanner. In this way, we can easily collect an image of each sample in parallel by running the flatbed scanner with the apparatus placed on top.
The apparatus is designed to be cost-effective and to house a 96-well plate, making it compatible with standard pharmaceutical and medical hardware. In this regard, it can be easily integrated into the lab environment. It provides a way of recording the sample positions in an image and enables future digital processing.
Interferometric imagers are a class of computational camera that use interference to measure the 2D Fourier transform of a target scene. This imaging method has been wildly successful in synthesizing high-resolution images of the distant cosmos using a network of coordinated telescopes. A salient example is the first-ever image of the shadow of a black hole, captured in 2019. The measurements for this image were collected by the Event Horizon Telescope (EHT) network and processed with interferometric imaging techniques.
Fundamentally, this method operates on the same principles as Young's double-slit experiment. The strength of different spatial frequencies in a scene is encoded in the intensity of a cosine interference signal, known as a fringe. By varying the separation between the slits, we can sample different spatial frequencies in the scene. In practice, interferometric imagers use focusing apertures rather than slits.
The distance between these apertures (or telescopes in the case of astronomy) is referred to as the baseline.
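A hedged quantitative note (standard interferometric imaging theory, not taken from the patent): under the van Cittert-Zernike theorem, a baseline of length B observing at wavelength λ samples the scene's angular spatial frequency

```latex
u = \frac{B}{\lambda} \quad \text{(cycles per radian)},
```

which is why longer baselines resolve finer angular detail.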
Recently, there has been a growing effort to miniaturize interferometric imagers, as they afford high-resolution imaging without the bulky optics of traditional focusing imagers. This miniaturization has been achieved by performing interferometry on a photonic integrated circuit (PIC). To date, the PIC architectures for miniaturized interferometers rely on pair-wise beam combination.
In this patent we propose interfering multiple beams simultaneously to generate fringes with higher information densities. For a non-redundant baseline configuration - that is, one in which the collecting apertures are positioned such that the distance between any pair of apertures is unique - this design drastically increases the number of spatial frequencies sampled in a scene: for the same number of input apertures N, the number of spatial frequency samples increases by a factor of N compared to previous architectures.
Existing decontamination systems fail to provide quantitative information about their effectiveness within a target environment. This patent was motivated by the social, economic, and human loss suffered during the COVID-19 pandemic. These losses have exposed a lack of innovation in the space of consumer-level spatial decontamination technologies. In this patent we propose an ultraviolet decontamination wand outfitted with a computer vision system that relays the exposure supplied to a target surface back to the user. In this way a quantitative level of decontamination (e.g. 99.999% pathogenic deactivation) can be guaranteed throughout the environment.
UV light has been shown to sever bonds in genetic material and prevent transcription and translation processes in the cell by forming pyrimidine dimers. It is therefore an effective decontamination technique against all forms of pathogens, both living (microbial) and non-living (viral). The survival probability of these pathogens falls exponentially with the UV dosage (energy per unit area) supplied. Since the dosage is a function of exposure time and irradiance, a 3D vision system on the decontamination wand can compute the dosage based on a surface's distance to the wand, the angle of incidence, the power of the UV bulb, and the integrated exposure time. The exposure level is relayed to the user in an interpretable colormap that indicates which regions in the environment have been sufficiently decontaminated and which need further attention.
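A hedged sketch of the dosimetry such a vision system would carry out (an idealized small-source model, with symbols defined here for illustration): the accumulated dose on a surface patch is the time-integrated irradiance,

```latex
D = \int E(t)\, dt, \qquad E \approx \frac{P\,\cos\theta}{4\pi d^{2}}, \qquad S(D) = e^{-kD},
```

where P is the radiated UV power, d the distance from wand to surface, θ the angle of incidence, S the pathogen survival fraction, and k a species-dependent susceptibility constant.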
PERSONAL PROJECTS
Here are some of my personal projects. Most are fun little programs and algorithms I have implemented in my free time. In addition to computational imaging and graphics, I have always been fascinated by fluid dynamics, flight, and aircraft design. Some of the projects shown below are manifestations of these additional interests.
The boids algorithm is a simple model for flocking behavior found naturally in schools of fish and certain bird populations. The algorithm considers interactions between a boid (bird-oid) object and its neighbors as well as the dominant flight direction of the entire flock in order to update trajectories. In each time step, the algorithm calls for the following updates:
1) Flock cohesion (uses the separation from neighboring boids to determine which direction a boid should move to join the flock)
2) Flock alignment (uses the velocities of the neighboring boids to determine the net direction of the flock)
3) Collision avoidance (uses the separation from neighboring boids to determine how a boid should move to avoid collisions with other members of the flock)
Each of these updates has an associated weight that can be tuned to modify certain characteristics of the flocking behavior. For instance, granting more weight to the flock alignment update will result in a more uniform set of trajectories within the flock.
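A minimal sketch of one update step under the three rules above (not my original implementation; the neighborhood radius and rule weights are placeholders):

```python
import numpy as np

def boids_step(positions, velocities, dt=0.1, radius=2.0,
               w_cohesion=0.01, w_alignment=0.05, w_separation=0.1):
    # positions, velocities: (N, 2) arrays of boid states.
    new_velocities = velocities.copy()
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists < radius) & (dists > 0)
        if not neighbors.any():
            continue
        # 1) Cohesion: steer toward the local center of mass.
        cohesion = offsets[neighbors].mean(axis=0)
        # 2) Alignment: steer toward the neighbors' average velocity.
        alignment = velocities[neighbors].mean(axis=0) - velocities[i]
        # 3) Separation: steer away from boids that are too close.
        separation = -(offsets[neighbors] / dists[neighbors, None]**2).sum(axis=0)
        new_velocities[i] += (w_cohesion * cohesion +
                              w_alignment * alignment +
                              w_separation * separation)
    return positions + new_velocities * dt, new_velocities

# Example: 50 boids with random initial states.
rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 10, (50, 2)), rng.normal(0, 1, (50, 2))
pos, vel = boids_step(pos, vel)
```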
This 2D gravity simulator uses a simple Forward-Euler method for the underlying propagator of the planet positions. The update step for each planet is determined by modelling all gravitating objects within the simulation as point-like and applying Newton's Law of Gravity. Interestingly, this familiar law takes a new form when we work in two dimensions instead of three dimensions. It no longer follows the inverse square law but rather adopts a proportionality to 1/r. Even more interesting is the fact that escape velocities are impossible under this new gravitation regime!
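A minimal sketch of that propagator (not the original program; the masses, gravitational constant, and time step are placeholders), with the force scaling as 1/r in two dimensions:

```python
import numpy as np

def gravity_2d_step(positions, velocities, masses, dt=0.01, G=1.0):
    # Forward-Euler step for N point masses with a 2D force law F ~ 1/r,
    # i.e. the acceleration on body i from body j is G * m_j * r_hat / |r|.
    accelerations = np.zeros_like(positions)
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            r = positions[j] - positions[i]
            dist = np.linalg.norm(r)
            accelerations[i] += G * masses[j] * r / dist**2  # (r/|r|) / |r| = r/|r|^2
    new_velocities = velocities + accelerations * dt
    new_positions = positions + velocities * dt
    return new_positions, new_velocities
```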
Voronoi tessellation is often used as a model for cellular structures found in nature. It is also really beautiful! The process of generating these patterns is as follows: We begin by randomly populating a plane with seeds of different colors. Seeds are special points that tell us how to partition the surrounding space into fitted polygonal cells. We then color every non-seed point on the plane the same color as the seed it is closest to - the seed to which its Euclidean distance is a minimum.
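A minimal sketch of that coloring rule on a pixel grid (the grid size and seed count are placeholders):

```python
import numpy as np

def voronoi_image(height=400, width=400, n_seeds=30, seed=0):
    # Assign every pixel the index (color label) of its nearest seed point.
    rng = np.random.default_rng(seed)
    seeds = rng.uniform([0, 0], [height, width], size=(n_seeds, 2))
    ys, xs = np.mgrid[0:height, 0:width]
    pixels = np.stack([ys, xs], axis=-1).astype(float)              # (H, W, 2)
    dists = np.linalg.norm(pixels[:, :, None, :] - seeds, axis=-1)  # (H, W, n_seeds)
    return np.argmin(dists, axis=-1)  # label map; map labels to colors to render
```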
Phase retrieval algorithms solve the phase problem: they recover the Fourier transform of a real, non-negative function using only measurements of the Fourier modulus - no phase information. Since the scenes we image in the physical world are real (an intensity distribution in space) and non-negative (the notion of negative intensity does not exist), these algorithms have been deployed with remarkable success in a variety of imaging applications. One such application is interferometric imaging (see the patents section). Phase information in fringe measurements is often corrupted by effects like atmospheric turbulence and optical aberrations, while the amplitude is generally left intact. Here I have implemented the Hybrid Input-Output (HIO) algorithm. The figure shows an estimated reconstruction of the scene converging to the ground truth after only a few hundred iterations. Even with the phase information totally lost, these algorithms can recover the original scene up to a translation or reflection. Such transformations correspond to phase factors in the Fourier transform estimate that leave the measured modulus unchanged and are therefore ignored in the loss function.
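A minimal sketch of the HIO update loop (not my exact implementation; the support mask, feedback parameter β, and iteration count are placeholders):

```python
import numpy as np

def hio(fourier_modulus, support, n_iters=500, beta=0.9, seed=0):
    # Hybrid Input-Output: alternate between enforcing the measured Fourier
    # modulus and relaxing the object-domain constraints (support, non-negativity).
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=fourier_modulus.shape)
    for _ in range(n_iters):
        # Fourier-domain step: keep the measured modulus, keep the current phase.
        X = np.fft.fft2(x)
        X = fourier_modulus * np.exp(1j * np.angle(X))
        x_prime = np.real(np.fft.ifft2(X))
        # Object-domain step: keep pixels satisfying the constraints,
        # push the violating pixels back toward zero.
        violates = (~support) | (x_prime < 0)
        x = np.where(violates, x - beta * x_prime, x_prime)
    return x
```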
Electric Vertical Takeoff and Landing (E-VTOL) aircraft are the future of personal aviation. Companies like Aurora and Opener are in a race to develop vehicles that are highly compatible with increasingly urbanized societies. The figure above showcases our E-VTOL design submission to Boeing's Personal Air Vehicle (PAV) competition. The aircraft achieves VTOL capabilities by rotating the twin-propeller wing and redirecting an exhaust nozzle downwards. The design is optimized for good aerodynamic efficiency during both vertical ascent and horizontal flight. We used computational fluid dynamics programs to assess the pressure profile shown over the aircraft surfaces in both vertical and horizontal flight configurations.
I know. It's a rather cliché fractal to animate, but the Mandelbrot set has some amazing properties beyond its captivating graphics. Before discussing those, let us go over what the Mandelbrot set is and what this visual represents. A point c in the complex plane is considered part of the Mandelbrot set if it remains stable (does not diverge) under infinite iteration of the update rule z_{n+1} = z_n^2 + c, where z_0 = 0. The black regions of the plane depicted in the animation constitute points in the Mandelbrot set. The remaining regions are unstable and are colored based on their rate of divergence. The Mandelbrot set is closely related to the Julia sets: each choice of c defines its own Julia set, generated by iterating the same rule over varying initial points z_0. It is astounding that such structure can emerge from such a simple update rule. For more resources on this topic, I suggest the following educational math content:
Smooth Mandelbrot Shading - Inigo Quilez
The Mandelbrot Set - Numberphile
What's so special about the Mandelbrot Set? - Numberphile
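For completeness, a minimal escape-time sketch of the animation's core loop (the resolution, plotting bounds, and iteration cap are placeholders):

```python
import numpy as np

def mandelbrot_escape_times(width=800, height=600, max_iters=200):
    # Iterate z <- z^2 + c from z = 0 for a grid of candidate points c and record
    # how quickly each point escapes |z| > 2 (0 means "never escaped", i.e. the
    # point is treated as a member of the set and rendered black).
    xs = np.linspace(-2.5, 1.0, width)
    ys = np.linspace(-1.25, 1.25, height)
    c = xs[None, :] + 1j * ys[:, None]
    z = np.zeros_like(c)
    escape = np.zeros(c.shape, dtype=int)
    for n in range(1, max_iters + 1):
        active = escape == 0              # only update points that have not escaped
        z[active] = z[active] ** 2 + c[active]
        escape[active & (np.abs(z) > 2)] = n
    return escape  # color the nonzero entries by n to get the divergence-rate shading
```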
Schlieren imaging is a technique for observing slight changes in refractive index. Non-homogeneous variations in the refractive index produce small deflections in the light paths associated with different regions of the imaging system's field of view. These variations can be visualized by masking/filtering the deflected rays optically or by detecting perturbations using image processing. A computational variant of Schlieren imaging called Background Oriented Schlieren (BOS) is addressed here. In this project I implemented two BOS algorithms and applied them to videos of supersonic T-38 Talons acquired by NASA. The objective was to recover the Mach cones of the aircraft.
Mach cones are a particularly interesting application space for BOS for two reasons. First, the angle of the Mach cone can be used to measure airspeed (although other, more precise techniques exist). Second, BOS imaging offers a route to detecting supersonic stealth aircraft that evade radar. All supersonic aircraft generate a pressure wave; while the size of the pressure wave depends on the aerodynamics of the vehicle, it still induces a perceptible refractive index change.
In general, BOS algorithms use adjacent video frames to detect small pixel perturbations caused by ray deflections. The first version of the algorithm that I implemented performs cross-correlations between common subregions of adjacent video frames. For each pixel, we define a small neighborhood of surrounding pixels. We then take two adjacent video frames and compute a cross-correlation between the corresponding neighborhoods. Identifying the peaks of these cross-correlations amounts to finding the displacement of each pixel. The second version involves optical flow techniques. The purpose of optical flow is also to determine pixel displacement. However, instead of using cross-correlations, optical flow algorithms employ the constraint that the total image intensity should remain constant across frames. In other words, when a pixel is displaced, its intensity is unaffected. Under this assumption, optical flow relates the spatial gradient of the image intensity to the temporal gradient of the image intensity using adjacent video frames.
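As a concrete sketch of the optical-flow variant (a bare-bones Lucas-Kanade-style solver standing in for the full implementation; the window size is a placeholder):

```python
import numpy as np

def optical_flow_displacement(frame0, frame1, window=8):
    # Brightness-constancy constraint: Ix*u + Iy*v + It = 0. Solve for (u, v)
    # per window by least squares over the pixels in that window.
    Iy, Ix = np.gradient(frame0.astype(float))
    It = frame1.astype(float) - frame0.astype(float)
    h, w = frame0.shape
    flow = np.zeros((h // window, w // window, 2))
    for bi in range(h // window):
        for bj in range(w // window):
            sl = (slice(bi * window, (bi + 1) * window),
                  slice(bj * window, (bj + 1) * window))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            b = -It[sl].ravel()
            flow[bi, bj], *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # small per-window displacements reveal the refractive signature of the Mach cone
```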
The Gouy phase is a phenomenon observed in focused laser beams: the electromagnetic field of the laser picks up a π phase shift after passing through the focus and propagating to infinity. Professor Robert Boyd at the University of Rochester presented an intuitive geometric derivation for why this phase shift appears in Gaussian beams, comparing the true optical path length through the beam waist to that of a reference ray passing through a perfect focus. The purpose of this study was to numerically explore whether this argument extends to higher-order Hermite-Gauss modes. It was found that the Boyd interpretation accurately characterizes the net Gouy phase in the far field for all modes. However, it fails to describe the evolution of the phase at intermediate distances from the focus.
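For reference, the standard Gouy phase of a Hermite-Gauss mode HG_{nm} with Rayleigh range z_R (a textbook expression, not the study's derivation) is

```latex
\psi_{nm}(z) = (n + m + 1)\,\arctan\!\left(\frac{z}{z_R}\right),
```

so the fundamental mode accumulates a total shift of π from z = -∞ to +∞, and higher-order modes accumulate (n + m + 1)π.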
The aerodynamic efficiency of soaring birds can be attributed, in part, to a distinct anatomical feature of their wings: primary feathers. In this project, I investigated how primary feathers influence wingtip vorticity and induced drag by experimentally measuring aerodynamic forces and conducting flow-physics simulations. Using a home-built wind tunnel and a modular base wing, I measured aerodynamic lift and drag for a series of feather profiles and angles of attack (AOA). The ratio of these force measurements quantitatively characterized the efficiency of each wing configuration. To establish reliable baselines, I performed flow simulations using XFLR5 on comparable virtual wings. These baselines corroborated the experimental values acquired using the wind tunnel.