Jonathan R. Landers

Research and Exploration Space


I am an applied machine learning scientist with experience across artificial intelligence, data science, energy, engineering, and advanced analytics. My research interests span machine learning, statistical learning theory, geometric computation, physics (with an emphasis on its connections to computation), applied mathematics, and theoretical computer science.


This site is a space for sharing notes, articles, and ongoing explorations: less-polished ideas that may develop into formal papers or remain open inquiries. Alongside my GitHub, LinkedIn, ORCID, and arXiv pages, it offers a view into how I approach problems, experiment with methods, and build conceptual connections across disciplines.

Research Notes

Stochastic Redistribution vs. LP Surrogates for Circuit Balancing

A work-related exploration of optimal energy grid allocation framed through the lens of stochastic optimization. This investigation models service-point imbalances driven by lognormal load deviations and shows that, under exogenous dynamics, a seemingly discrete redistribution problem collapses to an exact convex program. The result bridges practical grid engineering with elegant stochastic theory, revealing when linear relaxations remain integral, and when real-world coupling demands MILP or MIQP extensions.
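
As a minimal sketch of the integrality phenomenon, the toy transportation instance below (hypothetical, not the note's model) redistributes integer imbalances that stand in for rounded lognormal load deviations; because the constraint matrix is totally unimodular, the LP's vertex solution comes out integral with no explicit integer program.

```python
import numpy as np
from scipy.optimize import linprog

# Toy transportation instance (illustrative, not the note's exact model).
# Integer imbalances stand in for rounded lognormal load deviations,
# recentred so the circuit balances overall.
imbalance = np.array([3, -1, 2, -4, 1, -1])
surplus_nodes = np.where(imbalance > 0)[0]
deficit_nodes = np.where(imbalance < 0)[0]
nS, nD = len(surplus_nodes), len(deficit_nodes)

rng = np.random.default_rng(0)
cost = rng.uniform(1.0, 3.0, size=nS * nD)   # per-unit cost of arc (i, j)

# One conservation row per surplus node (ships out all of its surplus)
# and per deficit node (receives all of its deficit): a transportation
# polytope, whose constraint matrix is totally unimodular.
A_eq = np.zeros((nS + nD, nS * nD))
b_eq = np.zeros(nS + nD)
for i, s in enumerate(surplus_nodes):
    A_eq[i, i * nD:(i + 1) * nD] = 1.0
    b_eq[i] = imbalance[s]
for j, d in enumerate(deficit_nodes):
    A_eq[nS + j, j::nD] = 1.0
    b_eq[nS + j] = -imbalance[d]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (nS * nD), method="highs")
# The LP vertex solution is integral: no MILP machinery was needed.
print(res.x.round(3), bool(np.allclose(res.x, np.round(res.x))))
```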

Forrelation, Bent Functions, and a Spectral Criterion for Hardness

This note unifies the random-oracle and bent-function regimes of the Forrelation problem into a continuous spectral framework. By introducing a quantitative criterion based on Fourier flatness and resilience, it defines a smooth spectrum of classical hardness and quantum advantage. The discussion bridges Aaronson–Ambainis and Girish–Servedio, extending Forrelation theory through a geometric and information-theoretic lens.
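
A small numerical illustration of the flatness end of that spectrum, using the classic inner-product bent function rather than the note's general criterion: every normalized Walsh–Hadamard coefficient of IP on n = 2k bits has the same magnitude 2^(-k).

```python
import numpy as np
from itertools import product

# The inner-product function IP(x, y) = x . y (mod 2) on n = 2k bits is
# bent: its Fourier spectrum is perfectly flat.
k = 3
n = 2 * k
inputs = np.array(list(product([0, 1], repeat=n)))   # all 2^n inputs
x, y = inputs[:, :k], inputs[:, k:]
f = (-1) ** (np.sum(x * y, axis=1) % 2)              # +/-1-valued IP

# Normalized coefficients: f_hat(s) = 2^(-n) * sum_x f(x) * (-1)^(s . x)
chi = (-1) ** (inputs @ inputs.T % 2)                # character table
f_hat = chi @ f / 2 ** n

print(np.unique(np.abs(f_hat).round(12)))            # only 2**-k appears
print(bool(np.isclose(np.abs(f_hat), 2.0 ** -k).all()))
```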

Decomposing Time Series into Marginal and Dependence Components

This note presents a compact factorization of a univariate time series into a marginal law of values and a dependence process of ranks, formalizing the informational split between “what occurs” and “how it is arranged in time.” Using Sklar’s theorem, it connects copula decomposition to information-theoretic and PAC-learnability frameworks, showing how modular learning of marginals and dependence composes into learning the full process.
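
A minimal sketch of the split, assuming empirical ranks stand in for the copula-scale dependence process:

```python
import numpy as np
from scipy.stats import rankdata

# Decompose a toy dependent series into "what occurs" (the sorted values)
# and "how it is arranged" (the rank sequence), then recompose exactly.
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(10))     # toy series with dependence

values = np.sort(x)                        # marginal law of values
ranks = rankdata(x, method="ordinal")      # dependence process of ranks
u = ranks / (len(x) + 1)                   # pseudo-observations on (0, 1)

# The pair (values, ranks) carries all the information in x.
reconstructed = values[ranks - 1]
print(bool(np.allclose(reconstructed, x)))   # True
```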

Opposite-Skewness Symmetry & TV Bounds

A short, self-contained note showing how two scaled Beta distributions can share the same support, mean, and median while exhibiting opposite skewness via centered reflection. It gives a clean total-variation expression in terms of the density’s overlap with its mirror and a small-imbalance approximation that links TV directly to parameter differences. In ticket-pricing applications, this symmetry reflects opposite-skew pricing distributions for the same event snapshot, highlighting why full-shape geometry (not just mean/variance) matters for model robustness and divergence-based evaluation.

For related work and broader context, see my arXiv paper.
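
A quick numerical check of the overlap identity, with illustrative Beta parameters rather than pricing data: mirroring Beta(a, b) about the center of [0, 1] yields Beta(b, a), its opposite-skew twin on the same support, and total variation equals both half the L1 distance and one minus the overlap of the two densities.

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import trapezoid

a, b = 2.0, 5.0                       # illustrative, right-skewed shape
x = np.linspace(0.0, 1.0, 200_001)
f = beta.pdf(x, a, b)                 # density
f_mirror = beta.pdf(1.0 - x, a, b)    # centered reflection = Beta(b, a)

# TV as half the L1 distance, and as one minus the densities' overlap.
tv_l1 = 0.5 * trapezoid(np.abs(f - f_mirror), x)
tv_overlap = 1.0 - trapezoid(np.minimum(f, f_mirror), x)
print(tv_l1, tv_overlap)              # the two expressions agree
```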

Neural Net Dropout Viewed as Probability-Mass Dilution

This short note extends the implicit regularization mechanism proved in my Random Forests work to neural networks. By viewing dropout as random thinning of active units, it derives a concise odds-compression inequality showing how dropout reduces the dominance of high-scoring units. The note connects directly to Section 5 of my arXiv paper.
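
One illustrative simulation of the mechanism (a paraphrase, not the note's inequality): masking each unit's logit contribution with an independent keep probability q compresses the expected pairwise log-odds by exactly the factor q.

```python
import numpy as np

rng = np.random.default_rng(0)
s_hi, s_lo = 3.0, 0.5        # a dominant unit versus a weak unit
q = 0.8                      # keep probability (dropout rate 1 - q)

# Bernoulli(q) masks thin each unit's contribution to the logit gap.
masks = rng.random((100_000, 2)) < q
log_odds = masks[:, 0] * s_hi - masks[:, 1] * s_lo

print("log-odds without dropout:", s_hi - s_lo)            # 2.5
print("expected log-odds with dropout:", log_odds.mean())  # ~ q * 2.5
```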

On the Geometry of Fractal Boundaries in ReLU Networks

This paper develops a constructive and theoretical account of fractal geometry in neural network decision boundaries, introducing explicit ReLU modules that realize tent-map and Cantor-style refinements with provable dimensions. It proves that exact self-similarity arises only under measure-zero weight settings, while empirical probes reveal finite-range fractal mimicry and propose boundary fractal dimension as a diagnostic of overfitting.
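
A constructive sketch in the spirit of the paper's tent-map modules (this miniature is illustrative; the paper's constructions come with provable dimensions): one ReLU realizes the tent map exactly, and composing k copies doubles the pieces of a level-set "boundary" with every layer.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def tent(x):
    # 2x - 4*relu(x - 1/2) equals 2x on [0, 1/2] and 2 - 2x on [1/2, 1].
    return 2.0 * x - 4.0 * relu(x - 0.5)

n = 1_000_000
x = (np.arange(n) + 0.5) / n          # midpoint grid avoids exact roots
f = x.copy()
for depth in range(1, 6):
    f = tent(f)
    g = f - 0.5                        # level set f = 1/2 as the "boundary"
    crossings = np.count_nonzero(g[:-1] * g[1:] < 0)
    print(f"depth {depth}: {crossings} boundary points")   # 2, 4, 8, ...
```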

Quantum-Accelerated Stabilization for Markov Chains

This note introduces a band-window stabilization criterion for Markov-chain averaging and demonstrates a quadratic quantum speedup via amplitude estimation. It contrasts classical and quantum complexities, presenting both immediate and anytime stabilization theorems with clear operational interpretations.
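
A purely classical sketch of one reading of the criterion (the chain, band width, and window length are illustrative): declare the running average stabilized the first time it stays inside a band of width 2*eps for W consecutive steps.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],              # two-state chain; f(state) = state,
              [0.2, 0.8]])             # stationary mean 1/3
eps, W = 0.01, 200

state, total, avgs = 0, 0.0, []
for t in range(1, 100_000):
    state = rng.choice(2, p=P[state])
    total += state
    avgs.append(total / t)
    if t >= W and np.ptp(avgs[-W:]) < 2 * eps:
        print(f"stabilized at t = {t}, average ~ {avgs[-1]:.3f}")
        break
# Hitting eps-accuracy classically costs ~1/eps^2 samples; amplitude
# estimation achieves the same band with ~1/eps, the quadratic speedup.
```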

Limits of Hawking-Induced Magnetism

This series of short notes explores the boundary between quantum evaporation effects and classical plasma dynamics around Kerr black holes. Each note tackles a different facet of why Hawking-induced magnetic fields remain negligible and structurally undetectable compared to accretion-disk–driven fields.
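
For scale, a back-of-envelope comparison from the standard Hawking formulas (the mass and the quoted disk-field figure are illustrative, not the notes' detailed estimates):

```python
import numpy as np

hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_sun = 1.989e30                      # kg

M = 10 * M_sun                        # a stellar-mass black hole
T_H = hbar * c**3 / (8 * np.pi * G * M * k_B)      # Hawking temperature
P_H = hbar * c**6 / (15360 * np.pi * G**2 * M**2)  # Hawking power

print(f"T_H ~ {T_H:.1e} K")           # ~ 6e-9 K
print(f"P_H ~ {P_H:.1e} W")           # ~ 9e-31 W
# Equipartition estimates commonly put inner accretion-disk fields near
# stellar-mass black holes around 1e8 G; the evaporative channel above is
# hopelessly feeble by comparison.
```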

Misspecification, Quantile Mobility, and Arc Length

This note unifies three threads: distributional misspecification bounds for a lognormal truth fit by a moment-matched normal; quantile mobility difficulty under additive vs. multiplicative geometries; and a geometric relation between quantile shifts and the arc length of the PDF, clarifying why mobility differs across shapes.
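
A numerical companion touching all three threads, with illustrative lognormal parameters (the moment-matched fit, the quantile shifts, and the PDF arc lengths are computed directly):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu, sigma = 0.0, 0.5
ln = stats.lognorm(s=sigma, scale=np.exp(mu))      # the "truth"

# 1) Moment-matched normal fit to the lognormal truth.
m, v = ln.mean(), ln.var()
nrm = stats.norm(loc=m, scale=np.sqrt(v))

# 2) Quantile mobility: the value shift needed to move q -> q + 0.1
#    under the multiplicative vs. the additive geometry.
for q in (0.5, 0.85):
    print(q, ln.ppf(q + 0.1) - ln.ppf(q), nrm.ppf(q + 0.1) - nrm.ppf(q))

# 3) Arc length of each PDF: the integral of sqrt(1 + f'(x)^2).
def arc_length(pdf, lo, hi, h=1e-5):
    dpdf = lambda t: (pdf(t + h) - pdf(t - h)) / (2 * h)
    return quad(lambda t: np.sqrt(1.0 + dpdf(t) ** 2), lo, hi, limit=200)[0]

print(arc_length(ln.pdf, 1e-6, 6.0), arc_length(nrm.pdf, m - 5, m + 5))
```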

Animated Demos

Magnetic Reactor Clock

A minimalist physics clock driven by an electron orbiting a loop. The simulated current and Biot–Savart field evolve in real time, with a live readout and a cumulative |B| display. It blends clean WebGL shaders with an elegant EM formulation for a memorable, functional demo.
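
The demo's EM core reduces to textbook formulas; the sketch below uses made-up display values for the loop radius and orbital frequency, not the demo's actual parameters.

```python
import numpy as np

mu0 = 4e-7 * np.pi           # vacuum permeability (T m / A)
e = 1.602176634e-19          # elementary charge (C)

R = 0.05                     # loop radius: 5 cm (illustrative)
f = 1.0                      # one orbit per second, ticking in real time

I = e * f                    # average current of the orbiting electron
B = mu0 * I / (2 * R)        # Biot-Savart field at the loop's center
print(f"I = {I:.3e} A, |B| = {B:.3e} T")
```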

Mark Twain fMRI

A playful reading visualization: as a Twain passage scrolls and words highlight, simulated brain regions (frontal, visual, auditory, motor, emotional, spatial) light up in a 3D point cloud. The piece is technically polished and human-centered, showcasing lateral thinking and real-time rendering.