Physics generally treats time evolution as continuous and deterministic: given initial conditions, the equations of motion trace out a unique trajectory. But in a computational or discrete setting, that determinism becomes a constraint problem. At each time step, we face a finite but enormous set of candidate next states, and conservation laws tell us which ones are physically admissible.
We model physical time evolution as a one-step state–transition problem. The goal is to (i) bound and count the number of energy-conserving candidate next states under discretization, and (ii) measure geometrically how far non-conserving candidates lie from the conserving set. This turns Noether's theorem from an abstract statement about symmetry into a concrete question: how much of state space is actually reachable?
Let the system state be an $n$-dimensional real vector
$$ w = (w_1,\dots,w_n) \in \mathbb{R}^n. $$
Each coordinate is bounded,
$$ w_i \in [a_i,b_i], \qquad i = 1,\dots,n. $$
Fix a discretization scale $\epsilon>0$ and let $\mathcal{S}_\epsilon$ denote the finite set of grid states in the bounded box,
$$ \mathcal{S}_\epsilon \subset \prod_{i=1}^n [a_i,b_i]. $$
We interpret $\mathcal{S}_\epsilon$ as the set of all candidate next states at $T=1$ that are representable at resolution $\epsilon$.
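As a concrete illustration, here is a minimal sketch of how one might enumerate $\mathcal{S}_\epsilon$ as a uniform grid; the helper `grid_states` and the example bounds are hypothetical choices, not part of the formalism.

```python
import itertools
import numpy as np

def grid_states(bounds, eps):
    """Enumerate all grid states in the box prod_i [a_i, b_i] at resolution eps."""
    axes = [np.arange(a, b + eps / 2, eps) for (a, b) in bounds]
    return np.array(list(itertools.product(*axes)))

# Example: n = 2, each coordinate in [-1, 1], resolution eps = 0.5.
S_eps = grid_states([(-1.0, 1.0), (-1.0, 1.0)], eps=0.5)
print(S_eps.shape)  # (25, 2): 5 grid points per axis, two axes
```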
Let $H : \mathbb{R}^n \to \mathbb{R}$ be a time-independent Hamiltonian. Given an initial state $v_0 \in \mathcal{S}_\epsilon$, define its energy
$$ E_0 := H(v_0). $$
For a tolerance $\tau\ge 0$, define the set of energy-conserving candidate next states
$$ \mathcal{C} := \{\, w \in \mathcal{S}_\epsilon \;:\; |H(w) - E_0| \le \tau \,\}. $$
Let $\mathcal{N} := \mathcal{S}_\epsilon \setminus \mathcal{C}$ be the non-conserving candidates.
Define the total number of candidate next states and the number that conserve energy:
$$ N_{\mathrm{all}} := |\mathcal{S}_\epsilon|, \qquad N_{\mathrm{cons}} := |\mathcal{C}|. $$
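A short sketch of the counting step, using a toy quadratic Hamiltonian in $n=2$; the choices of `H`, `v0`, `eps`, and `tau` below are illustrative, not fixed by the definitions above.

```python
import numpy as np

def H(w):
    """Toy quadratic Hamiltonian; any time-independent H would do."""
    return 0.5 * np.sum(w**2, axis=-1)

eps, tau = 0.05, 0.01
x = np.arange(-1.0, 1.0 + eps / 2, eps)
S_eps = np.stack(np.meshgrid(x, x, indexing="ij"), axis=-1).reshape(-1, 2)

v0 = np.array([0.5, -0.5])                  # initial state on the grid
E0 = H(v0)

conserving = np.abs(H(S_eps) - E0) <= tau   # membership in C
N_all, N_cons = len(S_eps), int(conserving.sum())
print(N_all, N_cons, N_cons / N_all)
```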
Under a uniform sampling model over the bounded domain, the discrete ratio converges to a probability as the grid is refined:
$$ \frac{N_{\mathrm{cons}}}{N_{\mathrm{all}}} \;\longrightarrow\; \Pr\big(\,|H(w)-E_0|\le\tau\,\big). $$
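Since exhaustive grid counts become infeasible as $n$ grows, the right-hand side can be estimated by Monte Carlo. Here is a sketch under the same toy quadratic Hamiltonian; all parameter values are illustrative.

```python
import numpy as np

def H(w):
    return 0.5 * np.sum(w**2, axis=-1)   # toy quadratic Hamiltonian

rng = np.random.default_rng(0)
E0, tau = H(np.array([0.5, -0.5])), 0.01

w = rng.uniform(-1.0, 1.0, size=(1_000_000, 2))   # uniform samples in the box
p_hat = np.mean(np.abs(H(w) - E0) <= tau)
print(p_hat)   # estimates Pr(|H(w) - E0| <= tau)
```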
Equip the state space with the Euclidean metric $\|\cdot\|_2$. Define the distance-to-conservation of a candidate state $w\in\mathcal{S}_\epsilon$ by
$$ \delta(w) := \inf_{c\in\mathcal{C}} \|w-c\|_2. $$
This provides a graded notion of violation: $\delta(w)=0$ for conserving candidates, while $\delta(w)>0$ measures how far a non-conserving state lies from the conserving set.
To first order, projecting a non-conserving state back onto the shell along the energy gradient gives the estimate
$$ \delta(w) \;\approx\; \frac{|H(w) - E_0|}{\|\nabla H(w)\|}, $$
valid where $\nabla H(w) \neq 0$.
Comment. The factor $\|\nabla H(w)\|^{-1}$ converts an energy discrepancy into a distance in state space: large gradients mean small state changes can fix the energy, while small gradients mean the same energy error corresponds to a larger geometric correction.
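The sketch below compares the exact grid distance $\delta(w)$ with this first-order estimate for the toy quadratic Hamiltonian, where $\nabla H(w) = w$; all names and parameter values are illustrative.

```python
import numpy as np

def H(w):
    return 0.5 * np.sum(w**2, axis=-1)   # toy quadratic H, so grad H(w) = w

eps, tau = 0.05, 0.01
x = np.arange(-1.0, 1.0 + eps / 2, eps)
S_eps = np.stack(np.meshgrid(x, x, indexing="ij"), axis=-1).reshape(-1, 2)

E0 = H(np.array([0.5, -0.5]))
C = S_eps[np.abs(H(S_eps) - E0) <= tau]            # conserving candidates

w = np.array([0.9, 0.2])                           # a non-conserving state
delta = np.min(np.linalg.norm(C - w, axis=1))      # exact delta(w) on the grid
delta_lin = abs(H(w) - E0) / np.linalg.norm(w)     # first-order estimate
print(delta, delta_lin)
```

The two numbers agree up to the grid spacing, which is exactly what the linearization predicts near a smooth level set.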
For two particles in two spatial dimensions, a convenient choice is $n=8$ with
$$ w = (x_1,y_1,x_2,y_2,p^x_1,p^y_1,p^x_2,p^y_2). $$
With masses $m_1,m_2>0$, separation distance
$$ r(w) := \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2}, $$
and a time-independent interaction potential $U$, the Hamiltonian is
$$ H(w) = \frac{(p^x_1)^2+(p^y_1)^2}{2m_1} + \frac{(p^x_2)^2+(p^y_2)^2}{2m_2} + U\big(r(w)\big). $$
The formalism above then quantifies (i) how many bounded, discretized candidates at $T=1$ are consistent with $E_0$, and (ii) how far inconsistent candidates lie from physical admissibility.
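As one concrete instantiation, here is a sketch of this Hamiltonian with a gravitational-style choice $U(r) = -G m_1 m_2 / r$; the potential and the parameter values are assumptions for illustration, not fixed by the formalism.

```python
import numpy as np

def hamiltonian(w, m1=1.0, m2=1.0, G=1.0):
    """H(w) for w = (x1, y1, x2, y2, px1, py1, px2, py2)."""
    x1, y1, x2, y2, px1, py1, px2, py2 = w
    r = np.hypot(x1 - x2, y1 - y2)                 # separation distance r(w)
    kinetic = (px1**2 + py1**2) / (2 * m1) + (px2**2 + py2**2) / (2 * m2)
    return kinetic - G * m1 * m2 / r               # U(r) = -G m1 m2 / r

v0 = np.array([1.0, 0.0, -1.0, 0.0, 0.0, 0.3, 0.0, -0.3])
E0 = hamiltonian(v0)                               # reference energy E0 = H(v0)
print(E0)
```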
Noether's theorem tells us that symmetries constrain dynamics, but constraints are only interesting when there's something to constrain. By discretizing state space and counting, we make the reduction explicit: energy conservation doesn't just preserve structure, it eliminates all but a thin shell of the exponentially many candidate futures. The $n$-dimensional space of possibilities collapses onto an $(n-1)$-dimensional shell (thickened by the tolerance $\tau$), and the distance function $\delta(w)$ tells us exactly how far we've strayed when numerical methods or approximations break that constraint.
This isn't just bookkeeping. In a computational setting, knowing that most of $\mathcal{S}_\epsilon$ is physically inadmissible means we can design integrators that respect geometry rather than fight it. And philosophically, it reframes conservation laws: they're not mysterious metaphysical principles but dimensional filters, geometric choke points that physical systems must thread through. The universe doesn't "choose" energy-conserving paths; there simply aren't any others available. That's the power of symmetry: it doesn't guide nature, it defines the only roads nature can take.
It can help to see the "energy shell" as an actual surface. Here is a minimal 3D slice where energy conservation becomes a literal geometric constraint.
Consider a time-independent Hamiltonian in three coordinates $(x,y,p)$:
$$ H(x,y,p) := \frac{p^2}{2m} + \frac{k}{2}(x^2+y^2). $$
Fix a conserved energy level $E_0>0$. The conserving set is the level set
$$ H(x,y,p) = E_0. $$
Solving for $p$ gives a two-sheet surface over the disk where the square root is real:
$$ p(x,y) = \pm \sqrt{\, 2m \Big( E_0 - \frac{k}{2}(x^2+y^2) \Big) \,}. $$
In the language of this note, this surface is the continuous analogue of the conserving set $\mathcal{C}$: one scalar constraint removes one degree of freedom, collapsing the admissible candidates onto a codimension-one shell. The "distance-to-conservation" $\delta(w)$ is then (locally) the Euclidean distance from a point to this surface.
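A minimal sketch that evaluates both sheets of this surface on a grid, masking points outside the disk where the radicand is negative; the values of $m$, $k$, and $E_0$ are illustrative.

```python
import numpy as np

m, k, E0 = 1.0, 1.0, 1.0
x = np.linspace(-1.5, 1.5, 101)
X, Y = np.meshgrid(x, x, indexing="ij")

radicand = 2 * m * (E0 - 0.5 * k * (X**2 + Y**2))
inside = radicand >= 0                          # the disk where p(x, y) is real
P_plus = np.where(inside, np.sqrt(np.clip(radicand, 0, None)), np.nan)
P_minus = -P_plus                               # the second sheet

print(inside.mean())  # fraction of the grid lying under the shell
```

Plotting `P_plus` and `P_minus` over the masked disk renders the codimension-one shell directly, making the "energy shell" a literal surface rather than a metaphor.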