Long-Horizon Sensitivity of CAGR Under Infinitesimal Model Perturbations

J. Landers

1. Context and Objective

When a model is used to forecast over long horizons, the familiar intuition that "small model errors cause small prediction errors" becomes unreliable. The issue is not that the model suddenly becomes nonlinear (even linear models can fail), but that long horizons turn mild local disagreements into large terminal disagreements.

Compounding is an amplifier. The question is: along which directions does the amplifier actually listen?

This note isolates a clean scalar target (CAGR) and computes its first-order sensitivity to infinitesimal weight perturbations. The result is geometric: the only thing that matters is alignment with an empirical displacement vector.

The goal is not to build a full stochastic theory, but to expose a minimal, operational statement: long-horizon CAGR is a smooth functional of the weights, and its local fragility has a closed-form directional derivative.

2. Setup

Let \(Y(t) > 0\) denote an asset value over time \(t\in[0,T]\). Define the log-value

\[ g(t) := \log Y(t). \]

Log-space is natural because constant CAGR corresponds to linear growth in \(g(t)\). Assume a linear regression model predicts log-value from features \(x(t)\in\mathbb{R}^n\):

\[ \hat g_w(t) = w^\top x(t),\qquad w\in\mathbb{R}^n. \]

This is intentionally austere: the sensitivity phenomenon already appears in the simplest setting. Everything below is deterministic given a realized feature path \(x(t)\).

3. CAGR as a Functional of \(w\)

Over horizon \(T\), the predicted log-growth is

\[ \widehat{G}(w) := \hat g_w(T) - \hat g_w(0) = w^\top\big(x(T)-x(0)\big). \]

Introduce the empirical feature displacement \(\Delta x := x(T)-x(0)\). Then

\[ \widehat{G}(w) = w^\top\Delta x, \qquad \widehat{\mathrm{CAGR}}(w) = \exp\!\left(\frac{w^\top\Delta x}{T}\right)-1. \]

In particular, long-horizon CAGR is a scalar function of \(w\) whose only dependence on the realized history \(x(t)\) is through a single vector \(\Delta x\).
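The reduction to a single vector can be made concrete. Below is a minimal numerical sketch with made-up weights and displacement (all values are illustrative, not from any fitted model):

```python
import numpy as np

# Illustrative weights and feature displacement (hypothetical values).
w = np.array([0.4, -0.1, 0.25])       # regression weights
delta_x = np.array([1.2, 0.8, -0.5])  # empirical displacement x(T) - x(0)
T = 10.0                              # horizon, e.g. in years

G_hat = w @ delta_x                   # predicted log-growth G_hat(w) = w^T Δx
cagr_hat = np.exp(G_hat / T) - 1.0    # predicted CAGR
```

Note that the entire intermediate path \(x(t)\), \(0<t<T\), never enters: only the endpoints do.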

4. Infinitesimal Perturbations and the Directional Derivative

Perturb the weights in direction \(v\in\mathbb{R}^n\) via \(w(\varepsilon) = w + \varepsilon v\). Then

\[ \widehat{G}(w+\varepsilon v) = (w+\varepsilon v)^\top \Delta x = w^\top\Delta x + \varepsilon\, v^\top\Delta x. \]

Differentiating \(\widehat{\mathrm{CAGR}}(w+\varepsilon v)\) with respect to \(\varepsilon\) yields the exact first-order response.

Theorem 1 (First-order long-horizon CAGR sensitivity).

For any direction \(v\),

\[ \frac{d}{d\varepsilon}\,\widehat{\mathrm{CAGR}}(w+\varepsilon v)\Big|_{\varepsilon=0} = \left(1+\widehat{\mathrm{CAGR}}(w)\right)\cdot \frac{v^\top\Delta x}{T}. \]

The multiplier \((1+\widehat{\mathrm{CAGR}}(w))\) is the familiar exponential "compounding" factor. The geometric content is the inner product \(v^\top\Delta x\): only the projection of the perturbation direction onto the displacement matters.
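Because the derivative is closed-form, Theorem 1 can be verified against a finite difference. A sketch with arbitrary illustrative numbers:

```python
import numpy as np

# Finite-difference check of Theorem 1 (all numbers are made up).
w = np.array([0.4, -0.1, 0.25])
delta_x = np.array([1.2, 0.8, -0.5])
v = np.array([0.3, 0.0, -0.2])        # arbitrary perturbation direction
T = 10.0

def cagr(weights):
    return np.exp((weights @ delta_x) / T) - 1.0

# Closed-form directional derivative from Theorem 1.
analytic = (1.0 + cagr(w)) * (v @ delta_x) / T

# Central finite difference in epsilon.
eps = 1e-6
numeric = (cagr(w + eps * v) - cagr(w - eps * v)) / (2 * eps)
```

The two quantities agree to the accuracy of the finite difference, as expected for a smooth scalar functional.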

Two immediate consequences are worth stating in words. First, any perturbation direction orthogonal to \(\Delta x\) leaves the predicted CAGR unchanged at first order: an \((n-1)\)-dimensional subspace of weight variation is locally invisible. Second, among unit-norm directions, the sensitivity is maximal when \(v\) is parallel to \(\Delta x\), which is exactly the worst case developed in Section 6.

5. Geometric Interpretation: Slope, Rays, and Angle

In the \((t,g)\) plane, the model implies an average predicted log-slope

\[ \hat k(w) := \frac{\hat g_w(T)-\hat g_w(0)}{T} = \frac{w^\top\Delta x}{T}. \]

Think of this as a ray emerging from the origin with angle \(\hat\theta := \arctan(\hat k)\). Under \(w(\varepsilon)=w+\varepsilon v\), the slope changes by

\[ \delta\hat k = \frac{v^\top\Delta x}{T}\,\varepsilon. \]

Converting slope to angle gives a "rotation" view:

\[ \delta\hat\theta = \frac{1}{1+\hat k^2}\,\delta\hat k = \frac{1}{1+\hat k^2}\cdot \frac{v^\top\Delta x}{T}\,\varepsilon. \]

So an infinitesimal parameter perturbation rotates the prediction ray. The rotation magnitude depends only on alignment with \(\Delta x\). This is the same fan-of-rays intuition in a different coordinate system: small angular errors near the origin widen into large terminal misses over long horizons.

The ray picture is especially useful because it decouples two ideas: (i) how far the system traveled in feature space (via \(\Delta x\)), and (ii) which parameter directions "tilt" the ray in that traveled direction (via \(v\)).
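The slope-to-angle conversion can be sanity-checked numerically. A sketch, reusing the same illustrative numbers as above:

```python
import numpy as np

# Check δθ ≈ δk / (1 + k^2) against the exact arctan difference.
w = np.array([0.4, -0.1, 0.25])
v = np.array([0.3, 0.0, -0.2])
delta_x = np.array([1.2, 0.8, -0.5])
T, eps = 10.0, 1e-6

k = (w @ delta_x) / T                 # predicted log-slope k_hat
dk = (v @ delta_x) / T * eps          # first-order slope change
dtheta_formula = dk / (1.0 + k**2)    # rotation predicted by the formula

k_pert = ((w + eps * v) @ delta_x) / T
dtheta_exact = np.arctan(k_pert) - np.arctan(k)
```

At this step size the formula matches the exact angle change to well below floating-point noise in the comparison.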

6. Worst-case Local Fragility Under Norm-bounded Perturbations

Suppose weight changes are constrained by \(\|\delta w\|_2\le \eta\). The maximal first-order change is achieved by choosing \(\delta w\) aligned with \(\Delta x\).

Corollary 1 (Worst-case local CAGR change).

For \(\|\delta w\|_2 \le \eta\), the first-order change obeys

\[ \left|\delta\widehat{\mathrm{CAGR}}\right| \;\lesssim\; \left(1+\widehat{\mathrm{CAGR}}(w)\right)\cdot \frac{\|\Delta x\|_2}{T}\cdot \eta, \]

where the implicit inequality becomes an equality at first order when \(\delta w\) is chosen parallel to \(\Delta x\).
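The corollary is Cauchy-Schwarz in disguise, and it is easy to exhibit numerically: random unit-norm directions never exceed the bound, while the aligned direction \(\delta w = \eta\,\Delta x/\|\Delta x\|_2\) attains it. A sketch with illustrative values:

```python
import numpy as np

# Worst-case first-order CAGR change under ||δw|| <= η (illustrative data).
rng = np.random.default_rng(0)
w = np.array([0.4, -0.1, 0.25])
delta_x = np.array([1.2, 0.8, -0.5])
T, eta = 10.0, 1e-3

factor = np.exp((w @ delta_x) / T)                  # 1 + CAGR_hat(w)
bound = factor * np.linalg.norm(delta_x) / T * eta  # worst-case first-order change

# Random directions of norm η never beat the bound (Cauchy-Schwarz).
for _ in range(100):
    u = rng.standard_normal(3)
    u *= eta / np.linalg.norm(u)
    assert abs(factor * (u @ delta_x) / T) <= bound + 1e-15

# The aligned direction attains the bound.
aligned = eta * delta_x / np.linalg.norm(delta_x)
gap = abs(factor * (aligned @ delta_x) / T - bound)
```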

This highlights a simple robustness proxy:

\[ \boxed{\text{long-horizon fragility} \;\propto\; \frac{\|\Delta x\|_2}{T}.} \]

Interpretation:

This is a local result. It is most useful as a diagnostic or design constraint: "Given that my features drifted by \(\Delta x\) over a decade, how stable is my CAGR estimate to tiny shifts in the model?"

7. What Can Be Computed Empirically

The sensitivity formulas are directly computable from a realized feature history:

  1. Compute \(\Delta x = x(T)-x(0)\).
  2. Compute \(\widehat{\mathrm{CAGR}}(w)=\exp((w^\top\Delta x)/T)-1\).
  3. Evaluate directional sensitivities \[ S_v := \left(1+\widehat{\mathrm{CAGR}}(w)\right)\cdot \frac{v^\top\Delta x}{T} \] for perturbation directions \(v\) that represent realistic sources of model variation (e.g., retraining noise, regularization paths, parameter drift).
  4. Estimate worst-case local instability via \[ \left(1+\widehat{\mathrm{CAGR}}(w)\right)\cdot \frac{\|\Delta x\|_2}{T}. \]
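The four steps above can be sketched end-to-end on a synthetic feature history (the random-walk features and weights below are fabricated purely for illustration):

```python
import numpy as np

# Synthetic drifting feature history: 4 features, 120 observations.
rng = np.random.default_rng(1)
n, steps = 4, 120
x = np.cumsum(rng.standard_normal((steps, n)) * 0.05, axis=0)
w = rng.standard_normal(n) * 0.1   # hypothetical fitted weights
T = 10.0                           # horizon in years

# Step 1: empirical displacement.
delta_x = x[-1] - x[0]

# Step 2: predicted CAGR.
cagr = np.exp((w @ delta_x) / T) - 1.0

# Step 3: directional sensitivity for a perturbation direction v.
def sensitivity(v):
    return (1.0 + cagr) * (v @ delta_x) / T

# Step 4: worst-case local instability under ||δw|| <= η.
eta = 1e-2
worst_case = (1.0 + cagr) * np.linalg.norm(delta_x) / T * eta
```

Only step 1 touches the raw history; steps 2 to 4 are cheap functions of the single vector \(\Delta x\).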

This converts long-horizon forecast reliability into a measurable geometric quantity tied directly to empirical feature evolution.

8. Summary

Read as a compact principle: long-horizon instability is not a generic high-dimensional phenomenon. It is concentrated along the empirical displacement direction that the system actually traversed.