Stochastic Redistribution vs. LP Surrogates for Circuit Balancing
Abstract
We study circuit balancing when service points (SPs) exhibit stochastic load deviations that are separable in time and amplitude (rank‑1). The operator periodically redistributes SPs across circuits to minimize imbalance. We model SP deviation dynamics, derive aggregate circuit loads, and compare a discrete Redistribution objective (RMS risk) with a linear LP surrogate (L1 risk). Our main result shows that, under a global multiplicative factor driving deviations, the LP relaxation is exact, reducing a stochastic assignment problem to a convex program with integral optima. We also delineate when this equivalence breaks (true RMS objectives, capacities, topology) and how to extend it with MILP/MIQP formulations. A lognormal global factor is presented as a concrete statistical instantiation of this separable structure.
Overview of Contributions
- Rank‑1 separable dynamics: deviations factor as \( \delta_j(t)=\theta_j \varepsilon_t \), enabling a structural collapse.
- L2→L1 equivalence: RMS and L1 objectives differ by a positive scalar independent of the assignment, so minimizers coincide.
- Small exact LP: the time dimension factors out, yielding a circuit‑only LP whose extreme points are integral (one‑hot per SP).
1. Problem Setting (stochastic primitives)
Let circuits \(\mathcal{C}=\{C_1,\dots,C_m\}\) and service points \(\mathcal{P}=\{P_1,\dots,P_n\}\). Each SP \(P_j\) is assigned to exactly one circuit:
\[ x_{ij}\in\{0,1\}, \qquad \sum_{i=1}^{m} x_{ij}=1 \quad \text{for all } j. \]
For discrete time \(t=1,\dots,T\), actual and predicted loads satisfy
\[ a_j(t) = p_j(t) + \delta_j(t), \]
where the deviation \(\delta_j(t)\) carries the structured stochastic component together with zero-mean noise \(\eta_j(t)\). We encode asymmetric stochastic spikes via a skewed deviation distribution, so most SPs are biased slightly negative, while a fat-tailed minority spikes positive more often.
Circuit aggregates (random processes) are
\[ \Delta_i(t)=\sum_{j=1}^{n} x_{ij}\,\delta_j(t), \qquad i=1,\dots,m. \]
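To make the aggregation concrete, here is a minimal numpy sketch (illustrative names such as circuit_aggregates are ours, not part of a reference implementation) that forms \(\Delta_i(t)\) from a one-hot assignment matrix and a realized deviation matrix.

```python
import numpy as np

# Minimal sketch (illustrative names): given a one-hot assignment matrix x (m x n)
# and realized deviations delta (n x T), form the circuit aggregates
# Delta_i(t) = sum_j x_ij * delta_j(t).
def circuit_aggregates(x: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """x: (m, n) with one-hot columns; delta: (n, T). Returns Delta of shape (m, T)."""
    return x @ delta

# Tiny usage example: 2 circuits, 3 SPs, 4 time steps (synthetic data).
x = np.array([[1, 0, 1],
              [0, 1, 0]], dtype=float)   # SP1, SP3 -> C1; SP2 -> C2
delta = np.random.default_rng(0).normal(size=(3, 4))
Delta = circuit_aggregates(x, delta)     # shape (2, 4)
```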
2. Stochastic Objectives & Risk (what we control)
Redistribution (RMS risk, discrete). Over binary assignments, minimize the per-circuit RMS risks and their sum:
\[ R_i(x)=\sqrt{\tfrac{1}{T}\sum_{t=1}^{T}\Delta_i(t)^2}, \qquad J(x)=\sum_{i=1}^{m} R_i(x), \qquad x_{ij}\in\{0,1\},\ \sum_i x_{ij}=1. \]
Interpretation. RMS emphasizes volatility/peaks and aligns with operational stress.
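As a concrete reference point, a minimal sketch of the RMS risk computation follows; the name compute_J matches the identifier cited in Appendix A, but the body is our illustrative assumption rather than the paper's reference code.

```python
import numpy as np

# Sketch of the RMS risk on a realized trajectory: per-circuit R_i(x) and the
# aggregate J(x) = sum_i R_i(x).
def compute_J(x: np.ndarray, delta: np.ndarray) -> tuple[np.ndarray, float]:
    """x: (m, n) binary assignment; delta: (n, T) realized deviations."""
    Delta = x @ delta                          # (m, T) circuit aggregates
    R = np.sqrt(np.mean(Delta ** 2, axis=1))   # per-circuit RMS risks R_i
    return R, float(R.sum())                   # (R_1..R_m, J)
```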
LP surrogate (L1 risk, convex). Replacing RMS by a weighted sum of absolute deviations over the relaxed assignments \(x_{ij}\in[0,1]\), \(\sum_i x_{ij}=1\) gives
\[ \tilde J(x)=\sum_{i,t} w_{it}\Big|\sum_j x_{ij}\,\delta_j(t)\Big|, \qquad w_{it}\ge 0. \]
Linearization with \(z_{it}\ge0\) gives
\[ \min_{x,z}\ \sum_{i,t} w_{it}\,z_{it} \quad \text{s.t.}\quad -z_{it}\le \sum_j x_{ij}\,\delta_j(t)\le z_{it}, \qquad \sum_i x_{ij}=1, \qquad x_{ij}\ge 0. \]
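The linearized surrogate can be assembled as a single LP, for example with scipy.optimize.linprog. The sketch below assumes uniform weights \(w_{it}=1/T\) and dense constraint matrices (a practical build would use sparse structures); the function name is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the linearized L1 surrogate as one LP.
# Variables: x flattened row-major (m*n entries in [0,1]) followed by z (m*T >= 0).
def solve_l1_surrogate(delta: np.ndarray, m: int) -> np.ndarray:
    n, T = delta.shape
    nx, nz = m * n, m * T
    c = np.concatenate([np.zeros(nx), np.full(nz, 1.0 / T)])

    # |sum_j x_ij * delta_j(t)| <= z_it  ->  two inequalities per (i, t).
    A_ub = np.zeros((2 * m * T, nx + nz))
    b_ub = np.zeros(2 * m * T)
    row = 0
    for i in range(m):
        for t in range(T):
            A_ub[row, i * n:(i + 1) * n] = delta[:, t]
            A_ub[row, nx + i * T + t] = -1.0
            A_ub[row + 1, i * n:(i + 1) * n] = -delta[:, t]
            A_ub[row + 1, nx + i * T + t] = -1.0
            row += 2

    # Each SP is (fractionally) assigned to exactly one circuit: sum_i x_ij = 1.
    A_eq = np.zeros((n, nx + nz))
    for j in range(n):
        for i in range(m):
            A_eq[j, i * n + j] = 1.0
    b_eq = np.ones(n)

    bounds = [(0.0, 1.0)] * nx + [(0.0, None)] * nz
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:nx].reshape(m, n)   # relaxed assignment matrix
```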
3. Complexity Tiers (tractability ladder)
- Tier 1 — Rank‑1 separable: Collapse to a small LP with integral extreme points ⇒ polynomial time, exact.
- Tier 2 — Rank‑K separability (small K): reduce to a MISOCP over a K‑dimensional quadratic form (or use a cohort piecewise rank‑1 model: coupled LPs plus a small finishing MILP). Practical and scalable when K is small.
- Tier 3 — Fully general time‑varying \(\delta_j(t)\): Cross‑terms reintroduce combinatorial coupling ⇒ NP‑hard MIQP/MINLP; heuristics are typical.
4. Rank‑1 Structural Reduction and Lognormal Instantiation
4.1 Setup and Intuition
A single global multiplicative driver \(\varepsilon_t\) induces rank‑1 separability in time; per‑SP scale tuners \(\tau_j\) capture heterogeneity while preserving the collapse, so that \(\delta_j(t)=\tau_j\theta_j\,\varepsilon_t =: \tilde{\theta}_j\,\varepsilon_t\).
4.2 Theorem (L2→L1 Proportionality)
Define \(s_i(x):=\sum_j \tilde{\theta}_j x_{ij}\) and sample factors \(A_T:=\tfrac{1}{T}\sum_t \varepsilon_t\), \(B_T:=\sqrt{\tfrac{1}{T}\sum_t \varepsilon_t^2}\), with \(\varepsilon_t>0\) (as under the lognormal instantiation). Then, taking \(\tilde J\) with uniform weights \(w_{it}=1/T\),
\[ J(x)=B_T\sum_i |s_i(x)|, \qquad \tilde J(x)=A_T\sum_i |s_i(x)|, \]
so \(J(x)=\tfrac{B_T}{A_T}\,\tilde J(x)\), and the RMS and L1 objectives have identical minimizers.
Analytic proof (concise)
With \(\Delta_i(t)=s_i(x)\varepsilon_t\), we have \(\sqrt{\tfrac{1}{T}\sum_t \Delta_i(t)^2}=|s_i(x)|B_T\) and \(\tfrac{1}{T}\sum_t|\Delta_i(t)|=|s_i(x)|A_T\). Summing over circuits yields the identities; the ratio is independent of \(x\), so minimizers coincide for any finite \(T\).
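A quick numerical check of these identities on a synthetic rank‑1 instance (all data below are randomly generated for illustration):

```python
import numpy as np

# Numerical check of the L2 -> L1 proportionality on a synthetic rank-1 instance.
rng = np.random.default_rng(1)
m, n, T = 3, 8, 200
theta_tilde = rng.normal(size=n)                   # tuned per-SP amplitudes
eps = rng.lognormal(mean=0.0, sigma=0.5, size=T)   # positive global factor
x = np.zeros((m, n))
x[rng.integers(0, m, size=n), np.arange(n)] = 1.0  # random one-hot assignment

delta = np.outer(theta_tilde, eps)                 # delta_j(t) = theta_tilde_j * eps_t
Delta = x @ delta
J = np.sqrt(np.mean(Delta ** 2, axis=1)).sum()     # RMS risk
J_l1 = np.mean(np.abs(Delta), axis=1).sum()        # uniform-weight L1 risk
A_T, B_T = eps.mean(), np.sqrt(np.mean(eps ** 2))
assert np.isclose(J / J_l1, B_T / A_T)             # ratio is independent of x
```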
Geometric intuition (pathwise exactness)
Linearizing absolute values is tight at optimum with nonnegative weights. Minimizing a separable sum of absolute values over the assignment polytope (a product of simplices) admits an extreme‑point minimizer; extreme points are one‑hot per SP. Hence the LP surrogate is pathwise exact and returns a discrete redistribution.
4.3 Collapsed “Small” LP (circuit‑only)
Since both objectives are proportional to \(\sum_i |s_i(x)|\), the time dimension drops out and the redistribution collapses to the circuit‑only LP
\[ \min_{x,u}\ \sum_{i=1}^{m} u_i \quad \text{s.t.}\quad -u_i \le \sum_j \tilde{\theta}_j x_{ij} \le u_i, \qquad \sum_i x_{ij}=1, \qquad x_{ij}\ge 0. \]
Extreme‑point integrality. An optimal solution exists with each column one‑hot; thus this LP returns a discrete redistribution without rounding.
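A minimal sketch of this circuit‑only LP, again using scipy.optimize.linprog with illustrative names; the dual‑simplex method is chosen so the returned solution is a vertex of the feasible polytope.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the collapsed circuit-only LP: minimize sum_i u_i subject to
# -u_i <= sum_j theta_tilde_j * x_ij <= u_i, sum_i x_ij = 1, x >= 0.
# Variables: x flattened row-major (m*n) followed by u (m). Names are illustrative.
def collapsed_lp(theta_tilde: np.ndarray, m: int) -> np.ndarray:
    n = theta_tilde.size
    nx = m * n
    c = np.concatenate([np.zeros(nx), np.ones(m)])

    A_ub = np.zeros((2 * m, nx + m))
    for i in range(m):
        A_ub[2 * i, i * n:(i + 1) * n] = theta_tilde
        A_ub[2 * i, nx + i] = -1.0
        A_ub[2 * i + 1, i * n:(i + 1) * n] = -theta_tilde
        A_ub[2 * i + 1, nx + i] = -1.0
    b_ub = np.zeros(2 * m)

    A_eq = np.zeros((n, nx + m))
    for j in range(n):
        for i in range(m):
            A_eq[j, i * n + j] = 1.0
    b_eq = np.ones(n)

    bounds = [(0.0, 1.0)] * nx + [(0.0, None)] * m
    # Dual simplex returns a vertex; per the extreme-point argument above, a one-hot
    # optimum exists (ties, if any, can be broken by column-wise rounding).
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs-ds")
    return res.x[:nx].reshape(m, n)
```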
4.4 Implications
- Under rank‑1 multiplicative dynamics, the RMS and L1 risks differ only by a positive scalar independent of the assignment (the ratio \(B_T/A_T\)) ⇒ identical minimizers.
- Per‑SP scale tuners preserve the collapse.
- Time factors out ⇒ near‑linear practical runtimes on the small LP.
5. Extensions and Break Conditions
- True RMS risk (nonlinear). \(J(x) = \sum_i \sqrt{\tfrac1T \sum_t (\sum_j x_{ij} \delta_j(t))^2}\). Fix: MIQP or MILP with piecewise‑linear RMS; greedy/local search with improvement guarantees.
- Capacity coupling. Constraints \(\sum_j x_{ij}p_j \le \kappa_i\) destroy TU; LP can be fractional. Fix: capacity‑aware MILP (valid inequalities or Lagrangian relaxation).
- Topology‑dependent dynamics. If moving \(P_j\) changes \(\delta_j(t)\), separability fails. Fix: mixed‑integer OPF or bilevel formulations coupling assignments to power‑flow states.
Rule of thumb. Use the LP under absolute deviations with no capacity coupling; upgrade to MILP/MIQP when RMS and/or capacities/topology matter.
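For the capacity‑ or cardinality‑coupled variants, a simple local‑improvement routine of the kind referenced above (and used in the demo of Appendix A) might look as follows; the function and parameter names are assumptions, and single‑SP moves stand in for general swaps.

```python
import numpy as np

# Illustrative hill-climb for the capacity-coupled case: move single SPs between
# circuits under a per-circuit cardinality cap, accepting moves that lower J.
# Pairwise swaps can be added analogously.
def hill_climb(assign: np.ndarray, delta: np.ndarray, m: int, cap: int,
               max_rounds: int = 100) -> np.ndarray:
    """assign: length-n array of circuit indices; delta: (n, T); cap: max SPs/circuit."""
    def J(a):
        Delta = np.zeros((m, delta.shape[1]))
        for j, i in enumerate(a):
            Delta[i] += delta[j]
        return np.sqrt(np.mean(Delta ** 2, axis=1)).sum()

    assign = assign.copy()
    best = J(assign)
    for _ in range(max_rounds):
        improved = False
        for j in range(len(assign)):
            for i in range(m):
                if i == assign[j] or np.count_nonzero(assign == i) >= cap:
                    continue
                trial = assign.copy()
                trial[j] = i
                val = J(trial)
                if val < best - 1e-12:
                    assign, best, improved = trial, val, True
        if not improved:
            break
    return assign
```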
6. Practical Metrics & Evaluation (stochastic reporting)
Interpretation. Minimizing \(J\) suppresses persistent volatility at the feeder level; minimizing \(\tilde J\) suppresses large absolute swings and is robust to outliers. Report both per trajectory and summarize across Monte‑Carlo draws (e.g., expectations/quantiles, CVaR).
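A small sketch of such Monte‑Carlo reporting, assuming the per‑draw objective values have already been computed into an array (names are illustrative):

```python
import numpy as np

# Illustrative stochastic reporting: summarize objective values across Monte-Carlo
# draws by mean, upper quantile, and CVaR (mean of the worst (1 - alpha) tail).
def summarize_risk(J_samples: np.ndarray, alpha: float = 0.95) -> dict:
    q = np.quantile(J_samples, alpha)
    cvar = float(J_samples[J_samples >= q].mean())
    return {"mean": float(J_samples.mean()), "quantile": float(q), "cvar": cvar}
```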
7. Conclusion and Future Directions
A rank‑1 separable structure turns a stochastic assignment with RMS risk into an exact small LP under an L1 surrogate with identical minimizers. This structural shortcut explains the observed progression from exponential (general MIQP) through polynomial (exact LP) to near‑linear (collapsed circuit‑only LP) runtimes in practice.
Future directions. (i) variance bounds under lognormal asymmetry; (ii) capacity‑aware MILP with CVaR‑based stochastic regularization; (iii) topology‑aware mixed‑integer OPF that couples redistribution to volt/VAR limits.
Appendix A. Interactive 3D Demo (Latent a–p–d Space)
Load Rebalancing Demo. We provide an interactive visualization that animates the redistribution process on a single realized sample path. Service points (SPs) appear as glowing spheres labeled with their per‑SP statistics—actual a, prediction p, and deviation d—and are grouped inside semi‑opaque circuit spheres. The animation proceeds in two phases: the first half shows the original distribution; at the midpoint, points smoothly transition to their optimized circuits (with a progress bar indicating time). Circuit spheres fade out and back in to highlight the redistribution event, and the overlay displays per‑circuit risks and aggregate objective values.
A.1 Mapping and Visual Design
Let circuits be \(\mathcal{C}=\{C_1,\dots,C_m\}\) and SPs \(\mathcal{P}=\{P_1,\dots,P_n\}\) with assignments \(x_{ij}\in\{0,1\}\), \(\sum_i x_{ij}=1\). For each SP \(P_j\), the demo displays the pair \((a_j, p_j)\) and the residual \(d_j\) (sign convention consistent with the paper), and places a labeled sphere for \(P_j\) inside the sphere of its assigned circuit.
A.2 Objective and Per‑Circuit Risk
The demo’s text panel summarizes the risk the operator seeks to minimize. For a realized trajectory \(\{\delta_j(t)\}_{t=1}^T\), circuit‑level RMS risk and the aggregate objective are
\[ R_i(x)=\sqrt{\tfrac{1}{T}\sum_{t=1}^{T}\Big(\sum_j x_{ij}\,\delta_j(t)\Big)^2}, \qquad J(x)=\sum_{i=1}^{m} R_i(x). \]
A.3 Animation Procedure
- Initialization (original assignment). Points are positioned within their circuits and labeled with a, p, d and an SP identifier.
- Midpoint transition. At \(t=T/2\), each point interpolates from its “before” circuit center to its “after” circuit center over a short window, while circuit spheres dissolve and reappear to emphasize the segmentation change.
- Optimized phase. Points remain stationary in their new circuits; the overlay updates to show per‑circuit \(R_i\) and aggregate \(J\) for the redistributed configuration. The timeline then resets and loops.
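As a rough sketch of the midpoint transition described above, each point can ease from its “before” circuit center to its “after” center as a normalized progress value runs from 0 to 1; the smoothstep easing below is an illustrative choice, not necessarily what the demo uses.

```python
import numpy as np

# Rough sketch of the midpoint transition: interpolate a point from its "before"
# circuit center to its "after" center as progress s runs from 0 to 1.
def interp_position(p_before: np.ndarray, p_after: np.ndarray, s: float) -> np.ndarray:
    s = float(np.clip(s, 0.0, 1.0))
    ease = 3 * s ** 2 - 2 * s ** 3     # smoothstep easing
    return (1 - ease) * p_before + ease * p_after
```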
A.4 Relation to the Model (and Deviations)
- Objective consistency. The quantities reported in the panel correspond to the paper’s RMS functional \(J(x)=\sum_i R_i(x)\), computed on a realized sample path. The demo can be wired directly to the implementation of compute_J for the actual dataset so that “before/after” values reflect the true assignments.
- LP surrogate vs. RMS. The write‑up establishes an L1 (LP) surrogate \(\tilde J(x)=\sum_{i,t} w_{it}\big|\sum_j x_{ij}\delta_j(t)\big|\) and shows pathwise exactness under the stated assumptions. The visual narrative mentions this surrogate but renders the RMS objective in the panel; no LP geometry is visualized.
- Capacity constraint. For demonstration, we enforce a practical cardinality cap (e.g., \(\le 5\) SPs per circuit) and deploy a local improvement routine (hill‑climb with swaps). This departs from the assumption “no capacity coupling,” under which the LP relaxation is integral; hence, the demo illustrates a realistic, capacity‑aware variant where the LP‑integrality guarantee does not strictly apply.
- Spatial encoding. The on‑screen coordinates are chosen for cluster legibility: points are kept inside circuit spheres using fixed offsets. The spatial layout is therefore symbolic rather than an isometric embedding of \((a,p,d)\). The values of \(a,p,d\) are shown in the labels; geometry encodes circuit membership and the temporal change of assignment, not a metric in \(\mathbb{R}^3\).
- Single‑trajectory view. The animation is sample‑path–level (one realization of \(\{\delta_j(t)\}\)); it does not animate time series within a phase. This aligns with the pathwise interpretation of the objectives used in the paper’s formulation.
A.5 Interpretation
Visually, the operator’s action is to redistribute which deviations sum together, reshaping each circuit’s aggregate process \(\Delta_i(t)=\sum_j x_{ij}\delta_j(t)\) and, consequently, its risk \(R_i\). A successful redistribution brings the post‑transition points into a configuration whose reported \(J\) is lower than before, indicating reduced circuit stress on the realized path (RMS) or reduced absolute swings under the surrogate (L1).
A.6 Practical Notes
- Data wiring. Replace the panel’s placeholders with the computed before/after values of \(R_i\) and \(J\) from the dataset (compute_J on the original and optimized assignments).
- Embedding variant. If desired, map \((a,p,d)\) into a true 3D embedding (e.g., affine‑scaled axes or PCA on \([a,p,d]\)) and place points at those coordinates within circuit spheres; labels remain unchanged.
- Scalability. For larger \(n\), fade labels at distance or show them on hover; the risk panel still conveys the effect of redistribution succinctly.