# LiDMaS+ Math Notes

This page is the docs-site version of the LiDMaS+ lecture/math derivation, aligned with the current implementation.

Primary source files:

- math.tex
- lecture.tex
## Lecture objective
LiDMaS+ follows one mathematical chain:
- Noise model generates physical errors.
- Parity-check algebra maps errors to syndrome.
- Decoder solves an optimization/inference problem to produce correction.
- Residual logical predicate produces Bernoulli trial outcome.
- Monte Carlo aggregation estimates logical error rate and threshold behavior.
If any link in the chain changes, threshold curves change.
## 1) Binary algebra and surface-code maps

All parity operations are over \( \mathbb{F}_2 = \{0,1\} \), with XOR as addition:

\[ a \oplus b = (a + b) \bmod 2. \]

For parity-check matrix \(H\) and error vector \(e\), the syndrome is

\[ s = H e \bmod 2. \]

The implemented planar surface code uses odd distance \(d \ge 3\). With check matrices \(H_X, H_Z\), X/Z error sectors \(e_X, e_Z\), and syndromes \(s_X, s_Z\), the two sectors decouple into independent parity problems:

\[ s_X = H_X e_X \bmod 2, \qquad s_Z = H_Z e_Z \bmod 2. \]
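The parity map can be checked numerically; a minimal sketch, with a hypothetical 3-qubit repetition-code check matrix standing in for one sector of \(H\):

```python
import numpy as np

# Hypothetical 3-qubit repetition-code parity checks over F_2
# (a small stand-in for the H_X / H_Z blocks of the planar code).
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)

e = np.array([0, 1, 0], dtype=np.uint8)  # single error on the middle qubit
s = (H @ e) % 2                          # syndrome: both checks fire
print(s)                                 # [1 1]
```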
## 2) Logical failure variable

Let \(L_X, L_Z\) be canonical logical operator supports. After correction \(c\), the residual is \(e^{\mathrm{res}} = e \oplus c\), and a trial fails when the residual has odd overlap with a logical support:

\[ \mathrm{fail} = \big[\, \langle L, e^{\mathrm{res}} \rangle \bmod 2 = 1 \,\big]. \]

LiDMaS+ threshold accounting uses mode-specific logical indicators:

- Pauli/hybrid harness: the parity predicate above, evaluated per sector on the discrete residual.
- Native GKP harness: the same predicate, evaluated on the residual after digitizing the continuous displacements.
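As a toy instance of the residual predicate (hypothetical weight-3 logical support on a 3-qubit repetition sector, with a correction that clears the syndrome but lands on the wrong side):

```python
import numpy as np

L = np.array([1, 1, 1], dtype=np.uint8)  # hypothetical logical support
e = np.array([1, 0, 0], dtype=np.uint8)  # physical error
c = np.array([0, 1, 1], dtype=np.uint8)  # syndrome-consistent but wrong-side correction
e_res = e ^ c                            # residual = e XOR c = [1, 1, 1]
fail = int(L @ e_res % 2)                # odd overlap -> logical flip
print(fail)                              # 1: syndrome cleared, logical failed
```

This is the silent-failure case the Monte Carlo loop counts: the decoder satisfies \(Hc = s\) yet the residual equals a full logical operator.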
## 3) Noise models used in threshold runs

### Pauli mode

Independent Bernoulli draws per qubit:

\[ \Pr[e_i = 1] = p, \quad \text{i.i.d. for each qubit } i. \]
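Sampling such an error pattern is one line with NumPy (a sketch; the qubit count and seed are illustrative, not taken from the harness):

```python
import numpy as np

rng = np.random.default_rng(7)
p, n = 0.1, 9                              # physical error rate, number of qubits
e = (rng.random(n) < p).astype(np.uint8)   # i.i.d. Bernoulli(p) error vector
```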
### Hybrid CV-discrete mode
Legacy hybrid mode maps CV disturbances to discrete syndromes, then uses the same decoder/statistics path.
### Native GKP mode

Continuous displacements, drawn i.i.d. per qubit:

\[ \delta_i \sim \mathcal{N}(0, \sigma^2). \]

Digitization follows the standard GKP binning rule: each shift is rounded to its nearest multiple of \(\sqrt{\pi}\), and the parity of that multiple gives the discrete flip,

\[ e_i = \big\lfloor \delta_i / \sqrt{\pi} \big\rceil \bmod 2, \]

where \(\lfloor \cdot \rceil\) denotes rounding to the nearest integer.
Additional gate/idle/measurement/loss channels are injected before logical evaluation.
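A sketch of the sampling-plus-binning step, assuming the standard nearest-multiple-of-\(\sqrt{\pi}\) rule (\(\sigma\) and the seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n = 0.4, 9
delta = rng.normal(0.0, sigma, size=n)                      # continuous shifts
e = (np.rint(delta / np.sqrt(np.pi)) % 2).astype(np.uint8)  # parity of nearest multiple
```

Shifts smaller than \(\sqrt{\pi}/2\) in magnitude round to multiple 0 and produce no flip, which is exactly the correctable region of the GKP lattice.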
## 4) Decoder objectives

### MWPM

Defect-pair base distance on the lattice (Manhattan metric between defect coordinates):

\[ d_{ij} = |x_i - x_j| + |y_i - y_j|. \]

Uniform cost \(W_{ij} = d_{ij}\), or weighted by the per-edge log-likelihood:

\[ W_{ij} = d_{ij} \log\frac{1-p}{p}. \]

Solve the minimum-weight perfect matching objective over the defect set \(D\):

\[ \min_{M} \sum_{(i,j) \in M} W_{ij}, \qquad M \text{ a perfect matching of } D. \]

Lift matches to a correction \(c\), with syndrome consistency check \(Hc = s \bmod 2\).
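A brute-force version of the matching objective, exact by enumeration and viable only for a handful of defects; production decoders use Blossom-style algorithms instead:

```python
import itertools
import numpy as np

def min_weight_perfect_matching(W):
    """Exact min-weight perfect matching by enumerating all pairings."""
    n = W.shape[0]
    best, best_pairs = float("inf"), None

    def pairings(rest):
        if not rest:
            yield []
            return
        a = rest[0]
        for k in range(1, len(rest)):
            b = rest[k]
            for tail in pairings(rest[1:k] + rest[k + 1:]):
                yield [(a, b)] + tail

    for m in pairings(list(range(n))):
        cost = sum(W[i, j] for i, j in m)
        if cost < best:
            best, best_pairs = cost, m
    return best, best_pairs

# Four defects with Manhattan-style pairwise costs.
W = np.array([[0, 1, 4, 3],
              [1, 0, 3, 2],
              [4, 3, 0, 1],
              [3, 2, 1, 0]], dtype=float)
cost, pairs = min_weight_perfect_matching(W)
print(cost, pairs)   # pairs (0,1) and (2,3), total cost 2.0
```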
### Union-Find + peeling
Operationally:
- Initialize odd clusters at defects.
- Grow and merge clusters.
- Connect unresolved odd clusters to boundaries.
- Build forest and peel leaves to select correction edges.
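The merge machinery underneath these stages is a standard disjoint-set structure; a minimal sketch (the real decoder additionally tracks cluster parity and boundary contact):

```python
# Minimal union-find (disjoint-set) core for the cluster-growth stage.
parent = {}

def find(x):
    """Return the cluster root of x, with path halving for near-O(1) lookups."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    """Merge the clusters containing a and b."""
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# Two defects merge when their growing clusters touch:
union("defect_A", "defect_B")
print(find("defect_A") == find("defect_B"))   # True
```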
### BP (shared CSS/LDPC inference path)

For BSC \(p \in (0, 1/2)\), each bit enters with channel log-likelihood ratio

\[ \ell = \log\frac{1-p}{p}, \]

with sum-product or normalized-min-sum check-node updates and posterior hard decisions.
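A sketch of the normalized-min-sum check-node rule (the normalization factor \(\alpha\) is illustrative):

```python
import numpy as np

def minsum_check_update(msgs, alpha=0.8):
    """Normalized-min-sum check-node rule: the outgoing message on each edge is
    alpha * (product of signs) * (min magnitude), over the *other* incoming edges."""
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for k in range(len(msgs)):
        others = np.delete(msgs, k)
        out[k] = alpha * np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

out = minsum_check_update([2.0, -1.0, 3.0], alpha=1.0)  # messages -1, 2, -1
```

Each output sign enforces the check's parity constraint given the other bits, and the min magnitude is the usual min-sum approximation to the sum-product tanh rule.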
### Neural-guided MWPM weights

A feature vector is extracted per defect pair, and a learned scale model maps it to a multiplicative factor on the base MWPM cost \(W_{ij}\); the exact features and model architecture are defined in the implementation.
## 5) Monte Carlo estimators and confidence intervals

At one sweep point with \(N\) trials and \(k\) logical failures, the point estimate is

\[ \hat{p} = k / N. \]

LiDMaS+ reports the Wilson 95% interval (\(z = 1.9599639845\)):

\[ \hat{p}_{\pm} = \frac{\hat{p} + \frac{z^2}{2N} \pm z \sqrt{\frac{\hat{p}(1-\hat{p})}{N} + \frac{z^2}{4N^2}}}{1 + \frac{z^2}{N}}. \]
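The interval in code, a direct transcription of the Wilson formula:

```python
import math

def wilson_interval(k, n, z=1.9599639845):
    """95% Wilson score interval for a Bernoulli rate from k failures in n trials."""
    p_hat = k / n
    denom = 1.0 + z * z / n
    center = p_hat + z * z / (2 * n)
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return (center - half) / denom, (center + half) / denom

lo, hi = wilson_interval(7, 1000)   # interval around p_hat = 0.007
```

Unlike the naive normal interval, Wilson stays inside \([0, 1]\) and behaves sensibly when \(k\) is small, which matters deep below threshold where failures are rare.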
## 6) Threshold extraction

### Crossing estimates

- Hybrid sigma-grid nearest difference: scan the \(\sigma\) grid and report the point where the logical-rate difference between the two distances is smallest in magnitude.
- Pauli sign-change interpolation: locate a sign change in the difference of the two distance curves and linearly interpolate the crossing in \(p\).
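The sign-change interpolation can be sketched as follows (curve data here is illustrative, not from a real run):

```python
import numpy as np

def crossing_by_interpolation(p, pl_small, pl_large):
    """Estimate the threshold as the p where two distance curves cross:
    find the first sign change of their difference, interpolate linearly."""
    diff = np.asarray(pl_small, dtype=float) - np.asarray(pl_large, dtype=float)
    for i in range(len(diff) - 1):
        if diff[i] == 0.0:
            return p[i]
        if diff[i] * diff[i + 1] < 0.0:
            t = diff[i] / (diff[i] - diff[i + 1])
            return p[i] + t * (p[i + 1] - p[i])
    return None   # no crossing in the scanned window

p = np.array([0.08, 0.10, 0.12])
p_th = crossing_by_interpolation(p, [0.05, 0.10, 0.20], [0.02, 0.11, 0.30])
```

Below threshold the larger distance has the lower logical rate and above it the ordering flips, so the sign change of the difference brackets the crossing.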
### Finite-size scaling collapse

Near threshold the curves are assumed to follow the scaling ansatz

\[ p_L \approx f\big( (p - p_{\mathrm{th}})\, d^{1/\nu} \big), \]

with LiDMaS+ bin-variance objective: bin the rescaled coordinate \(x = (p - p_{\mathrm{th}})\, d^{1/\nu}\) and choose \((p_{\mathrm{th}}, \nu)\) to minimize the summed within-bin variance of \(p_L\):

\[ \min_{p_{\mathrm{th}},\, \nu} \sum_{b} \operatorname{Var}\big(\{\, p_L : x \in \mathrm{bin}_b \,\}\big). \]
## 7) End-to-end trial in one line

With decoder map \(\mathcal{D}: s \mapsto c\), one Monte Carlo trial is the indicator

\[ \mathrm{fail}(e) = \big[\, \langle L,\, e \oplus \mathcal{D}(H e \bmod 2) \rangle \bmod 2 = 1 \,\big]. \]
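The whole chain, run on a toy 3-bit repetition code with a hypothetical minimum-weight lookup decoder (everything here is illustrative, not the LiDMaS+ harness):

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1, 0], [0, 1, 1]], dtype=np.uint8)  # parity checks
L = np.array([1, 1, 1], dtype=np.uint8)               # logical support
# Minimum-weight lookup decoder: syndrome (s0, s1) -> correction.
DECODE = {(0, 0): [0, 0, 0], (1, 0): [1, 0, 0],
          (1, 1): [0, 1, 0], (0, 1): [0, 0, 1]}

def trial(p):
    """Noise -> syndrome -> decode -> logical predicate, one Bernoulli outcome."""
    e = (rng.random(3) < p).astype(np.uint8)
    s = tuple((H @ e) % 2)
    c = np.array(DECODE[s], dtype=np.uint8)
    return int(L @ (e ^ c) % 2)   # 1 = logical failure

fails = sum(trial(0.05) for _ in range(2000))
p_hat = fails / 2000              # Monte Carlo logical-rate estimate
```

Sweeping `p` and repeating per code distance reproduces, in miniature, the threshold-curve pipeline of sections 3 through 6.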
## Teaching tip for live demos
When presenting to graduate students, use this sequence:
- Start from \(H e = s \bmod 2\) and explain parity detection.
- Show how each decoder defines a surrogate inverse problem.
- Explain that threshold plots are statistical estimators of logical failure probability, not direct physical constants.
- End by contrasting Pauli and GKP/hybrid assumptions while keeping the same inference backbone.