
A Faster Deterministic Algorithm for Fully Dynamic Maximal Matching

Source: arXiv:2605.00797 · Published 2026-05-01 · By Julia Chuzhoy, Sanjeev Khanna, Junkai Song

TL;DR

This paper studies the fully dynamic maximal matching problem against an adaptive adversary, where the update sequence can depend on the algorithm’s past behavior. That setting is substantially harder than the oblivious-adversary case, where randomized polylogarithmic or even constant-amortized algorithms have long been known. The authors’ main contribution is a deterministic algorithm with amortized update time n^{1/2+o(1)}, improving the prior deterministic adaptive-adversary bound of \tilde{O}(n^{8/9}) from BBKS25 and reaching the sublinear threshold that randomized adaptive-adversary methods only recently attained.

The key conceptual change is a new deterministic combinatorial framework called a subgraph system. Unlike EDCS, which was designed as a matching sparsifier for approximate matchings and then repurposed for maximal matching in BBKS25, the subgraph system is built specifically to support verification and repair of maximality. The paper’s proof strategy is recursive: it first presents a simpler decremental algorithm with \tilde{O}(n^{2/3}) amortized update time to motivate the core ideas, then builds a multi-level subgraph system that can be refined repeatedly to obtain the final n^{1/2+o(1)} deterministic bound.

Key findings

  • Main theorem: a deterministic fully dynamic maximal matching algorithm with amortized update time n^{1/2+o(1)} against an adaptive adversary (Theorem 1.1).
  • Prior deterministic adaptive-adversary bound from BBKS25 was \tilde{O}(n^{8/9}); this paper improves it to n^{1/2+o(1)}.
  • The paper gives a simpler decremental algorithm with \tilde{O}(n^{2/3}) amortized update time as an intermediate stepping stone to the final result.
  • The core new object is a z-subgraph system with a partition (A,B,U) and edge set M, where every vertex in S=A\cup B has exactly z incident edges in M, every u\in U has at most z incident edges in M, and |N(u)\cap U|\le z.
  • A phase-based strategy recomputes the subgraph system from scratch every r=\lfloor n^{4/3}\rfloor updates using z=\lfloor n^{2/3}\rfloor, so the construction cost \tilde{O}(n^2) amortizes to \tilde{O}(n^{2/3}) per update in the simplified setting.
  • For the phase algorithm, the authors edge-color M into z+1 matchings and maintain one matching M1 such that at most \tau=32r/z vertices of S are unmatched at subphase boundaries; during a subphase they preserve the invariant that at most 2\tau=64r/z vertices of S are unmatched in M1.
  • The paper states that the algorithm uses the almost-linear-time deterministic edge-coloring algorithm of ABB+26 to partition M into z+1 disjoint matchings.
  • The authors emphasize that the subgraph system is designed for efficient recursive refinement, in contrast to EDCS which was designed for approximate matching and then adapted in BBKS25.
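
The parameter balance behind these bullets can be checked numerically. The snippet below is an illustrative calculation, not code from the paper (the function name `amortized_exponents` is ours); it expresses the rebuild and repair costs as exponents of n for the simplified setting z = n^{2/3}, r = n^{4/3}:

```python
import math

def amortized_exponents(n=1e6):
    """Express the per-update costs of the simplified phase algorithm
    as exponents of n (ignoring polylog factors)."""
    z = n ** (2 / 3)            # subgraph-system degree parameter
    r = n ** (4 / 3)            # phase length in updates
    rebuild = (n ** 2) / r      # O~(n^2) per-phase rebuild, amortized over r updates
    repair = z + r / z          # per-update repair cost
    to_exp = lambda x: math.log(x) / math.log(n)
    return to_exp(z), to_exp(rebuild), to_exp(repair)

z_exp, rebuild_exp, repair_exp = amortized_exponents()
# Both the amortized rebuild and the repair cost sit at roughly n^{2/3}.
```

Both costs land at exponent 2/3, which is why the simplified algorithm cannot go below \tilde{O}(n^{2/3}) without the multi-level refinement discussed below.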

Threat model

The adversary is adaptive: it observes the algorithm’s behavior and then chooses future edge insertions/deletions to stress or invalidate the maintained maximal matching. The algorithm must withstand this online choice of updates without relying on randomness to hide its current state; indeed, because the algorithm is deterministic, there is no internal randomness for the adversary to exploit. The model covers only graph updates; adversarial tampering with memory is out of scope.

Methodology — deep read

The problem setting is fully dynamic maximal matching on an n-vertex graph under an online sequence of edge insertions and deletions, against an adaptive adversary who may tailor future updates based on the algorithm’s past actions. The authors explicitly contrast this with the oblivious-adversary model, where the update sequence is fixed in advance and randomization can hide the current matching structure. They do not rely on any cryptographic or computational hardness assumptions; the challenge is purely algorithmic. The algorithm must maintain a matching that is maximal with respect to the entire current graph at all times, which makes even single-edge insertions/deletions potentially dangerous because they can invalidate maximality immediately.

The paper’s data-structural backbone is the z-subgraph system. For a graph G and integer z, it consists of an edge set M and a partition of vertices into A, B, U with S=A\cup B. Vertices in S have exactly z incident edges in M; vertices in U have at most z incident edges in M. Two additional properties are crucial: P1 requires that every u\in U has at most 2z neighbors in B, and P2 requires that every edge of M incident to a\in A stays inside S. The system also stores neighborhood lists: for each u\in U, a short list \Lambda(u)=N(u)\cap(B\cup U) of size O(z), and for each a\in A, a potentially long list L(a)=N(a)\cap U. The paper first explains a straightforward construction: greedily build a maximal edge set M with per-vertex incidence at most z, place degree-exactly-z vertices into S, separate A from B according to whether their M-edges remain inside S, and then repair P1 by processing U-vertices and moving them into B when necessary. The text says this construction is doable in \tilde{O}(|V|+|E|) time, but does not give a fully spelled-out low-level implementation in the excerpt beyond the greedy/repair outline.
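
The greedy/repair outline above can be sketched as follows. This is a minimal illustration under our own simplifications (the P1 repair loop and the \Lambda/L list maintenance are omitted; function and variable names are ours, not the paper's):

```python
from collections import defaultdict

def build_subgraph_system(n, edges, z):
    """Greedily build a maximal edge set M with per-vertex cap z, then
    partition vertices: S = vertices of degree exactly z in M, U = the
    rest, and split S into A (all M-edges stay inside S, as in property
    P2) and B (the remaining S-vertices)."""
    deg = [0] * n
    adj_M = defaultdict(set)
    M = []
    for u, v in edges:                       # greedy pass over the edge list
        if deg[u] < z and deg[v] < z:
            M.append((u, v))
            adj_M[u].add(v); adj_M[v].add(u)
            deg[u] += 1; deg[v] += 1
    S = {v for v in range(n) if deg[v] == z}
    U = set(range(n)) - S
    A = {v for v in S if all(w in S for w in adj_M[v])}
    B = S - A
    return M, A, B, U
```

On a 4-vertex complete graph with z=2, for example, the greedy pass saturates a triangle and leaves the fourth vertex in U with no M-edges.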

For the more detailed decremental phase algorithm, the paper chooses z=\lfloor n^{2/3}\rfloor and breaks the update stream into phases of length r=\lfloor n^{4/3}\rfloor. At the beginning of each phase, it recomputes the subgraph system from scratch in \tilde{O}(|E|+|V|) \le \tilde{O}(n^2), which amortizes to \tilde{O}(n^{2/3}) per update over the phase length. Then it edge-colors the edge set M into z+1 disjoint matchings M_1,\dots,M_{z+1} using the deterministic almost-linear-time edge-coloring algorithm of ABB+26. Because every vertex in S has degree exactly z in M, each vertex is unmatched in at most one of these color classes. The algorithm selects one class, say M1, to be the “active” matching that it may further modify; the other matchings are only affected by adversarial deletions. A counting argument in the paper shows that, at all times, some i\in\{2,\dots,z+1\} leaves at most \tau=32r/z vertices of S unmatched, and the active class is repaired so that at subphase boundaries M1 itself also has at most \tau unmatched vertices in S.
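
The counting step that selects a good color class can be illustrated directly. A hedged sketch (the naming is ours, and the paper maintains this information incrementally rather than rescanning all classes):

```python
def pick_active_class(matchings, S):
    """Return (index, count) of the matching leaving the fewest vertices
    of S unmatched, by scanning each candidate color class."""
    best = None
    for i, Mi in enumerate(matchings):
        matched = {v for e in Mi for v in e}   # endpoints covered by class i
        unmatched = len(S - matched)
        if best is None or unmatched < best[1]:
            best = (i, unmatched)
    return best
```

For instance, with S = {0,1,2,3} and candidate classes [(0,1)] and [(0,2),(1,3)], the second class leaves no S-vertex unmatched and is selected.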

The maintenance logic is built around a notion of a vertex being “settled”: a vertex is settled if it is matched, or all of its neighbors are matched. Maximality is equivalent to every vertex being settled. The algorithm maintains a matching M^* that always contains M1 and is maximal in the current graph. When an edge in M^* is deleted by the adversary, the algorithm attempts to rematch its endpoints. Vertices in U are easy: because each u has only O(z) neighbors in U\cup B via \Lambda(u), and because the set \hat{S} of currently unmatched vertices in S is kept small (|\hat{S}|\le O(r/z)), rematching a U-vertex takes \tilde{O}(z+r/z) time by scanning the short lists and \hat{S}. Vertices in B are also manageable by maintaining a directed auxiliary graph H on B\cup U: whenever a u\in U becomes unmatched, the algorithm inserts all arcs from u to \Lambda(u) into H; then if a b\in B becomes unmatched, it checks for incoming arcs from an unmatched u and uses them to restore a match, otherwise it scans \hat{S}. This gives \tilde{O}(z) maintenance per status change of a U-vertex plus an extra O(r/z) scan when needed.
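
The U-vertex repair just described boils down to scanning two bounded candidate sets. A simplified sketch, assuming `lam_u` holds \Lambda(u) and `S_hat` holds \hat{S} (these names and the flat-dictionary matching representation are ours; the auxiliary graph H is omitted):

```python
def rematch_u_vertex(u, lam_u, S_hat, matched, adj):
    """Try to rematch u after its matching edge was deleted: scan the
    short list Lambda(u) (O(z) entries), then the small set S-hat of
    unmatched S-vertices (O(r/z) entries), for a free neighbor of u."""
    for w in lam_u:                  # every entry of Lambda(u) is a neighbor of u
        if w not in matched:
            return w
    for s in S_hat:                  # S-hat entries still need an adjacency check
        if s in adj[u] and s not in matched:
            return s
    return None                      # no free neighbor among the candidates
```

The total number of candidates examined is O(z + r/z), matching the stated repair cost for U-vertices.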

The hard case is rematching A-vertices, because an A-vertex may have arbitrarily many neighbors in U, and a U-vertex arbitrarily many neighbors in A. The paper’s key trick is to give A-vertices priority by allowing one controlled deletion from M^* if that deletion only affects B\cup U, which are then repaired cheaply. Since at most 2\tau vertices of U can be matched to A at any time, an A-vertex can either be matched to one of the first O(r/z) candidates in L(a), or else the paper infers that a suitable U-neighbor exists among those candidates. In that case, it inserts the edge (a,u) into M^* and, if needed, deletes the existing edge incident to u, then recursively repairs the endpoint in B\cup U. This is the mechanism that keeps the amortized per-update cost at \tilde{O}(z+r/z) in the decremental setting. Plugging in z=\Theta(n^{2/3}) and r=\Theta(n^{4/3}) yields \tilde{O}(n^{2/3}) update time for the simplified algorithm.
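
The priority rule for A-vertices can be sketched in the same style. This is an illustration under our own simplifications (the recursive repair of the evicted endpoint and the choice of which candidate to evict are elided; names are ours):

```python
def rematch_a_vertex(a, L_a, matched_to, cap):
    """Scan the first `cap` = O(r/z) candidates of L(a) = N(a) cap U.
    Match a to a free candidate if one exists; otherwise evict the
    partner of a candidate u (the evicted vertex lies in B cup U and is
    repaired cheaply in the real algorithm) and match a to u."""
    candidates = L_a[:cap]
    for u in candidates:
        if u not in matched_to:                  # free candidate: match directly
            matched_to[a] = u; matched_to[u] = a
            return a, u, None
    if candidates:                               # all candidates matched: evict one
        u = candidates[0]
        evicted = matched_to.pop(u)
        matched_to.pop(evicted, None)
        matched_to[a] = u; matched_to[u] = a
        return a, u, evicted
    return None
```

The returned `evicted` vertex is the endpoint that the real algorithm hands to the cheap B\cup U repair routine.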

For the final theorem, the paper moves from the single-level z-subgraph system to a multi-level subgraph system. The excerpt indicates that Section 4 and Section 6 implement recursive construction/refinement: an initial subgraph system is built, then transformed from h-level to (h+1)-level, with type-1 and type-2 phases used to control the recursion. The high-level claim is that this recursive refinement strengthens the structural guarantees enough to lower the amortized update time from \tilde{O}(n^{2/3}) to n^{1/2+o(1)}. The excerpt does not include the precise recurrence, the exact number of levels, or the full parameter schedule, so those details are not reconstructible from the provided text alone. The paper also does not report an experimental training regime, because this is a combinatorial dynamic graph algorithm paper rather than an empirical ML system; there is no notion of epochs, seeds, or GPU hardware. Reproducibility-wise, the text excerpt does not mention released code or frozen implementation artifacts, so that remains unclear from the source given.

Technical innovations

  • Introduces the z-subgraph system, a maximality-oriented alternative to EDCS with exact degree z on S, bounded degree into U, and neighborhood lists tailored for repair.
  • Uses edge coloring to decompose the sparse subgraph M into z+1 matchings and maintain one active matching M1 with only O(r/z) unmatched S-vertices at subphase boundaries.
  • Develops a priority-based repair routine for A-vertices that allows a controlled deletion from the current matching only when the affected endpoint lies in B\cup U, which can be repaired efficiently.
  • Builds a recursive multi-level subgraph-system framework that is explicitly designed for iterative parameter improvement, yielding the n^{1/2+o(1)} bound instead of the simpler \tilde{O}(n^{2/3}) regime.

Baselines vs proposed

  • BBKS25 deterministic adaptive-adversary algorithm: amortized update time = \tilde{O}(n^{8/9}) vs proposed: n^{1/2+o(1)}
  • BBKS25 randomized adaptive-adversary algorithm: amortized update time = \tilde{O}(n^{3/4}) vs proposed: n^{1/2+o(1)}
  • Simple decremental subgraph-system algorithm in this paper: amortized update time = \tilde{O}(n^{2/3}) vs proposed final algorithm: n^{1/2+o(1)}

Limitations

  • The provided excerpt gives only a high-level description of the multi-level construction; the exact recursive parameter choices and invariants are not fully visible here.
  • No experimental evaluation is reported; the evidence is entirely theoretical.
  • The construction cost of the z-subgraph system is only stated as \tilde{O}(|V|+|E|); the excerpt does not expose implementation-level constants or practical viability.
  • The paper’s final bound is n^{1/2+o(1)}, so the hidden subpolynomial factor may still be significant in practice.
  • The algorithm is designed for maximal matching, which is a fragile objective; the approach may not transfer directly to approximate matching or other dynamic problems without major changes.
  • Reproducibility details such as code release, exact implementation, or frozen artifacts are not present in the excerpt.

Open questions / follow-ons

  • Can the subgraph-system framework be adapted to other symmetry-breaking problems such as maximal independent set or dynamic coloring against adaptive adversaries?
  • Is there a practical implementation of the multi-level recursion, or are the constants/higher-order terms too large for real graphs?
  • Can the approach be generalized to stronger dynamic objectives, such as maintaining near-maximality under additional constraints or weighted matchings?
  • Is there a lower bound matching n^{1/2+o(1)} for deterministic adaptive fully dynamic maximal matching, or is more room left?

Why it matters for bot defense

For bot-defense practitioners, the main lesson is methodological rather than directly deployable: the paper is about sustaining a fragile invariant under an adaptive, observant adversary. That maps well onto CAPTCHA and anti-abuse systems, where the attacker can adapt to what the defense reveals and can probe for weak spots. The subgraph-system idea is an example of building a maintenance structure specifically for verification/repair rather than trying to retrofit a structure optimized for a different objective.

A CAPTCHA engineer could take away the design principle that under adaptive pressure, the right abstraction may be a layered repair mechanism with explicit short lists, bounded local obligations, and periodic recomputation, instead of a single monolithic classifier or rule set. The paper also underscores a common anti-bot reality: if the defense objective is brittle, then even one bad interaction can invalidate the guarantee, so you want invariants that are cheap to verify locally and cheap to restore after targeted disruptions.

Cite

bibtex
@article{arxiv2605_00797,
  title={A Faster Deterministic Algorithm for Fully Dynamic Maximal Matching},
  author={Julia Chuzhoy and Sanjeev Khanna and Junkai Song},
  journal={arXiv preprint arXiv:2605.00797},
  year={2026},
  url={https://arxiv.org/abs/2605.00797}
}

Read the full paper

Articles are CC BY 4.0 — feel free to quote with attribution