
Performance Benchmark

A simple linear chain that measures pure overhead of each library’s reactive primitives:

  • Loop: update counter synchronously from 1 to N (N = 10^5).
  • Verify: final value = N × 4.
  • No diamond dependency — pure propagation overhead comparison.
| Library  | Median (ms) | Min (ms) | Max (ms) | p95 (ms) | Ops/sec    |
| -------- | ----------- | -------- | -------- | -------- | ---------- |
| SynState | 13.39       | 10.28    | 20.04    | 19.62    | 7,465,632  |
| RxJS     | 3.79        | 3.67     | 4.12     | 4.11     | 26,392,839 |
| MobX     | 89.36       | 88.48    | 90.74    | 90.14    | 1,119,054  |
| Jotai    | 392.07      | 374.80   | 426.35   | 420.00   | 255,059    |
| Redux    | 202.22      | 191.80   | 251.83   | 242.93   | 494,500    |
| Zustand  | 2.96        | 2.86     | 3.74     | 3.38     | 33,751,925 |
| Valtio   | 22.04       | 21.25    | 22.82    | 22.82    | 4,537,372  |

Each library implements the same derived chain pattern. All implementations are runnable as vitest tests.

// SynState
export const runBenchmark = (n: number): number => {
  const [counter, setCounter] = createState(0);
  const doubled = counter.pipe(map((x) => x * 2));
  const quadrupled = doubled.pipe(map((x) => x * 2));
  let mut_lastValue = 0;
  const subscription = quadrupled.subscribe((v) => {
    mut_lastValue = v;
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    setCounter(mut_i);
  }
  subscription.unsubscribe();
  return mut_lastValue;
};

// RxJS
export const runBenchmark = (n: number): number => {
  const counter = new BehaviorSubject(0);
  const doubled = counter.pipe(map((x) => x * 2));
  const quadrupled = doubled.pipe(map((x) => x * 2));
  let mut_lastValue = 0;
  const subscription = quadrupled.subscribe((v) => {
    mut_lastValue = v;
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    counter.next(mut_i);
  }
  subscription.unsubscribe();
  return mut_lastValue;
};

// MobX
export const runBenchmark = (n: number): number => {
  const state = observable({ counter: 0 });
  const doubled = computed(() => state.counter * 2);
  const quadrupled = computed(() => doubled.get() * 2);
  let mut_lastValue = 0;
  const dispose = reaction(
    () => quadrupled.get(),
    (value) => {
      mut_lastValue = value;
    },
    { fireImmediately: true },
  );
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    runInAction(() => {
      state.counter = mut_i;
    });
  }
  dispose();
  return mut_lastValue;
};

// Jotai
export const runBenchmark = (n: number): number => {
  const counterAtom = atom(0);
  const doubledAtom = atom((get) => get(counterAtom) * 2);
  const quadrupledAtom = atom((get) => get(doubledAtom) * 2);
  const store = createStore();
  let mut_lastValue = store.get(quadrupledAtom);
  const unsub = store.sub(quadrupledAtom, () => {
    mut_lastValue = store.get(quadrupledAtom);
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    store.set(counterAtom, mut_i);
  }
  unsub();
  return mut_lastValue;
};

// Redux Toolkit
export const runBenchmark = (n: number): number => {
  const counterSlice = createSlice({
    name: 'counter',
    initialState: { value: 0 },
    reducers: {
      set: (state, action: Readonly<{ payload: number }>) => {
        state.value = action.payload;
      },
    },
  });
  const store = configureStore({
    reducer: counterSlice.reducer,
    middleware: () => new Tuple(),
  });
  // eslint-disable-next-line unicorn/consistent-function-scoping
  const selectCounter = (state: Readonly<{ value: number }>): number =>
    state.value;
  const selectDoubled = createSelector(
    selectCounter,
    (counter) => counter * 2,
  );
  const selectQuadrupled = createSelector(
    selectDoubled,
    (doubled) => doubled * 2,
  );
  let mut_lastValue = selectQuadrupled(store.getState());
  store.subscribe(() => {
    mut_lastValue = selectQuadrupled(store.getState());
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    store.dispatch(counterSlice.actions.set(mut_i));
  }
  return mut_lastValue;
};

// Zustand
export const runBenchmark = (n: number): number => {
  const store = createStore<Readonly<{ counter: number }>>()(() => ({
    counter: 0,
  }));
  const selectQuadrupled = (state: Readonly<{ counter: number }>): number =>
    state.counter * 2 * 2;
  let mut_lastValue = selectQuadrupled(store.getState());
  store.subscribe((state) => {
    mut_lastValue = selectQuadrupled(state);
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    store.setState({ counter: mut_i });
  }
  return mut_lastValue;
};

// Valtio
export const runBenchmark = (n: number): number => {
  const state = proxy({ counter: 0 });
  let mut_lastValue = state.counter * 2 * 2;
  const unsubscribe = subscribe(
    state,
    () => {
      mut_lastValue = state.counter * 2 * 2;
    },
    true,
  );
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    state.counter = mut_i;
  }
  unsubscribe();
  return mut_lastValue;
};

A diamond-shaped dependency graph that tests how each library handles multiple derived values merging back into one:

  • Loop: update counter synchronously from 1 to N (N = 10^5).
  • Verify: final value = N × 5.
  • Zustand and Valtio are excluded because they have no mechanism to combine separate derived values.
| Library  | Median (ms) | Min (ms) | Max (ms) | p95 (ms) | Ops/sec    |
| -------- | ----------- | -------- | -------- | -------- | ---------- |
| SynState | 19.54       | 17.93    | 25.79    | 22.63    | 5,118,062  |
| RxJS     | 9.31        | 8.86     | 9.68     | 9.57     | 10,737,673 |
| MobX     | 99.90       | 98.57    | 101.49   | 101.28   | 1,000,966  |
| Jotai    | 550.21      | 543.65   | 560.05   | 557.63   | 181,750    |
| Redux    | 317.42      | 289.68   | 362.50   | 339.75   | 315,041    |
// SynState
export const runBenchmark = (n: number): number => {
  const [counter, setCounter] = createState(0);
  const doubled = counter.pipe(map((x) => x * 2));
  const tripled = counter.pipe(map((x) => x * 3));
  const sum = combine([doubled, tripled]).pipe(map(([d, t]) => d + t));
  let mut_lastValue = 0;
  const subscription = sum.subscribe((v) => {
    mut_lastValue = v;
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    setCounter(mut_i);
  }
  subscription.unsubscribe();
  return mut_lastValue;
};

// RxJS
export const runBenchmark = (n: number): number => {
  const counter = new BehaviorSubject(0);
  const doubled = counter.pipe(map((x) => x * 2));
  const tripled = counter.pipe(map((x) => x * 3));
  const sum = combineLatest([doubled, tripled]).pipe(map(([d, t]) => d + t));
  let mut_lastValue = 0;
  const subscription = sum.subscribe((v) => {
    mut_lastValue = v;
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    counter.next(mut_i);
  }
  subscription.unsubscribe();
  return mut_lastValue;
};

// MobX
export const runBenchmark = (n: number): number => {
  const state = observable({ counter: 0 });
  const doubled = computed(() => state.counter * 2);
  const tripled = computed(() => state.counter * 3);
  const sum = computed(() => doubled.get() + tripled.get());
  let mut_lastValue = 0;
  const dispose = reaction(
    () => sum.get(),
    (value) => {
      mut_lastValue = value;
    },
    { fireImmediately: true },
  );
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    runInAction(() => {
      state.counter = mut_i;
    });
  }
  dispose();
  return mut_lastValue;
};

// Jotai
export const runBenchmark = (n: number): number => {
  const counterAtom = atom(0);
  const doubledAtom = atom((get) => get(counterAtom) * 2);
  const tripledAtom = atom((get) => get(counterAtom) * 3);
  const sumAtom = atom((get) => get(doubledAtom) + get(tripledAtom));
  const store = createStore();
  let mut_lastValue = store.get(sumAtom);
  const unsub = store.sub(sumAtom, () => {
    mut_lastValue = store.get(sumAtom);
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    store.set(counterAtom, mut_i);
  }
  unsub();
  return mut_lastValue;
};

// Redux Toolkit
export const runBenchmark = (n: number): number => {
  const counterSlice = createSlice({
    name: 'counter',
    initialState: { value: 0 },
    reducers: {
      set: (state, action: Readonly<{ payload: number }>) => {
        state.value = action.payload;
      },
    },
  });
  const store = configureStore({
    reducer: counterSlice.reducer,
    middleware: () => new Tuple(),
  });
  // eslint-disable-next-line unicorn/consistent-function-scoping
  const selectCounter = (state: Readonly<{ value: number }>): number =>
    state.value;
  const selectDoubled = createSelector(
    selectCounter,
    (counter) => counter * 2,
  );
  const selectTripled = createSelector(
    selectCounter,
    (counter) => counter * 3,
  );
  const selectSum = createSelector(
    selectDoubled,
    selectTripled,
    (doubled, tripled) => doubled + tripled,
  );
  let mut_lastValue = selectSum(store.getState());
  store.subscribe(() => {
    mut_lastValue = selectSum(store.getState());
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    store.dispatch(counterSlice.actions.set(mut_i));
  }
  return mut_lastValue;
};

The first two scenarios use fixed-size graphs. This scenario matches the Throughput interactive demo: a depth-M scan chain where each stage lerps toward the previous stage, with a combine that reads all M+1 outputs. K source updates are pushed synchronously per measurement — simulating K updates per animation frame.

The following diagram shows the graph structure at M = 3. In the actual benchmark, M ranges from 50 to 200:

  • K (updates per measurement): 100, 500, 1000.
  • M (chain depth): 50, 100, 200.
  • Zustand, Valtio, and Redux are excluded (no mechanism to express a stateful scan chain with independent derived values).
  • Measurements exceeding 5,000 ms are aborted.
| Library  | K=100, M=50 | K=500, M=50 | K=1000, M=50 | K=100, M=100 | K=500, M=100 | K=1000, M=100 | K=500, M=200 | K=1000, M=200 |
| -------- | ----------- | ----------- | ------------ | ------------ | ------------ | ------------- | ------------ | ------------- |
| SynState | 0.4 ms      | 1.0 ms      | 1.5 ms       | 0.3 ms       | 1.6 ms       | 2.8 ms        | 2.9 ms       | 5.6 ms        |
| RxJS     | 3.3 ms      | 14.6 ms     | 31.1 ms      | 14.2 ms      | 66.7 ms      | 135.0 ms      | 335.3 ms     | 678.4 ms      |
| Jotai    | 9.2 ms      | 46.5 ms     | 91.3 ms      | 17.2 ms      | 89.6 ms      | 182.9 ms      | 194.5 ms     | 398.5 ms      |
| MobX     | 18.9 ms     | 79.4 ms     | 156.2 ms     | 30.0 ms      | 150.1 ms     | 289.9 ms      | 289.8 ms     | 575.2 ms      |
  • SynState scales linearly with both K and M — at K=1000, M=200, it finishes in 5.6 ms. Its push-based direct function calls have negligible per-stage overhead.
  • RxJS takes 678 ms at K=1000, M=200 — 121× slower than SynState. As explained above, this is not a constant-factor difference but a quadratic O(M^2) asymptotic cost from combineLatest’s redundant emissions. The same blowup is visible in the interactive demo.
  • Jotai takes 399 ms at K=1000, M=200 — 71× slower than SynState — due to selectAtom’s per-update overhead (epoch-based recomputation and dynamic dependency tracking on every propagation step).
  • MobX takes 575 ms at K=1000, M=200 — 103× slower than SynState. This scenario requires scan (stateful accumulation: each stage needs its own previous value plus the new input). MobX’s computed cannot express this because it is a pure derivation with no access to its own previous output. The only option is observable.box + reaction (push-based), which incurs per-stage overhead from proxy tracking, dependency notification, and autorun re-reads. In contrast, the cascaded diamond scenario uses stateless derivations that computed can handle efficiently via lazy evaluation.
  • At K=500, M=100 — a realistic scenario for 60 fps animation with moderate graph complexity — SynState (1.6 ms) comfortably fits within the 16 ms frame time, while RxJS (67 ms), Jotai (90 ms), and MobX (150 ms) all far exceed it.
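
The stateful stage this scenario requires can be sketched with a toy push-based pipeline (illustrative only — none of the names below are SynState's actual API). Each stage keeps its own previous output and lerps toward its input, which is exactly the accumulation a pure computed cannot express:

```typescript
// Toy push-based scan chain (illustrative only; not SynState's API).
// Each stage holds its previous output and lerps toward its input --
// the stateful accumulation that a pure `computed` cannot express.

type Listener = (v: number) => void;

// One scan stage: carries `prev` across updates.
const makeScanStage = (rate: number, notify: Listener): Listener => {
  let prev = 0;
  return (input: number): void => {
    prev = prev + (input - prev) * rate; // lerp toward the new input
    notify(prev);
  };
};

// Build a depth-m chain: source -> stage 1 -> ... -> stage m -> sink.
// Returns the source's push function.
const buildChain = (m: number, sink: Listener): Listener => {
  let next = sink;
  for (let i = 0; i < m; i++) {
    next = makeScanStage(0.5, next);
  }
  return next;
};

// K source updates flow through m stages via direct function calls.
let last = 0;
const push = buildChain(3, (v) => {
  last = v;
});
for (let k = 1; k <= 10; k++) {
  push(k);
}
```

Each push is a plain call chain with per-stage closure state — the per-update cost is m function calls, which is the linear scaling the table shows for SynState.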

This scenario demonstrates the exponential blowup of RxJS’s combineLatest when diamonds are chained in series. Each stage splits the previous output into two branches (fan-out D = 2) and recombines them, creating a chain of N binary diamonds:

The diagram shows N = 3. In the benchmark, N ranges from 2 to 20.

  • K (source updates): 100.
  • N (cascade depth): 2, 4, 6, 8, 10, 12, 14, 16, 18, 20.
  • Measurements exceeding 5,000 ms are aborted.
| Library  | N=2    | N=4    | N=6    | N=8    | N=10   | N=12     | N=14      | N=16      | N=18      | N=20      |
| -------- | ------ | ------ | ------ | ------ | ------ | -------- | --------- | --------- | --------- | --------- |
| SynState | 0.5 ms | 0.6 ms | 0.5 ms | 0.5 ms | 0.8 ms | 1.9 ms   | 6.2 ms    | 23.1 ms   | 90.8 ms   | 359.2 ms  |
| RxJS     | 0.2 ms | 0.5 ms | 2.1 ms | 11.1 ms | 60.2 ms | 373.9 ms | 2018.1 ms | > 5000 ms | > 5000 ms | > 5000 ms |
| Jotai    | 1.5 ms | 1.9 ms | 2.5 ms | 5.3 ms | 7.2 ms | 13.0 ms  | 35.9 ms   | 116.4 ms  | 414.5 ms  | 1666.5 ms |
| MobX     | 1.0 ms | 0.6 ms | 0.9 ms | 0.8 ms | 0.9 ms | 0.9 ms   | 1.0 ms    | 1.1 ms    | 1.2 ms    | 1.3 ms    |
  • RxJS exhibits classic O(2^N) exponential growth (binary fan-out). It times out at N=16 (2^16 = 65,536 emissions per source update × 100 updates). At N=14, it already takes 2,018 ms — over 300× slower than SynState (6.2 ms).
  • MobX stays under 1.3 ms even at N=20, thanks to its lazy computed evaluation. MobX’s computed values are not eagerly re-evaluated when their dependencies change — they are only recomputed when read. The reaction reads only the final computed, which triggers a single lazy pass through the chain. Each computed in the chain is evaluated exactly once per update, yielding O(N) total work with minimal overhead. This contrasts sharply with the deep chain throughput scenario where MobX is 103× slower than SynState — because that scenario uses a chain of reactions (push-based), not computeds (pull-based). The performance characteristics depend heavily on which MobX primitive is used.
  • SynState propagates in O(N) via depth-ordered traversal, but the benchmark includes graph construction cost in each runBenchmark call (creating 3N+1 observables with binary search insertion into propagationOrder). This construction overhead grows with N and dominates at large depths. At N=20, SynState takes 359 ms. In applications where the graph is constructed once and updated many times, the per-update cost is O(N) as shown in the deep chain throughput scenario.
  • Jotai grows faster than O(N) due to its per-store.set() DFS overhead compounding with the number of derived atoms. At N=20, it takes 1,667 ms.
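
The 2^N emission count can be reproduced with a toy eager-push model of combineLatest (an illustrative model, not the RxJS implementation): a combine node re-emits on every emission of either input, so each cascaded diamond doubles the emissions reaching the tail.

```typescript
// Toy eager-push model of combineLatest (illustrative; not the RxJS
// implementation): a combine node re-emits on EVERY emission of EITHER
// input, with no deduplication. Cascading n binary diamonds therefore
// produces 2^n emissions at the tail per source update.

type PushNode = { listeners: ((v: number) => void)[]; value: number };

const makeNode = (): PushNode => ({ listeners: [], value: 0 });

const emit = (node: PushNode, v: number): void => {
  node.value = v;
  for (const l of node.listeners) l(v);
};

const mapNode = (src: PushNode, f: (x: number) => number): PushNode => {
  const out = makeNode();
  src.listeners.push((v) => { emit(out, f(v)); });
  return out;
};

const combine2 = (a: PushNode, b: PushNode): PushNode => {
  const out = makeNode();
  // Eager: fire once per input emission, like combineLatest.
  a.listeners.push(() => { emit(out, a.value + b.value); });
  b.listeners.push(() => { emit(out, a.value + b.value); });
  return out;
};

// Chain n binary diamonds and count tail emissions for ONE source update.
const countTailEmissions = (n: number): number => {
  const source = makeNode();
  let head = source;
  for (let i = 0; i < n; i++) {
    head = combine2(mapNode(head, (x) => x * 2), mapNode(head, (x) => x * 3));
  }
  let count = 0;
  head.listeners.push(() => { count++; });
  emit(source, 1);
  return count; // doubles with each added diamond
};
```

A lazy pull-based design (like MobX’s computed) would instead evaluate each node once per read, which is the O(N) behavior the table shows.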

The previous scenarios test propagation throughput — every source update reaches the subscriber. This scenario tests a different pattern: the cost of updating an observable that is not in the subscription set — the case where dynamic dependency tracking is most advantageous.

The graph has B branches and a selector. The selector is fixed at 0 throughout the benchmark, so only branch[0] is “active”. The benchmark updates branch[1] (an inactive branch) K = 10^5 times.

The key difference between libraries is whether branch[1] has any observers (subscribers):

  • Static subscriptions (SynState, RxJS): combine subscribes to all branches at construction time, so branch[1] has an observer. When branch[1] is updated, combine fires, map produces the same value as before, and equality comparison prevents downstream propagation. Cost scales with B because the combined array grows.
  • Dynamic subscriptions (Jotai, MobX): The computed or derived atom only subscribes to observables that were actually accessed via .get() during execution. When selector = 0, only branch[0].get() is called, so branch[1]’s observer list is empty. Updating branch[1] has nothing to notify — nothing happens.
| Library  | B=2     | B=5     | B=10    | B=20    | B=50    | B=100    | B=200    | B=500    | B=1000    |
| -------- | ------- | ------- | ------- | ------- | ------- | -------- | -------- | -------- | --------- |
| SynState | 24.2 ms | 24.2 ms | 33.7 ms | 48.1 ms | 90.8 ms | 154.0 ms | 297.4 ms | 703.3 ms | 1466.0 ms |
| RxJS     | 5.6 ms  | 5.3 ms  | 5.1 ms  | 5.4 ms  | 6.1 ms  | 9.2 ms   | 22.3 ms  | 37.2 ms  | 61.4 ms   |
| Jotai    | 69.6 ms | 69.8 ms | 73.7 ms | 68.8 ms | 72.1 ms | 69.0 ms  | 70.5 ms  | 72.5 ms  | 69.9 ms   |
| MobX     | 2.3 ms  | 2.5 ms  | 2.4 ms  | 2.4 ms  | 2.4 ms  | 2.4 ms   | 2.4 ms   | 2.4 ms   | 2.7 ms    |
  • MobX is the fastest in this scenario. computed(() => branches[selector.get()].get()) subscribes only to selector and branches[0] at runtime, so branches[1]’s observer list is empty. Updating branches[1] has no one to notify — the cost is constant at ~2.4 ms regardless of B.

  • SynState’s cost scales linearly with B because combine subscribes to all branches at construction time, so any branch update fires the combine. The linear trend is clearly visible in the chart: from 24 ms at B=2 to 1,466 ms at B=1000 (a slope of roughly 15 ns per update per branch).

  • RxJS uses the same static combineLatest approach. Its cost also scales with B but more slowly than SynState, because combineLatest does not perform equality comparison — it always emits, and the subscriber does minimal work (assigning a value).

  • Jotai uses dynamic subscriptions like MobX — resultAtom does not subscribe to the inactive branch, so no recomputation is triggered. However, store.set() itself performs bookkeeping operations (invalidation checks) inside the atom store, producing a constant overhead of ~70 ms regardless of B. This makes Jotai slower than SynState at small branch counts (up to B=20; the crossover lies between B=20 and B=50).

// SynState
export const runBenchmark = (k: number, branchCount: number): number => {
  const [selector] = createState(0);
  const branches: DeepReadonly<
    {
      source: Observable<number>;
      set: (v: number) => void;
    }[]
  > = Arr.zeros(asUint32(branchCount)).map(() => {
    const [source, setSource] = createState(0);
    return { source, set: setSource };
  });
  const allSources = [
    selector,
    ...branches.map((b) => b.source),
  ] as const satisfies NonEmptyArray<Observable<number>>;
  const result = combine(allSources).pipe(
    map(
      ([selectorValue, ...branchesValue]) =>
        branchesValue[selectorValue] ?? 0,
    ),
  );
  let mut_lastValue = 0;
  const subscription = result.subscribe((v) => {
    mut_lastValue = v;
  });
  // Update an INACTIVE branch (branch 1, while selector = 0)
  const inactiveBranch = branches[1];
  if (inactiveBranch === undefined) {
    throw new Error('need at least 2 branches');
  }
  for (let mut_i = 1; mut_i <= k; mut_i++) {
    inactiveBranch.set(mut_i);
  }
  subscription.unsubscribe();
  return mut_lastValue;
};

// RxJS
export const runBenchmark = (k: number, branchCount: number): number => {
  const selector = new BehaviorSubject(0);
  const branches: readonly BehaviorSubject<number>[] = Arr.zeros(
    asUint32(branchCount),
  ).map(() => new BehaviorSubject(0));
  const result = combineLatest([selector, ...branches]).pipe(
    map((values) => {
      const sel = values[0];
      return values[sel + 1] ?? 0;
    }),
  );
  let mut_lastValue = 0;
  const subscription = result.subscribe((v) => {
    mut_lastValue = v;
  });
  // Update an INACTIVE branch (branch 1, while selector = 0)
  const inactiveBranch = branches[1];
  if (inactiveBranch === undefined) {
    throw new Error('need at least 2 branches');
  }
  for (let mut_i = 1; mut_i <= k; mut_i++) {
    inactiveBranch.next(mut_i);
  }
  subscription.unsubscribe();
  return mut_lastValue;
};

// MobX
export const runBenchmark = (k: number, branchCount: number): number => {
  const selector = observable.box(0);
  const branches: readonly ReturnType<typeof observable.box<number>>[] =
    Arr.zeros(asUint32(branchCount)).map(() => observable.box(0));
  // Dynamic dependency: only tracks the branch selected by selector
  const result = computed(() => {
    const sel = selector.get();
    const target = branches[sel];
    if (target === undefined) {
      return 0;
    }
    return target.get();
  });
  let mut_lastValue = 0;
  const dispose = reaction(
    () => result.get(),
    (value) => {
      mut_lastValue = value ?? 0;
    },
    { fireImmediately: true },
  );
  // Update an INACTIVE branch (branch 1, while selector = 0)
  const inactiveBranch = branches[1];
  if (inactiveBranch === undefined) {
    throw new Error('need at least 2 branches');
  }
  for (let mut_i = 1; mut_i <= k; mut_i++) {
    inactiveBranch.set(mut_i);
  }
  dispose();
  return mut_lastValue;
};

// Jotai
export const runBenchmark = (k: number, branchCount: number): number => {
  const selectorAtom = atom(0);
  const branchAtoms: readonly WritableAtom<
    number,
    Mutable<readonly [number]>,
    void
  >[] = Arr.zeros(asUint32(branchCount)).map(() => atom(0));
  // Dynamic dependency: only reads the branch selected by selectorAtom
  const resultAtom = atom((get) => {
    const sel = get(selectorAtom);
    const targetAtom = branchAtoms[sel];
    if (targetAtom === undefined) {
      return 0;
    }
    return get(targetAtom);
  });
  const store = createStore();
  let mut_lastValue = store.get(resultAtom);
  const unsub = store.sub(resultAtom, () => {
    mut_lastValue = store.get(resultAtom);
  });
  // Update an INACTIVE branch (branch 1, while selector = 0)
  const inactiveBranchAtom = branchAtoms[1];
  if (inactiveBranchAtom === undefined) {
    throw new Error('need at least 2 branches');
  }
  for (let mut_i = 1; mut_i <= k; mut_i++) {
    store.set(inactiveBranchAtom, mut_i);
  }
  unsub();
  return mut_lastValue;
};
  • Warmup: 5 rounds (discarded).
  • Measured: 20 rounds.
  • N: 10^5 updates per round.
  • Timing: performance.now() (sub-millisecond precision).
  • Statistics: median, min, max, p95, ops/sec (based on median).
  • Redux: Default middleware (thunk, serializable check, immutability check) is disabled via middleware: () => new Tuple(). These are development-time checks that are tree-shaken in production builds.
  • Zustand (Derived Chain only): Derived values are computed inline in the selector (counter * 2 * 2) since Zustand does not have a separate derived-value primitive. Zustand is excluded from the Diamond Dependency scenario for the same reason — there is no mechanism to define independent derived values and combine them.
  • Valtio (Derived Chain only): Like Zustand, derived values are computed inline inside the subscribe callback (state.counter * 2 * 2) since Valtio’s derive utility is a separate package and the vanilla API does not provide a built-in derived-value primitive. The third argument true enables synchronous notification so that the subscriber fires on every mutation (Valtio batches asynchronously by default). Valtio is excluded from the Diamond Dependency scenario for the same reason.
  • RxJS (Diamond Dependency): combineLatest fires the subscriber twice per source update due to the glitch problem. The benchmark measures wall-clock time including this extra work, which reflects the real-world cost of diamond dependencies in RxJS.
  • MobX: reaction is used instead of autorun to match other libraries’ subscribe-and-record pattern. runInAction wraps each mutation as required by MobX strict mode.
  • Jotai: store.sub() does not pass the current value to the callback, so an additional store.get() call is made inside the listener. The initial value is also read via store.get() before subscribing.
  • All libraries create their state, derived values, and subscriptions inside the benchmark function. Setup cost is included in the measurement.
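
The statistics protocol above can be sketched as follows (a hypothetical harness, not the benchmark's actual code; `summarize`, `median`, and `percentile` are invented names for illustration):

```typescript
// Hypothetical harness mirroring the protocol above (not the
// benchmark's actual code): warmup rounds are discarded elsewhere;
// here we derive median / min / max / p95 / ops-per-second from the
// measured per-round samples.

const median = (sorted: readonly number[]): number => {
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? ((sorted[mid - 1] ?? 0) + (sorted[mid] ?? 0)) / 2
    : sorted[mid] ?? 0;
};

const percentile = (sorted: readonly number[], p: number): number => {
  // Nearest-rank method.
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.min(idx, sorted.length - 1)] ?? 0;
};

const summarize = (
  samplesMs: readonly number[],
  opsPerRound: number,
): Readonly<{
  median: number;
  min: number;
  max: number;
  p95: number;
  opsPerSec: number;
}> => {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const med = median(sorted);
  return {
    median: med,
    min: sorted[0] ?? 0,
    max: sorted[sorted.length - 1] ?? 0,
    p95: percentile(sorted, 95),
    opsPerSec: Math.round(opsPerRound / (med / 1000)), // ops/sec from median
  };
};
```

With N = 10^5 updates per round, a 13.39 ms median yields about 7.5M ops/sec, matching the derived chain table.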

Jotai is the slowest library in both scenarios despite having glitch-free semantics. The root cause is per-update framework overhead — work that Jotai’s store performs on every store.set() call, regardless of graph size.

When store.set(counterAtom, i) is called, Jotai’s store executes these steps:

  1. Value update — write the new value and compare with Object.is. O(1).
  2. Invalidation pass (invalidateDependents) — DFS traversal of all mounted dependents, marking each as “needs recomputation” via epoch number tracking in a Map<Atom, EpochNumber>. O(N), where N = number of affected atoms.
  3. Topological sort (recomputeInvalidatedAtoms) — a second DFS post-order traversal using WeakSet for visit tracking, producing a correctly ordered list of atoms to recompute. O(N).
  4. Recomputation — for each atom in topological order, execute its read function. Inside the read function, each get(dep) call: (a) looks up the dependency’s AtomState, (b) compares epoch numbers to check staleness, (c) records the dependency in a Map for future invalidation. The cost is O(Σ Dᵢ) — the sum of direct dependencies across all recomputed atoms, not the product. In a chain of N atoms each with 1 dependency, this is O(N).
  5. Subscriber notification — fire listener callbacks. Jotai’s store.sub() does not pass the new value, so the callback must call store.get() again, triggering an additional cache lookup. O(1) per listener, but adds one extra traversal.
| Step | SynState | Jotai |
| ---- | -------- | ----- |
| Graph resolution | Once at construction time | Every store.set() (steps 2–3) |
| Propagation | Direct function calls through pre-built subscription chain | read function execution with getter spy + Map/WeakSet operations |
| Dependency tracking | Static (set at construction time) | Dynamic (rediscovered on every evaluation via getter spy) |
| Staleness check | Not needed (push-based) | Epoch number comparison per get() call |
| Subscriber value delivery | Value passed directly to callback | Callback must call store.get() to read the value |

Where the constant-factor difference is visible


The benchmark loop runs 100,000 updates to make the per-update difference measurable. But these constant-factor costs also matter in real applications with high-frequency events.

For example, mousemove and pointermove events fire dozens of times per second. If each event triggers a reactive pipeline — such as a drag-and-drop handler, a tooltip position, or a canvas animation — the per-update overhead is multiplied by the event rate. At higher chain depths, the gap becomes dramatic: in the Throughput demo (depth-100 chain, 500 updates per frame), Jotai’s ms/frame exceeds the 16ms budget and visibly stutters, while SynState stays well within budget.

The root cause of the per-update gap:

  • SynState: setCounter(i) → call doubled’s map function → call quadrupled’s map function → call subscriber. Three direct function calls with no framework bookkeeping.
  • Jotai: store.set(counterAtom, i) → DFS invalidation (WeakSet alloc + Map writes) → DFS topological sort (WeakSet alloc + array push + reverse) → recompute doubledAtom (Map lookup + epoch check + Map write) → recompute quadrupledAtom (same) → flush callback → store.get(quadrupledAtom) (cache lookup).

Over 100,000 iterations, SynState’s three function calls dominate, while Jotai’s per-update bookkeeping (Map/WeakSet allocations, epoch comparisons, two DFS traversals) accumulates into the observed ~29× difference.
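
The SynState side of this comparison — direct function calls per update, no framework bookkeeping — can be sketched as a toy (not SynState's implementation):

```typescript
// Toy push-based pipeline with the direct-call shape described above
// (a sketch; not SynState's implementation): the setter calls the
// first map, which calls the second, which calls the subscriber.
// There is no per-update bookkeeping to traverse.

const makeChain = (
  subscriber: (v: number) => void,
): ((v: number) => void) => {
  const secondMap = (v: number): void => { subscriber(v * 2); };
  const firstMap = (v: number): void => { secondMap(v * 2); };
  return firstMap; // plays the role of setCounter
};

let last = 0;
const setCounter = makeChain((v) => {
  last = v;
});
for (let i = 1; i <= 100_000; i++) {
  setCounter(i); // firstMap -> secondMap -> subscriber
}
// last is now 100_000 * 4 = 400_000
```

Each iteration is a fixed-depth call chain the JIT can optimize aggressively, which is why the per-update cost stays near the floor of raw function-call overhead.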