Performance Benchmark
Scenario: Derived Chain
A simple linear chain that measures pure overhead of each library’s reactive primitives:
- Loop: update the counter synchronously from 1 to n.
- Verify: the final value equals n × 4 (the quadrupled output).
- No diamond dependency — pure propagation overhead comparison.
Results
| Library | Median (ms) | Min (ms) | Max (ms) | p95 (ms) | Ops/sec |
|---|---|---|---|---|---|
| SynState | 13.39 | 10.28 | 20.04 | 19.62 | 7,465,632 |
| RxJS | 3.79 | 3.67 | 4.12 | 4.11 | 26,392,839 |
| MobX | 89.36 | 88.48 | 90.74 | 90.14 | 1,119,054 |
| Jotai | 392.07 | 374.80 | 426.35 | 420.00 | 255,059 |
| Redux | 202.22 | 191.80 | 251.83 | 242.93 | 494,500 |
| Zustand | 2.96 | 2.86 | 3.74 | 3.38 | 33,751,925 |
| Valtio | 22.04 | 21.25 | 22.82 | 22.82 | 4,537,372 |
Implementation Details
Each library implements the same derived-chain pattern. All implementations are runnable as vitest tests.
SynState
```ts
export const runBenchmark = (n: number): number => {
  const [counter, setCounter] = createState(0);
  const doubled = counter.pipe(map((x) => x * 2));
  const quadrupled = doubled.pipe(map((x) => x * 2));
  let mut_lastValue = 0;
  const subscription = quadrupled.subscribe((v) => { mut_lastValue = v; });
  for (let mut_i = 1; mut_i <= n; mut_i++) { setCounter(mut_i); }
  subscription.unsubscribe();
  return mut_lastValue;
};
```

RxJS

```ts
export const runBenchmark = (n: number): number => {
  const counter = new BehaviorSubject(0);
  const doubled = counter.pipe(map((x) => x * 2));
  const quadrupled = doubled.pipe(map((x) => x * 2));
  let mut_lastValue = 0;
  const subscription = quadrupled.subscribe((v) => { mut_lastValue = v; });
  for (let mut_i = 1; mut_i <= n; mut_i++) { counter.next(mut_i); }
  subscription.unsubscribe();
  return mut_lastValue;
};
```

MobX

```ts
export const runBenchmark = (n: number): number => {
  const state = observable({ counter: 0 });
  const doubled = computed(() => state.counter * 2);
  const quadrupled = computed(() => doubled.get() * 2);
  let mut_lastValue = 0;
  const dispose = reaction(
    () => quadrupled.get(),
    (value) => { mut_lastValue = value; },
    { fireImmediately: true },
  );
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    runInAction(() => { state.counter = mut_i; });
  }
  dispose();
  return mut_lastValue;
};
```

Jotai

```ts
export const runBenchmark = (n: number): number => {
  const counterAtom = atom(0);
  const doubledAtom = atom((get) => get(counterAtom) * 2);
  const quadrupledAtom = atom((get) => get(doubledAtom) * 2);
  const store = createStore();
  let mut_lastValue = store.get(quadrupledAtom);
  const unsub = store.sub(quadrupledAtom, () => {
    mut_lastValue = store.get(quadrupledAtom);
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) { store.set(counterAtom, mut_i); }
  unsub();
  return mut_lastValue;
};
```

Redux

```ts
export const runBenchmark = (n: number): number => {
  const counterSlice = createSlice({
    name: 'counter',
    initialState: { value: 0 },
    reducers: {
      set: (state, action: Readonly<{ payload: number }>) => {
        state.value = action.payload;
      },
    },
  });
  const store = configureStore({
    reducer: counterSlice.reducer,
    middleware: () => new Tuple(),
  });
  // eslint-disable-next-line unicorn/consistent-function-scoping
  const selectCounter = (state: Readonly<{ value: number }>): number => state.value;
  const selectDoubled = createSelector(selectCounter, (counter) => counter * 2);
  const selectQuadrupled = createSelector(selectDoubled, (doubled) => doubled * 2);
  let mut_lastValue = selectQuadrupled(store.getState());
  store.subscribe(() => {
    mut_lastValue = selectQuadrupled(store.getState());
  });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    store.dispatch(counterSlice.actions.set(mut_i));
  }
  return mut_lastValue;
};
```

Zustand

```ts
export const runBenchmark = (n: number): number => {
  const store = createStore<Readonly<{ counter: number }>>()(() => ({ counter: 0 }));
  const selectQuadrupled = (state: Readonly<{ counter: number }>): number =>
    state.counter * 2 * 2;
  let mut_lastValue = selectQuadrupled(store.getState());
  store.subscribe((state) => { mut_lastValue = selectQuadrupled(state); });
  for (let mut_i = 1; mut_i <= n; mut_i++) { store.setState({ counter: mut_i }); }
  return mut_lastValue;
};
```

Valtio

```ts
export const runBenchmark = (n: number): number => {
  const state = proxy({ counter: 0 });
  let mut_lastValue = state.counter * 2 * 2;
  const unsubscribe = subscribe(
    state,
    () => { mut_lastValue = state.counter * 2 * 2; },
    true,
  );
  for (let mut_i = 1; mut_i <= n; mut_i++) { state.counter = mut_i; }
  unsubscribe();
  return mut_lastValue;
};
```

Scenario: Diamond Dependency
A diamond-shaped dependency graph that tests how each library handles multiple derived values merging back into one:
- Loop: update the counter synchronously from 1 to n.
- Verify: the final value equals n × 5 (doubled + tripled).
- Zustand is excluded because it has no mechanism to combine separate derived values.
Results
| Library | Median (ms) | Min (ms) | Max (ms) | p95 (ms) | Ops/sec |
|---|---|---|---|---|---|
| SynState | 19.54 | 17.93 | 25.79 | 22.63 | 5,118,062 |
| RxJS | 9.31 | 8.86 | 9.68 | 9.57 | 10,737,673 |
| MobX | 99.90 | 98.57 | 101.49 | 101.28 | 1,000,966 |
| Jotai | 550.21 | 543.65 | 560.05 | 557.63 | 181,750 |
| Redux | 317.42 | 289.68 | 362.50 | 339.75 | 315,041 |
Implementation Details
SynState

```ts
export const runBenchmark = (n: number): number => {
  const [counter, setCounter] = createState(0);
  const doubled = counter.pipe(map((x) => x * 2));
  const tripled = counter.pipe(map((x) => x * 3));
  const sum = combine([doubled, tripled]).pipe(map(([d, t]) => d + t));
  let mut_lastValue = 0;
  const subscription = sum.subscribe((v) => { mut_lastValue = v; });
  for (let mut_i = 1; mut_i <= n; mut_i++) { setCounter(mut_i); }
  subscription.unsubscribe();
  return mut_lastValue;
};
```

RxJS

```ts
export const runBenchmark = (n: number): number => {
  const counter = new BehaviorSubject(0);
  const doubled = counter.pipe(map((x) => x * 2));
  const tripled = counter.pipe(map((x) => x * 3));
  const sum = combineLatest([doubled, tripled]).pipe(map(([d, t]) => d + t));
  let mut_lastValue = 0;
  const subscription = sum.subscribe((v) => { mut_lastValue = v; });
  for (let mut_i = 1; mut_i <= n; mut_i++) { counter.next(mut_i); }
  subscription.unsubscribe();
  return mut_lastValue;
};
```

MobX

```ts
export const runBenchmark = (n: number): number => {
  const state = observable({ counter: 0 });
  const doubled = computed(() => state.counter * 2);
  const tripled = computed(() => state.counter * 3);
  const sum = computed(() => doubled.get() + tripled.get());
  let mut_lastValue = 0;
  const dispose = reaction(
    () => sum.get(),
    (value) => { mut_lastValue = value; },
    { fireImmediately: true },
  );
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    runInAction(() => { state.counter = mut_i; });
  }
  dispose();
  return mut_lastValue;
};
```

Jotai

```ts
export const runBenchmark = (n: number): number => {
  const counterAtom = atom(0);
  const doubledAtom = atom((get) => get(counterAtom) * 2);
  const tripledAtom = atom((get) => get(counterAtom) * 3);
  const sumAtom = atom((get) => get(doubledAtom) + get(tripledAtom));
  const store = createStore();
  let mut_lastValue = store.get(sumAtom);
  const unsub = store.sub(sumAtom, () => { mut_lastValue = store.get(sumAtom); });
  for (let mut_i = 1; mut_i <= n; mut_i++) { store.set(counterAtom, mut_i); }
  unsub();
  return mut_lastValue;
};
```

Redux

```ts
export const runBenchmark = (n: number): number => {
  const counterSlice = createSlice({
    name: 'counter',
    initialState: { value: 0 },
    reducers: {
      set: (state, action: Readonly<{ payload: number }>) => {
        state.value = action.payload;
      },
    },
  });
  const store = configureStore({
    reducer: counterSlice.reducer,
    middleware: () => new Tuple(),
  });
  // eslint-disable-next-line unicorn/consistent-function-scoping
  const selectCounter = (state: Readonly<{ value: number }>): number => state.value;
  const selectDoubled = createSelector(selectCounter, (counter) => counter * 2);
  const selectTripled = createSelector(selectCounter, (counter) => counter * 3);
  const selectSum = createSelector(
    selectDoubled,
    selectTripled,
    (doubled, tripled) => doubled + tripled,
  );
  let mut_lastValue = selectSum(store.getState());
  store.subscribe(() => { mut_lastValue = selectSum(store.getState()); });
  for (let mut_i = 1; mut_i <= n; mut_i++) {
    store.dispatch(counterSlice.actions.set(mut_i));
  }
  return mut_lastValue;
};
```

Scenario: Deep Chain Throughput
The first two scenarios use fixed-size graphs. This scenario matches the Throughput interactive demo: a depth-M `scan` chain where each stage lerps toward the previous stage, with a combine that reads all outputs. K source updates are pushed synchronously per measurement — simulating K updates per animation frame.
- K (updates per measurement): 100, 500, 1000.
- M (chain depth): 50, 100, 200.
- Zustand, Valtio, and Redux are excluded (no mechanism to express a stateful `scan` chain with independent derived values).
- Measurements exceeding 5,000 ms are aborted.
Results
Section titled “Results”| Library | K=100, M=50 | K=500, M=50 | K=1000, M=50 | K=100, M=100 | K=500, M=100 | K=1000, M=100 | K=500, M=200 | K=1000, M=200 |
|---|---|---|---|---|---|---|---|---|
| SynState | 0.4 ms | 1.0 ms | 1.5 ms | 0.3 ms | 1.6 ms | 2.8 ms | 2.9 ms | 5.6 ms |
| RxJS | 3.3 ms | 14.6 ms | 31.1 ms | 14.2 ms | 66.7 ms | 135.0 ms | 335.3 ms | 678.4 ms |
| Jotai | 9.2 ms | 46.5 ms | 91.3 ms | 17.2 ms | 89.6 ms | 182.9 ms | 194.5 ms | 398.5 ms |
| MobX | 18.9 ms | 79.4 ms | 156.2 ms | 30.0 ms | 150.1 ms | 289.9 ms | 289.8 ms | 575.2 ms |
Key observations
- SynState scales linearly with both K and M — at K = 1000, M = 200, it finishes in 5.6 ms. Its push-based direct function calls have negligible per-stage overhead.
- RxJS takes 678 ms at K = 1000, M = 200 — 121× slower than SynState. As explained above, this is not a constant-factor difference but a quadratic asymptotic cost from `combineLatest`’s redundant emissions. The same blowup is visible in the interactive demo.
- Jotai takes 399 ms at K = 1000, M = 200 — 71× slower than SynState — due to `selectAtom`’s per-update overhead (epoch-based recomputation and dynamic dependency tracking on every propagation step).
- MobX takes 575 ms at K = 1000, M = 200 — 103× slower than SynState. This scenario requires `scan` (stateful accumulation: each stage needs its own previous value plus the new input). MobX’s `computed` cannot express this because it is a pure derivation with no access to its own previous output. The only option is `observable.box` + `reaction` (push-based), which incurs per-stage overhead from proxy tracking, dependency notification, and `autorun` re-reads. In contrast, the cascaded diamond scenario uses stateless derivations that `computed` can handle efficiently via lazy evaluation.
- At K = 500, M = 100 — a realistic scenario for 60fps animation with moderate graph complexity — SynState (1.6 ms) comfortably fits within the 16 ms frame time, while RxJS (67 ms), Jotai (90 ms), and MobX (150 ms) all far exceed it.
Scenario: Cascaded Diamond
This scenario demonstrates the exponential blowup of RxJS’s `combineLatest` when diamonds are chained in series. Each stage splits the previous output into two branches (fan-out 2) and recombines them, creating a chain of N binary diamonds.
- N (cascade depth): 2 to 20.
- K: number of source updates per measurement.
- Measurements exceeding 5,000 ms are aborted.
Results
Section titled “Results”| Library | N=2 | N=4 | N=6 | N=8 | N=10 | N=12 | N=14 | N=16 | N=18 | N=20 |
|---|---|---|---|---|---|---|---|---|---|---|
| SynState | 0.5 ms | 0.6 ms | 0.5 ms | 0.5 ms | 0.8 ms | 1.9 ms | 6.2 ms | 23.1 ms | 90.8 ms | 359.2 ms |
| RxJS | 0.2 ms | 0.5 ms | 2.1 ms | 11.1 ms | 60.2 ms | 373.9 ms | 2018.1 ms | > 5000 ms | > 5000 ms | > 5000 ms |
| Jotai | 1.5 ms | 1.9 ms | 2.5 ms | 5.3 ms | 7.2 ms | 13.0 ms | 35.9 ms | 116.4 ms | 414.5 ms | 1666.5 ms |
| MobX | 1.0 ms | 0.6 ms | 0.9 ms | 0.8 ms | 0.9 ms | 0.9 ms | 1.0 ms | 1.1 ms | 1.2 ms | 1.3 ms |
Key observations
- RxJS exhibits classic exponential growth (binary fan-out). It times out at N ≥ 16 (2^N emissions per source update × K updates). At N = 14, it already takes 2,018 ms — over 300× slower than SynState (6.2 ms).
- MobX stays under 1.3 ms even at N = 20, thanks to its lazy `computed` evaluation. MobX’s `computed` values are not eagerly re-evaluated when their dependencies change — they are only recomputed when read. The `reaction` reads only the final `computed`, which triggers a single lazy pass through the chain. Each `computed` in the chain is evaluated exactly once per update, yielding O(N) total work with minimal overhead. This contrasts sharply with the deep chain throughput scenario, where MobX is 103× slower than SynState — because that scenario uses a chain of `reaction`s (push-based), not `computed`s (pull-based). The performance characteristics depend heavily on which MobX primitive is used.
- SynState propagates in O(N) via depth-ordered traversal, but the benchmark includes graph construction cost in each `runBenchmark` call (creating observables with binary-search insertion into `propagationOrder`). This construction overhead grows with N and dominates at large depths. At N = 20, SynState takes 359 ms. In applications where the graph is constructed once and updated many times, the per-update cost is O(N), as shown in the deep chain throughput scenario.
- Jotai grows faster than O(N) due to its per-`store.set()` DFS overhead compounding with the number of derived atoms. At N = 20, it takes 1,667 ms.
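The 2^N emission count can be demonstrated with a hand-rolled sketch (a minimal model, not RxJS’s implementation). A glitch-prone two-input combine emits once per input emission, so each diamond stage doubles the emissions of the previous one:

```ts
// Minimal push-based stream (hand-rolled model, not a real library's API).
type Listener<T> = (v: T) => void;

class Stream<T> {
  private readonly listeners: Listener<T>[] = [];
  constructor(public value: T) {}
  subscribe(fn: Listener<T>): void { this.listeners.push(fn); }
  next(v: T): void {
    this.value = v;
    for (const fn of this.listeners) fn(v);
  }
}

const mapStream = <A, B>(src: Stream<A>, f: (a: A) => B): Stream<B> => {
  const out = new Stream(f(src.value));
  src.subscribe((v) => out.next(f(v)));
  return out;
};

// Glitch-prone combine: emits once per input emission (combineLatest semantics).
const combine2 = <A, B, C>(
  a: Stream<A>,
  b: Stream<B>,
  f: (a: A, b: B) => C,
): Stream<C> => {
  const out = new Stream(f(a.value, b.value));
  a.subscribe((v) => out.next(f(v, b.value)));
  b.subscribe((v) => out.next(f(a.value, v)));
  return out;
};

// Build a cascade of N binary diamonds: split into two branches, recombine.
const N = 8;
const source = new Stream(0);
let stage: Stream<number> = source;
for (let i = 0; i < N; i++) {
  const left = mapStream(stage, (x) => x * 2);
  const right = mapStream(stage, (x) => x * 3);
  stage = combine2(left, right, (l, r) => l + r);
}

let emissions = 0;
stage.subscribe(() => { emissions++; });
source.next(1); // ONE source update
console.log(emissions); // 256 = 2^8: emissions double at every diamond
```

Each stage receives e emissions, its two branches forward e each, and the combine emits on every branch emission, so the output produces 2e — hence 2^N emissions at the subscriber per source update.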
Scenario: Conditional Fan-Out
The previous scenarios test propagation throughput — every source update reaches the subscriber. This scenario tests a different pattern: the cost of updating an observable that is not in the subscription set — the case where dynamic dependency tracking is most advantageous.
The graph has B branches and a selector. The selector is fixed at 0 throughout the benchmark, so only branch[0] is “active”. The benchmark updates branch[1] (an inactive branch) K times.
The key difference between libraries is whether branch[1] has any observers (subscribers):
- Static subscriptions (SynState, RxJS): `combine` subscribes to all branches at construction time, so branch[1] has an observer. When branch[1] is updated, `combine` fires, `map` produces the same value as before, and equality comparison prevents downstream propagation. Cost scales with B because the combined array grows.
- Dynamic subscriptions (Jotai, MobX): the `computed` or derived atom subscribes only to observables that were actually accessed via `.get()` during execution. When selector = 0, only `branch[0].get()` is called, so branch[1]’s observer list is empty. Updating branch[1] has nothing to notify — nothing happens.
Results
| Library | B=2 | B=5 | B=10 | B=20 | B=50 | B=100 | B=200 | B=500 | B=1000 |
|---|---|---|---|---|---|---|---|---|---|
| SynState | 24.2 ms | 24.2 ms | 33.7 ms | 48.1 ms | 90.8 ms | 154.0 ms | 297.4 ms | 703.3 ms | 1466.0 ms |
| RxJS | 5.6 ms | 5.3 ms | 5.1 ms | 5.4 ms | 6.1 ms | 9.2 ms | 22.3 ms | 37.2 ms | 61.4 ms |
| Jotai | 69.6 ms | 69.8 ms | 73.7 ms | 68.8 ms | 72.1 ms | 69.0 ms | 70.5 ms | 72.5 ms | 69.9 ms |
| MobX | 2.3 ms | 2.5 ms | 2.4 ms | 2.4 ms | 2.4 ms | 2.4 ms | 2.4 ms | 2.4 ms | 2.7 ms |
Key observations
Section titled “Key observations”-
MobX is the fastest in this scenario.
computed(() => branches[selector.get()].get())subscribes only toselectorandbranches[0]at runtime, sobranches[1]’s observer list is empty. Updatingbranches[1]has no one to notify — the cost is constant at ~2.4 ms regardless of B. -
SynState’s cost scales linearly with B because
combinesubscribes to all branches at construction time, so any branch update fires the combine. The linear trend is clearly visible in the chart: from 24 ms at B=2 to 1,466 ms at B=1000 (~1.5μs per update per branch). -
RxJS uses the same static
combineLatestapproach. Its cost also scales with B but more slowly than SynState, becausecombineLatestdoes not perform equality comparison — it always emits, and the subscriber does minimal work (assigning a value). -
Jotai uses dynamic subscriptions like MobX —
resultAtomdoes not subscribe to the inactive branch, so no recomputation is triggered. However,store.set()itself performs bookkeeping operations (invalidation checks) inside the atom store, producing a constant overhead of ~70 ms regardless of B. This makes Jotai slower than SynState for B≤50.
Implementation details
SynState

```ts
export const runBenchmark = (k: number, branchCount: number): number => {
  const [selector] = createState(0);
  const branches: DeepReadonly<
    { source: Observable<number>; set: (v: number) => void }[]
  > = Arr.zeros(asUint32(branchCount)).map(() => {
    const [source, setSource] = createState(0);
    return { source, set: setSource };
  });
  const allSources = [
    selector,
    ...branches.map((b) => b.source),
  ] as const satisfies NonEmptyArray<Observable<number>>;
  const result = combine(allSources).pipe(
    map(([selectorValue, ...branchesValue]) => branchesValue[selectorValue] ?? 0),
  );
  let mut_lastValue = 0;
  const subscription = result.subscribe((v) => { mut_lastValue = v; });
  // Update an INACTIVE branch (branch 1, while selector = 0)
  const inactiveBranch = branches[1];
  if (inactiveBranch === undefined) { throw new Error('need at least 2 branches'); }
  for (let mut_i = 1; mut_i <= k; mut_i++) { inactiveBranch.set(mut_i); }
  subscription.unsubscribe();
  return mut_lastValue;
};
```

RxJS

```ts
export const runBenchmark = (k: number, branchCount: number): number => {
  const selector = new BehaviorSubject(0);
  const branches: readonly BehaviorSubject<number>[] = Arr.zeros(
    asUint32(branchCount),
  ).map(() => new BehaviorSubject(0));
  const result = combineLatest([selector, ...branches]).pipe(
    map((values) => {
      const sel = values[0];
      return values[sel + 1] ?? 0;
    }),
  );
  let mut_lastValue = 0;
  const subscription = result.subscribe((v) => { mut_lastValue = v; });
  // Update an INACTIVE branch (branch 1, while selector = 0)
  const inactiveBranch = branches[1];
  if (inactiveBranch === undefined) { throw new Error('need at least 2 branches'); }
  for (let mut_i = 1; mut_i <= k; mut_i++) { inactiveBranch.next(mut_i); }
  subscription.unsubscribe();
  return mut_lastValue;
};
```

MobX

```ts
export const runBenchmark = (k: number, branchCount: number): number => {
  const selector = observable.box(0);
  const branches: readonly ReturnType<typeof observable.box<number>>[] =
    Arr.zeros(asUint32(branchCount)).map(() => observable.box(0));
  // Dynamic dependency: only tracks the branch selected by selector
  const result = computed(() => {
    const sel = selector.get();
    const target = branches[sel];
    if (target === undefined) { return 0; }
    return target.get();
  });
  let mut_lastValue = 0;
  const dispose = reaction(
    () => result.get(),
    (value) => { mut_lastValue = value ?? 0; },
    { fireImmediately: true },
  );
  // Update an INACTIVE branch (branch 1, while selector = 0)
  const inactiveBranch = branches[1];
  if (inactiveBranch === undefined) { throw new Error('need at least 2 branches'); }
  for (let mut_i = 1; mut_i <= k; mut_i++) { inactiveBranch.set(mut_i); }
  dispose();
  return mut_lastValue;
};
```

Jotai

```ts
export const runBenchmark = (k: number, branchCount: number): number => {
  const selectorAtom = atom(0);
  const branchAtoms: readonly WritableAtom<
    number,
    Mutable<readonly [number]>,
    void
  >[] = Arr.zeros(asUint32(branchCount)).map(() => atom(0));
  // Dynamic dependency: only reads the branch selected by selectorAtom
  const resultAtom = atom((get) => {
    const sel = get(selectorAtom);
    const targetAtom = branchAtoms[sel];
    if (targetAtom === undefined) { return 0; }
    return get(targetAtom);
  });
  const store = createStore();
  let mut_lastValue = store.get(resultAtom);
  const unsub = store.sub(resultAtom, () => {
    mut_lastValue = store.get(resultAtom);
  });
  // Update an INACTIVE branch (branch 1, while selector = 0)
  const inactiveBranchAtom = branchAtoms[1];
  if (inactiveBranchAtom === undefined) { throw new Error('need at least 2 branches'); }
  for (let mut_i = 1; mut_i <= k; mut_i++) { store.set(inactiveBranchAtom, mut_i); }
  unsub();
  return mut_lastValue;
};
```

Methodology
- Warmup: 5 rounds (discarded).
- Measured: 20 rounds.
- n: 100,000 updates per round.
- Timing: `performance.now()` (sub-millisecond precision).
- Statistics: median, min, max, p95, ops/sec (based on median).
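A minimal sketch of what such a harness could look like. The helper names (`measure`, `median`, `p95`, `WARMUP`) are illustrative assumptions, not the benchmark’s actual code:

```ts
// Hypothetical measurement harness (names and structure are assumptions).
const WARMUP = 5;    // warmup rounds, discarded
const MEASURED = 20; // measured rounds
const N = 100_000;   // updates per round

const median = (xs: readonly number[]): number => {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 === 1 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
};

// p95 via nearest-rank: smallest sample covering 95% of the distribution.
const p95 = (xs: readonly number[]): number => {
  const s = [...xs].sort((a, b) => a - b);
  return s[Math.min(s.length - 1, Math.ceil(0.95 * s.length) - 1)];
};

const measure = (
  run: (n: number) => number,
): { median: number; min: number; max: number; p95: number; opsPerSec: number } => {
  for (let i = 0; i < WARMUP; i++) run(N); // warmup, discarded
  const samples: number[] = [];
  for (let i = 0; i < MEASURED; i++) {
    const t0 = performance.now();
    run(N);
    samples.push(performance.now() - t0);
  }
  const med = median(samples);
  return {
    median: med,
    min: Math.min(...samples),
    max: Math.max(...samples),
    p95: p95(samples),
    opsPerSec: (N / med) * 1000, // updates per second, derived from the median
  };
};
```

Ops/sec follows directly from the median: for example, 100,000 updates in a 13.39 ms median round is 100,000 / 0.01339 s ≈ 7.5 million ops/sec, matching the SynState row of the first table.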
Fairness Notes
- Redux: default middleware (thunk, serializable check, immutability check) is disabled via `middleware: () => new Tuple()`. These are development-time checks that are tree-shaken in production builds.
- Zustand (Derived Chain only): derived values are computed inline in the selector (`counter * 2 * 2`) since Zustand does not have a separate derived-value primitive. Zustand is excluded from the Diamond Dependency scenario for the same reason — there is no mechanism to define independent derived values and combine them.
- Valtio (Derived Chain only): like Zustand, derived values are computed inline inside the `subscribe` callback (`state.counter * 2 * 2`) since Valtio’s `derive` utility is a separate package and the vanilla API does not provide a built-in derived-value primitive. The third argument `true` enables synchronous notification so that the subscriber fires on every mutation (Valtio batches asynchronously by default). Valtio is excluded from the Diamond Dependency scenario for the same reason.
- RxJS (Diamond Dependency): `combineLatest` fires the subscriber twice per source update due to the glitch problem. The benchmark measures wall-clock time including this extra work, which reflects the real-world cost of diamond dependencies in RxJS.
- MobX: `reaction` is used instead of `autorun` to match the other libraries’ subscribe-and-record pattern. `runInAction` wraps each mutation as required by MobX strict mode.
- Jotai: `store.sub()` does not pass the current value to the callback, so an additional `store.get()` call is made inside the listener. The initial value is also read via `store.get()` before subscribing.
- All libraries create their state, derived values, and subscriptions inside the benchmark function. Setup cost is included in the measurement.
Analysis: Why Jotai Is ~29× Slower
Jotai is the slowest library in both scenarios despite having glitch-free semantics. The root cause is per-update framework overhead — work that Jotai’s store performs on every `store.set()` call, regardless of graph size.
Per-update cost breakdown
Section titled “Per-update cost breakdown”When store.set(counterAtom, i) is called, Jotai’s store executes these steps:
- Value update — write the new value and compare with
Object.is. . - Invalidation pass (
invalidateDependents) — DFS traversal of all mounted dependents, marking each as “needs recomputation” via epoch number tracking in aMap<Atom, EpochNumber>. where = number of affected atoms. - Topological sort (
recomputeInvalidatedAtoms) — a second DFS post-order traversal usingWeakSetfor visit tracking, producing a correctly ordered list of atoms to recompute. . - Recomputation — for each atom in topological order, execute its
readfunction. Inside the read function, eachget(dep)call: (a) looks up the dependency’sAtomState, (b) compares epoch numbers to check staleness, (c) records the dependency in aMapfor future invalidation. The cost is — the sum of direct dependencies across all recomputed atoms, not the product. In a chain of N atoms each with 1 dependency, this is . - Subscriber notification — fire listener callbacks. Jotai’s
store.sub()does not pass the new value, so the callback must callstore.get()again, triggering an additional cache lookup. per listener, but adds one extra traversal.
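The invalidate-then-recompute shape of these steps can be illustrated with a hand-rolled sketch. This is an assumed simplification of the mechanism, not Jotai’s actual internals (the real store also handles mounting, dynamic dependencies, and async atoms):

```ts
// Hand-rolled sketch of epoch-tracked invalidation (NOT Jotai's real code).
interface Cell {
  name: string;
  value: number;
  epoch: number;           // bumped on every value change (step 1)
  compute?: () => number;  // derived cells recompute via this
  dependents: Cell[];      // edges walked by the invalidation DFS (step 2)
}

const log: string[] = [];

// Step 2: DFS over dependents, collecting dirty cells.
const invalidate = (cell: Cell, dirty: Set<Cell>): void => {
  for (const dep of cell.dependents) {
    if (!dirty.has(dep)) {
      dirty.add(dep);
      log.push(`invalidate ${dep.name}`);
      invalidate(dep, dirty);
    }
  }
};

const set = (cell: Cell, value: number): void => {
  if (Object.is(cell.value, value)) return; // step 1: equality short-circuit
  cell.value = value;
  cell.epoch++;
  const dirty = new Set<Cell>();
  invalidate(cell, dirty);
  // Steps 3-4: recompute in dependency order (insertion order suffices
  // for a linear chain; the real store does a post-order DFS).
  for (const d of dirty) {
    const next = d.compute!();
    if (!Object.is(d.value, next)) { d.value = next; d.epoch++; }
    log.push(`recompute ${d.name} -> ${d.value}`);
  }
};

const counter: Cell = { name: 'counter', value: 0, epoch: 0, dependents: [] };
const doubled: Cell = {
  name: 'doubled', value: 0, epoch: 0, dependents: [],
  compute: () => counter.value * 2,
};
const quadrupled: Cell = {
  name: 'quadrupled', value: 0, epoch: 0, dependents: [],
  compute: () => doubled.value * 2,
};
counter.dependents.push(doubled);
doubled.dependents.push(quadrupled);

set(counter, 5);
console.log(quadrupled.value); // 20
```

Even in this stripped-down form, every `set` performs a graph traversal plus per-cell bookkeeping before any user code runs — the overhead that a pre-wired push chain avoids entirely.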
Comparison with SynState
| Step | SynState | Jotai |
|---|---|---|
| Graph resolution | Once at construction time | Every store.set() (steps 2–3) |
| Propagation | Direct function calls through pre-built subscription chain | read function execution with getter spy + Map/WeakSet operations |
| Dependency tracking | Static (set at construction time) | Dynamic (rediscovered on every evaluation via getter spy) |
| Staleness check | Not needed (push-based) | Epoch number comparison per get() call |
| Subscriber value delivery | Value passed directly to callback | Callback must call store.get() to read the value |
Where the constant-factor difference is visible
The benchmark loop runs 100,000 updates to make the per-update difference measurable. But these constant-factor costs also matter in real applications with high-frequency events.
For example, mousemove and pointermove events fire dozens of times per second. If each event triggers a reactive pipeline — such as a drag-and-drop handler, a tooltip position, or a canvas animation — the per-update overhead is multiplied by the event rate. At higher chain depths, the gap becomes dramatic: in the Throughput demo (depth-100 chain, 500 updates per frame), Jotai’s per-frame cost exceeds the 16 ms budget and visibly stutters, while SynState stays well within budget.
The root cause of the per-update gap:
- SynState: `setCounter(i)` → call `doubled`’s map function → call `quadrupled`’s map function → call the subscriber. Three direct function calls with no framework bookkeeping.
- Jotai: `store.set(counterAtom, i)` → DFS invalidation (WeakSet alloc + Map writes) → DFS topological sort (WeakSet alloc + array push + reverse) → recompute `doubledAtom` (Map lookup + epoch check + Map write) → recompute `quadrupledAtom` (same) → flush callback → `store.get(quadrupledAtom)` (cache lookup).
Over 100,000 iterations, SynState’s three function calls dominate, while Jotai’s per-update bookkeeping (Map/WeakSet allocations, epoch comparisons, two DFS traversals) accumulates into the observed ~29× difference.
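The three-call push chain can be sketched in a few lines. This is an illustrative model of pre-wired push propagation, not SynState’s actual API:

```ts
// Hand-rolled model of a pre-wired push chain (not SynState's real API).
type Subscriber = (v: number) => void;

let calls = 0; // counts function invocations per update

// A map stage is just a closure that transforms and calls the next stage.
const makeMapNode =
  (f: (x: number) => number, downstream: Subscriber): Subscriber =>
  (v) => {
    calls++;
    downstream(f(v)); // direct call into the next stage, no bookkeeping
  };

let lastValue = 0;
const subscriber: Subscriber = (v) => {
  calls++;
  lastValue = v;
};

// counter -> doubled -> quadrupled -> subscriber, wired once at construction.
const quadrupled = makeMapNode((x) => x * 2, subscriber);
const doubled = makeMapNode((x) => x * 2, quadrupled);
const setCounter = (v: number): void => doubled(v);

calls = 0;
setCounter(7); // one update
console.log(lastValue, calls); // 28, 3 — exactly three function calls per update
```

Because the chain is wired once at construction, each update is pure function invocation; there is no per-update graph resolution for a JIT to deoptimize around, which is why the constant factor stays small even over 100,000 iterations.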