Interactive Demos

See the Diamond Dependency Glitch in Action

When two derived values depend on the same source (a diamond dependency), some reactive libraries emit inconsistent intermediate states — known as glitches. This demo lets you see the difference with your own eyes.

Click and drag on each canvas to draw a trail. Each dot is placed by the library’s reactive pipeline using the same diamond dependency graph:

mousePos (source: {x, y})
├── derivedX = map(pos => pos.x)
├── derivedY = map(pos => pos.y)
└── combine([derivedX, derivedY])
    └── draw dot at {x, y} + check consistency

Each emitted position is compared to the latest mouse position:

  • Blue dot — emitted (x, y) matches the actual mouse position (consistent).
  • Red dot — one axis updated but the other didn’t yet (glitch — the dot appears off the drag path).
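To make the glitch mechanism concrete, here is a minimal plain-TypeScript sketch of an eager push pipeline over the same diamond. The `Cell` class is an illustrative stand-in, not any of these libraries' actual APIs:

```typescript
type Listener<T> = (value: T) => void;

// A tiny eagerly-pushing reactive cell (illustration only).
class Cell<T> {
  private listeners: Listener<T>[] = [];
  constructor(public value: T) {}
  subscribe(fn: Listener<T>) { this.listeners.push(fn); }
  set(value: T) {
    this.value = value;
    for (const fn of this.listeners) fn(value); // eager push to subscribers
  }
}

// The diamond: mousePos -> derivedX / derivedY -> combine
const mousePos = new Cell({ x: 0, y: 0 });
const derivedX = new Cell(0);
const derivedY = new Cell(0);
mousePos.subscribe(p => derivedX.set(p.x));
mousePos.subscribe(p => derivedY.set(p.y));

const emissions: { x: number; y: number }[] = [];
// Eager combine: fires on *each* input change, like combineLatest.
const fire = () => emissions.push({ x: derivedX.value, y: derivedY.value });
derivedX.subscribe(fire);
derivedY.subscribe(fire);

mousePos.set({ x: 10, y: 20 });
// emissions → [{x: 10, y: 0}, {x: 10, y: 20}]
// The first emission is the glitch: x has updated but y is still stale.
```

A glitch-free engine would instead resolve both branches before notifying the combine, producing a single `{x: 10, y: 20}` emission.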
| Library  | Glitches | Why |
| -------- | -------- | --- |
| SynState | 0        | Depth-based topological update: all branches resolve atomically before notifying subscribers |
| RxJS     | Many     | combineLatest fires on each input change, so derivedX updates before derivedY, causing intermediate emissions with stale data |
| Jotai    | 0        | Pull-based derivation ensures consistency: derived atoms always read the latest values |
| MobX     | 0        | computed values are lazily evaluated inside a batched reaction: all inputs are up-to-date when read |

With only 2 derived values, all four libraries are fast enough that the performance difference is negligible. But the extra emissions from RxJS become a serious performance problem as the graph grows — see the deep dependency chain demo below.

In real applications, diamond dependencies appear everywhere — computed styles from shared state, derived UI values, form validations. Glitches cause:

  • Visual flicker — UI renders an impossible intermediate state.
  • Wasted work — each extra emission triggers unnecessary side effects. With N inputs to combineLatest, a single source update causes N+1 emissions; this scales to an O(N²) performance cliff as shown in the demo below.
  • Logic bugs — side effects fire with inconsistent data.

SynState eliminates these problems at the library level, with no manual workarounds needed.


Deep Dependency Chain: Propagation Under Load

This demo measures the serial dependency chain propagation performance of each reactive framework. All four libraries build an identical reactive graph topology: a depth-N chain of stateful nodes where each depends on the previous, with a final combine/derive that reads all N+1 outputs.

A snake tail follows the mouse: each segment is a stage in the chain that smoothly follows the previous stage via linear interpolation (lerp). To draw the full snake, all N+1 outputs (head + N stages) are collected into a single combine.

The diagram shows N = 3. In the demo, N is adjustable via a slider. Increase the chain depth and watch for stutter and rising μs/update.
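The lerp chain can be sketched in plain TypeScript. The names `propagate` and `stages` are assumptions for illustration; the demo's actual code may differ:

```typescript
// Linear interpolation: move a fraction t of the way from a to b.
const lerp = (a: number, b: number, t: number) => a + (b - a) * t;

const N = 3;                                  // chain depth (matches the diagram)
let head = { x: 0, y: 0 };                    // source: mouse position
const stages = Array.from({ length: N }, () => ({ x: 0, y: 0 }));

function propagate(mouse: { x: number; y: number }) {
  head = mouse;
  let prev = head;
  for (const s of stages) {                   // stage i eases toward stage i-1
    s.x = lerp(s.x, prev.x, 0.5);
    s.y = lerp(s.y, prev.y, 0.5);
    prev = s;
  }
  return [head, ...stages];                   // all N+1 outputs for the combine
}

const points = propagate({ x: 8, y: 0 });     // one mouse move
// points.map(p => p.x) → [8, 4, 2, 1]
```

Each mouse event thus touches every stage once: the correct total work per event is O(N), which is what the glitch-free libraries achieve.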

  • At low N (10–50): all four libraries render smoothly.
  • At high N: RxJS starts stuttering first; check the Total updates counter to see why.
  • SynState, Jotai, and MobX stay smoother at higher N, but Jotai and MobX show higher μs/update due to per-update overhead in their reactive engines.

Check the Total updates counter: RxJS shows roughly N+1 times more updates than SynState for the same mouse movement. combineLatest fires every time any of its N+1 inputs emits. On a single mouse move, the scan chain propagates sequentially:

  1. source emits → combineLatest fires (only source updated, scans still hold old values)
  2. scan₁ emits → combineLatest fires again
  3. scan₂ emits → fires again
  4. … for all N stages

Each of these N+1 firings triggers a full canvas redraw of N+1 points → O(N²). This is the glitch problem from the first demo manifesting as a performance cliff.
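The sequential ripple can be counted with a small plain-TypeScript sketch. `eagerChain` is an illustrative stand-in for the eager push behavior, not RxJS itself:

```typescript
// values[0] is the source; values[1..N] are the scan stages.
function eagerChain(N: number) {
  const values = new Array(N + 1).fill(0);
  let fires = 0;                       // subscriber (combine) invocations
  const combineFires = () => { fires++; };
  return {
    push(v: number) {
      values[0] = v;
      combineFires();                  // source emitted → combine fires
      for (let i = 1; i <= N; i++) {   // each scan stage emits in turn…
        values[i] = values[i - 1] + 1;
        combineFires();                // …and the eager combine fires again
      }
      return fires;
    },
  };
}

const fires = eagerChain(10).push(1);  // one mouse event, N = 10
// fires → 11, i.e. N + 1 subscriber calls for a single source update
```

Since each firing redraws all N+1 points, total work per event is (N+1)², versus a single firing and O(N) work in an atomic engine.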

| Library  | Graph depth | Propagation    | Subscriber calls per mouse event | Total work per event  |
| -------- | ----------- | -------------- | -------------------------------- | --------------------- |
| SynState | N           | push (atomic)  | 1 (atomic combine)               | O(N)                  |
| Jotai    | N           | pull (lazy)    | 1 (derived read)                 | O(N), higher constant |
| MobX     | N           | push (batched) | 1 (autorun)                      | O(N), higher constant |
| RxJS     | N           | push (eager)   | N+1 (combineLatest glitch)       | O(N²)                 |

SynState, Jotai, and MobX all fire the subscriber once per mouse event, making them all O(N). But the per-update constant factor differs significantly: SynState's direct function calls have much less overhead than Jotai's DFS recomputation or MobX's proxy-based tracking. The next demo isolates this difference.


Throughput: Per-Update Overhead Under Load

This demo removes RxJS (whose O(N²) blowup would dominate) and focuses on the three glitch-free libraries to make the constant-factor difference visible.

A ball automatically orbits in a circle. Each canvas runs the same depth-M reactive chain as the snake demo above. The difference: instead of 1 source update per mouse event, K source updates are pushed through the chain every animation frame in a tight synchronous loop.

each animation frame:
  for k in 0..K-1:
    update source position (micro-step along orbit)
    → propagate through depth-M scan chain
    → subscriber records latest points
  draw snake once

This amplifies the per-update overhead by a factor of K. When the overhead exceeds 16ms, the animation drops frames and stutters.
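The driver loop above can be sketched in TypeScript. `frame` and the orbit math are illustrative assumptions; in the demo, `propagate` would be one of the library chains:

```typescript
const K = 500;                      // source updates per frame
let t = 0;                          // orbit angle

function frame(propagate: (x: number, y: number) => void): number {
  const start = performance.now();
  for (let k = 0; k < K; k++) {     // K micro-steps along the orbit
    t += 0.001;
    propagate(Math.cos(t), Math.sin(t));
  }
  return performance.now() - start; // > 16 ms here means dropped frames
}

// With a no-op chain the loop itself costs almost nothing; any time
// above that baseline is the library's per-update overhead × K.
const ms = frame(() => {});
```

Because the K updates are synchronous within one frame, the library cannot amortize them across animation frames; its raw per-update cost is what shows up in ms/frame.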

  • At low K (1–100): all three libraries animate smoothly.
  • Increase K toward 500–1000: Jotai’s and MobX’s ms/frame climbs past 16ms and the animation visibly stutters, while SynState remains smooth.
  • The ms/frame indicator turns red when it exceeds the 16ms (60fps) threshold.

Each of the K source updates triggers the full update pipeline:

Jotai:

  1. store.set() → DFS invalidation of M dependents → O(M)
  2. DFS topological sort to determine recomputation order → O(M)
  3. Recompute each selectAtom with epoch checks and dynamic dependency tracking → O(M)

Total per frame: K × O(M) with high constant factor (Map/WeakSet allocations, epoch comparisons, two DFS traversals per tick).
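The version-check idea behind this pull model can be sketched in plain TypeScript. This illustrates the general technique only, not Jotai's actual internals:

```typescript
// A writable source that bumps a version number on every write.
class Source<T> {
  version = 0;
  constructor(private value: T) {}
  get() { return this.value; }
  set(v: T) { this.value = v; this.version++; }
}

let recomputes = 0; // counts how often the derived function actually runs

// A pull-based derived value: recomputes only when the dependency's
// version changed since the last read; otherwise returns the cache.
class Derived<T, D> {
  private cached!: T;
  private seen = -1;                        // dep version at last compute
  constructor(private dep: Source<D>, private fn: (d: D) => T) {}
  get(): T {
    if (this.dep.version !== this.seen) {   // stale → recompute
      this.cached = this.fn(this.dep.get());
      this.seen = this.dep.version;
      recomputes++;
    }
    return this.cached;                     // fresh → cache hit
  }
}

const src = new Source(2);
const doubled = new Derived(src, n => n * 2);
doubled.get();                 // computes → 4
doubled.get();                 // cached, no recompute
src.set(5);
const result = doubled.get();  // recomputes → 10
// result → 10, recomputes → 2
```

The consistency guarantee falls out naturally: because the derived value is read, not pushed, it can never observe a half-updated diamond. The cost is the bookkeeping on every read, which is the constant factor this demo surfaces.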

MobX:

  1. head.set() inside runInAction propagates through M stages → O(M)
  2. Each stage.set() notifies MobX’s dependency tracking system
  3. After the action, autorun re-reads all M stages → O(M)

Total per frame: K × O(M) with per-stage overhead from MobX’s proxy-based tracking and reaction scheduling.
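The batching idea can be sketched in plain TypeScript. `runInAction` and `set` here are simplified stand-ins that mimic the scheduling behavior, not MobX's real implementation:

```typescript
// Batched push: writes inside an "action" only mark state dirty;
// reactions run once at the end, never observing intermediate values.
let dirty = false;
let inAction = false;
const reactions: (() => void)[] = [];

function set(write: () => void) {
  write();
  dirty = true;
  if (!inAction) flush();     // outside an action, react immediately
}

function runInAction(body: () => void) {
  inAction = true;
  body();                     // may perform many set() calls
  inAction = false;
  flush();                    // reactions fire exactly once
}

function flush() {
  if (!dirty) return;
  dirty = false;
  for (const r of reactions) r();
}

let runs = 0;
const state = { x: 0, y: 0 };
reactions.push(() => runs++); // stand-in for an autorun

runInAction(() => {
  set(() => { state.x = 1; });
  set(() => { state.y = 2; });
});
// runs → 1: a single reaction for two writes, with both values fresh
```

Batching prevents glitches at the subscriber, but each write still pays for dependency-tracking notifications, which is where the per-stage constant comes from.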

SynState’s push-based engine propagates through the same chain via direct function calls with the propagation order resolved once at construction time — no per-update graph traversal, no dynamic dependency tracking, no epoch bookkeeping.
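That construction-time approach can be sketched generically in plain TypeScript (an illustration of the technique; SynState's actual API is not shown here):

```typescript
type Update = () => void;

// Build a depth-M chain where the propagation order is resolved once,
// at construction, into a flat array of update functions.
function buildChain(M: number) {
  const values = new Array(M + 1).fill(0); // [source, stage1..stageM]
  const order: Update[] = [];
  for (let i = 1; i <= M; i++) {
    // For a chain, topological order is simply 1..M; computed here once.
    order.push(() => { values[i] = values[i - 1] + 1; });
  }
  return {
    set(v: number) {
      values[0] = v;
      for (const run of order) run();      // direct calls: no graph
    },                                     // traversal, no bookkeeping
    get: (i: number) => values[i],
  };
}

const chain = buildChain(100);
chain.set(1);
// chain.get(100) → 101: one write propagates the full depth-100 chain
```

Per update this is just M plain function calls, which is why the constant factor stays low even at K = 500.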

| Library  | Per-update work                                | K=500, M=100 |
| -------- | ---------------------------------------------- | ------------ |
| SynState | 100 direct function calls                      | ~2–5 ms      |
| Jotai    | 2× DFS(100) + 100 epoch-checked recomputations | ~50–100 ms   |
| MobX     | 100 observable.set + autorun re-read           | ~30–80 ms    |