# Making MobX @computed Lazy: A 97-Line Patch with a Surprising Amount of Care
Most performance bugs are loud. Profilers light up, flame graphs grow tall pillars, and a hot loop somewhere screams for attention.
The bug in MobX issue #4616 was the opposite. Quiet, polite, easy to ignore. It just wasted a small amount of memory and a small amount of CPU for every @computed getter on every instance you ever constructed — whether you read the getter or not.
This is the story of that issue, the fix that landed in mobxjs/mobx#4639, and the quiet little contracts inside MobX that made a "just be lazy" patch much more interesting than it sounds.
## A 60-second tour of MobX
If you've never used MobX, here's the shortest possible mental model.
MobX is a state-management library for JavaScript and TypeScript. You mark some pieces of state as observable, you mark some pieces of derived state as computed, and you mark side effects (like rendering a React component or writing to localStorage) as reactions.
```ts
import { observable, computed } from "mobx"

type Item = { price: number }

class Cart {
    @observable accessor items: Item[] = []

    @computed get total() {
        return this.items.reduce((s, i) => s + i.price, 0)
    }
}
```
When you read cart.total from inside a reaction, MobX silently records that the reaction depends on items. The next time items changes, MobX re-runs the reaction. You never wire any of that by hand.
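That read-and-record loop can be sketched in a few self-contained lines. Everything below is invented for illustration: this `box` and `autorun` are toy stand-ins, not MobX's real implementations.

```typescript
// A minimal sketch of MobX-style dependency tracking. Reading a cell
// inside an effect silently records the dependency; writing the cell
// re-runs every effect that ever read it.
type Effect = () => void

let currentEffect: Effect | null = null

function box<T>(value: T) {
    const subscribers = new Set<Effect>()
    return {
        get(): T {
            // Reading inside an effect records the dependency.
            if (currentEffect) subscribers.add(currentEffect)
            return value
        },
        set(next: T) {
            value = next
            // Writing re-runs every subscribed effect.
            subscribers.forEach(effect => effect())
        }
    }
}

function autorun(effect: Effect) {
    currentEffect = effect
    try {
        effect() // first run, with tracking enabled
    } finally {
        currentEffect = null
    }
}

// Usage: the effect depends on `price` only because it read it.
const price = box(3)
const seen: number[] = []
autorun(() => seen.push(price.get() * 2))
price.set(4)
// seen is now [6, 8]: one entry from the first run, one from the update
```

The real library adds unsubscription, batching, and cycle detection on top of this, but the core trick is exactly this implicit read-recording.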
The thing that makes @computed interesting — and the thing this PR is about — is that under the hood every decorated getter is backed by a small object called a `ComputedValue`. It owns the cached result, the dependency list, the dirty flag, and the bookkeeping that makes the "recompute only when something actually changed" trick work.
That object is small. But you can have a *lot* of them.
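As a rough feel for what such an object owns, here is a toy computed cell with a cache and a dirty flag. The class and its names are invented for illustration; the real `ComputedValue` also tracks dependencies, observers, and suspension, which this sketch skips.

```typescript
// A toy computed cell: cached result, dirty flag, recompute on demand.
// Invented for illustration; not MobX's actual ComputedValue.
class ToyComputed<T> {
    private cache: T | undefined
    private dirty = true
    computeCount = 0

    constructor(private derive: () => T) {}

    // Whatever tracks dependencies calls this when an input changes.
    invalidate() {
        this.dirty = true
    }

    get(): T {
        if (this.dirty) {
            this.cache = this.derive()
            this.computeCount++
            this.dirty = false
        }
        return this.cache as T
    }
}

// Usage: repeated reads hit the cache until something invalidates it.
let price = 3
const total = new ToyComputed(() => price * 2)
total.get()        // 6, computes
total.get()        // 6, served from cache
price = 4
total.invalidate()
total.get()        // 8, recomputes; computeCount is now 2
```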
## The issue: 3,000 ComputedValues nobody asked for
On 2026-03-04, Adam Pietrasiak (@pie6k) filed issue #4616 with a clean observation.
The 2022.3 stage-3 `@computed` decorator was constructing a `ComputedValue` for every decorated getter on every instance, the moment the instance was created. The relevant code lived in `computedannotation.ts`:
```ts
addInitializer(function () {
    const adm = asObservableObject(this)[$mobx]
    const options = { ...ann.options_, get, context: this }
    options.name ||= ...
    adm.values_.set(key, new ComputedValue(options))
})
```
Adam's argument was almost arithmetic:
Say you have 1,000 instances, each with 5 `@computed` getters. Say in practice each instance only ever uses 2 out of 5. That's **3,000 `ComputedValue` objects** created, held in a Map, and eventually garbage collected — for getters that were never read.
Nothing was *broken*. Tests passed. Apps shipped. The cost was just sitting there, invisible, on every new Cart(), new Order(), new ViewModel(). A quiet tax on construction, paid in full whether you used the service or not.
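The same arithmetic, spelled out as a runnable sketch (the counts are the issue's hypothetical numbers, not measurements):

```typescript
// Back-of-envelope version of the waste described in issue #4616:
// 1,000 instances, 5 computed getters each, only 2 ever read.
const instances = 1_000
const gettersPerInstance = 5
const gettersActuallyRead = 2

const eagerAllocations = instances * gettersPerInstance // 5,000 built up front
const neededAllocations = instances * gettersActuallyRead // 2,000 ever used
const wasted = eagerAllocations - neededAllocations // 3,000 never read

console.log(`ComputedValues never read: ${wasted}`)
```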
And the fix Adam sketched was the obvious one: don't allocate the ComputedValue until somebody actually reads the getter.
Obvious is not the same as easy.
## Why "just be lazy" is harder than it sounds
If @computed were a leaf abstraction with one entry point, this would be a five-line patch. It isn't.
A ComputedValue for a decorated getter is touched by *several* parts of MobX, and each of them quietly assumes the value already lives in adm.values_:
- The decorator getter wrapper — when you read `cart.total`, the wrapper calls `this[$mobx].getObservablePropValue_(key)`, which does `values_.get(key)!.get()`. That `!` is load-bearing — it's an assertion that the entry exists.
- `isObservableProp(o, "total")` — used by app code and tooling to ask "is this property reactive?" — checks `values_.has(property)`.
- `getAtom(o, "total")` — used by `observe`, by MobX Devtools, and by anything that wants the underlying observable for a key — also reaches into `values_`.
- `setObservablePropValue_` — yes, you can assign to a computed if it has a custom setter. Same lookup.
If you simply remove the eager construction, every one of those paths starts returning `undefined` for getters whose owners haven't read them yet. `isObservableProp` lies. `observe` throws. The Just Works contract breaks.
So the fix isn't "be lazy." The fix is "be lazy *and* keep four other invariants intact while pretending nothing happened."
## The shape of the fix
The patch introduces a second map on ObservableObjectAdministration that lives next to values_:
```ts
lazyComputedKeys_: undefined | Map<PropertyKey, () => ComputedValue<any>>
```
Each entry is a tiny factory closure — a zero-argument function that, when called, builds the real ComputedValue with the right options, name, and this binding.
The decorator's addInitializer no longer builds the ComputedValue. It builds the factory and parks it in the lazy map:
```ts
addInitializer(function () {
    const adm = asObservableObject(this)[$mobx]
    const target = this
    ;(adm.lazyComputedKeys_ ??= new Map()).set(key, () => {
        const options = { ...ann.options_, get, context: target }
        options.name ||= __DEV__
            ? `${adm.name_}.${key.toString()}`
            : `ObservableObject.${key.toString()}`
        return new ComputedValue(options)
    })
})
```
That's the whole "be lazy" part. A closure costs roughly one allocation; a ComputedValue costs several plus the work its constructor does. For a getter that's never read, the closure is *all* you ever pay.
Then there's a small new method that turns a lazy entry into a real one, exactly once:
```ts
materializeLazyComputed_(key: PropertyKey): boolean {
    const factory = this.lazyComputedKeys_?.get(key)
    if (!factory) return false
    this.lazyComputedKeys_!.delete(key)
    this.values_.set(key, factory())
    return true
}
```
It's deliberately boring. Look up the factory. If there's no factory, do nothing — the value is either already materialised or doesn't exist. Otherwise, delete the lazy entry first (so we never accidentally double-materialise) and stash the freshly built ComputedValue in values_ where the rest of MobX expects to find it.
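The materialise-once behaviour is easy to model outside MobX. The sketch below uses invented names (`Admin`, `FakeComputed`, `registerLazy`) to stand in for the real administration object and `ComputedValue`; only the two-map shape and the delete-before-build ordering mirror the patch.

```typescript
// Stand-in for ComputedValue: all we care about is when it gets built.
class FakeComputed {
    constructor(public name: string) {}
}

// Stand-in for ObservableObjectAdministration, holding the two maps.
class Admin {
    values_ = new Map<PropertyKey, FakeComputed>()
    lazyComputedKeys_: Map<PropertyKey, () => FakeComputed> | undefined

    registerLazy(key: PropertyKey, factory: () => FakeComputed) {
        ;(this.lazyComputedKeys_ ??= new Map()).set(key, factory)
    }

    materializeLazyComputed_(key: PropertyKey): boolean {
        const factory = this.lazyComputedKeys_?.get(key)
        if (!factory) return false
        // Delete before building, so a re-entrant call can't double-build.
        this.lazyComputedKeys_!.delete(key)
        this.values_.set(key, factory())
        return true
    }
}

// Usage: the factory runs exactly once, on the first materialize call.
let built = 0
const adm = new Admin()
adm.registerLazy("total", () => (built++, new FakeComputed("total")))

adm.materializeLazyComputed_("total") // builds, returns true
adm.materializeLazyComputed_("total") // no-op: factory already consumed
// built === 1, and values_ now holds the FakeComputed for "total"
```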
## Threading the laziness through the rest of MobX
With materializeLazyComputed_ in hand, the four invariants from earlier each get one well-placed call.
Reads and writes go through the admin:
```ts
getObservablePropValue_(key) {
    this.materializeLazyComputed_(key)
    return this.values_.get(key)!.get()
}

setObservablePropValue_(key, newValue) {
    this.materializeLazyComputed_(key)
    ...
}
```
The decorator getter wrapper already routes through these, so `cart.total` keeps working with no caller changes. The `!` non-null assertion stays honest because we just guaranteed the entry exists.
`observe(o, "total", ...)` resolves through `getAtom`:
```ts
const adm = (thing as any)[$mobx]
adm.materializeLazyComputed_(property)
const observable = adm.values_.get(property)
```
This was the subtle one. `observe` doesn't *read* the computed in the userland sense — it asks for the underlying observable so it can subscribe. Without this call, you could `observe()` a lazy computed and immediately get a "not observable" error, even though the property absolutely *is* observable. The user just hadn't tickled it yet.
`isObservableProp` learns about the lazy map:
```ts
const adm = value[$mobx]
return adm.values_.has(property) || !!adm.lazyComputedKeys_?.has(property)
```
This one matters more than it looks. Plenty of code — including MobX's own internals and many userland helpers — guards behavior on `isObservableProp`. If a lazy computed reports as *not* observable until the first read, that's a behavior change visible to consumers, not just a private optimisation. The `||` keeps the public contract intact: a `@computed` getter is observable from the moment its instance is constructed, exactly like before.
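As a standalone sketch of that widened check (the `isObservableKey` helper and its parameters are invented here, assuming the same two-map layout as the patch):

```typescript
// A property counts as observable if it lives in either map:
// already materialised (values) or still pending (lazyKeys).
function isObservableKey(
    values: Map<PropertyKey, unknown>,
    lazyKeys: Map<PropertyKey, unknown> | undefined,
    key: PropertyKey
): boolean {
    return values.has(key) || !!lazyKeys?.has(key)
}

// Usage: a lazy computed is reported observable before its first read.
const values = new Map<PropertyKey, unknown>()
const lazy = new Map<PropertyKey, unknown>([["total", () => null]])
isObservableKey(values, lazy, "total") // true, even though nothing is built yet
isObservableKey(values, lazy, "missing") // false
```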
## What deliberately did not change
MobX has two paths into computed properties: the modern stage-3 decorators (@computed on a class with @observable accessor) and the legacy makeObservable / makeAutoObservable flow that sets things up imperatively in the constructor.
The legacy path lives in `defineComputedProperty_` and was left completely alone.
That was a conscious choice, not laziness about laziness. makeObservable is called *inside the constructor*, after the user has already paid the cost of "I want this object to be reactive right now." The lazy-allocation argument is much weaker there — you're already mid-construction, the user has already opted into the work, and changing the timing could ripple into subtle ordering bugs in code that mixes makeObservable with reactions or whens in the constructor.
The decorator path is different. Decorators run at *class definition time*, and addInitializer runs once per instance regardless of whether the user reads anything. That's where the waste actually lives, so that's where the change actually applies.
It's a small example of a useful rule for library patches: fix the path where the cost is real, not every path that superficially resembles it.
## Tests as the spec
The PR adds two regression tests that read like a small spec for what "lazy" should mean:
```ts
test("4616 - @computed decorator should be lazy", () => {
    let computeCount = 0

    class Order {
        @observable accessor price: number = 3
        @computed get unused() {
            computeCount++
            return this.price * 2
        }
        @computed get used() {
            return this.price * 3
        }
    }

    const o = new Order()

    // Public: both are observable from day one
    expect(isObservableProp(o, "unused")).toBe(true)
    expect(isObservableProp(o, "used")).toBe(true)

    // Internal: but no ComputedValue has been built yet
    const adm: any = (o as any)[$mobx]
    expect(adm.values_.has("used")).toBe(false)
    expect(adm.lazyComputedKeys_.has("used")).toBe(true)
    expect(computeCount).toBe(0)

    // First read materialises just that one
    expect(o.used).toBe(9)
    expect(adm.values_.has("used")).toBe(true)
    expect(adm.lazyComputedKeys_.has("used")).toBe(false)

    // The unused getter is still lazy and still never ran
    expect(adm.values_.has("unused")).toBe(false)
    expect(computeCount).toBe(0)
})
```
The second test is the one I personally cared about most:
```ts
test("4616 - observe on @computed before first read materialises it", () => {
    class Order {
        @observable accessor price: number = 3
        @computed get total() {
            return this.price * 2
        }
    }

    const o = new Order()
    const events: number[] = []
    observe(o, "total", ev => events.push((ev as any).newValue))

    o.price = 4
    expect(events).toEqual([8])
})
```
It's the test that would have failed if I'd forgotten the getAtom change. observe *should* work on a lazy computed before any direct read, because from the user's point of view the property has been observable since the constructor returned. The test pins that down so a future refactor can't quietly regress it.
The full suite — 1,036 passing, 14 skipped — stays green. yarn test:types is also clean.
## Numbers, taste, and takeaways
The whole patch is 97 added lines, 12 removed, across 6 files, including the changeset and tests. The actual logic change in MobX core is closer to thirty lines.
For something that ships in a library used by tens of thousands of apps, that ratio feels right. Most of the work isn't writing the lazy map. Most of the work is:
- Finding every place in the codebase that assumed eager construction.
- Picking the *one* right spot in each path to materialise.
- Writing tests that pin the public contract down without locking the internals.
- Knowing which sibling code path (`defineComputedProperty_`) to deliberately leave alone.
The lessons I took from it:
- Cheap-looking patches in mature libraries are rarely cheap. The reason a "be lazy" change took six files instead of one is exactly the reason MobX is reliable in the first place — the contract surface is wider than the API surface.
- Public contracts are stronger than private invariants. The `isObservableProp` line is two characters of code and a paragraph of reasoning. That ratio is normal in library work.
- Match the optimisation to the cost. The decorator path pays for unused getters; the `makeObservable` path doesn't. Only one needed fixing.
- Tests are the durable version of the design. The two regression tests will outlive any explanation I write — including this blog post.
If you want to read the diff yourself, it's mobxjs/mobx#4639. And if you've ever shipped a class with a @computed getter that some instance never touches, your memory profile is about to get a tiny bit lighter.