How I Cut a Trading Platform's Memory from 1GB to 400MB
Our trade page was eating 1GB of memory. Not occasionally — reliably, within 20 minutes of normal use. Traders were reporting page freezes, unresponsive order forms, and the dreaded Chrome "Aw, Snap!" crash on machines with 8GB RAM.
This is the story of how I found the root cause, fixed it, and brought the memory footprint down to 400MB — a 60% reduction. The core fix was a one-line change. Finding that line took three days.
The symptom
We ran a crypto derivatives trading platform — BTC/ETH perpetual futures, options, spot markets. The trade page was the core of the product: a real-time order book, position table, open orders, trade history, and multiple charts, all updating on WebSocket ticks every 100-200ms.
The first reports came from our support team. "Traders are saying the page gets slow after a while." That's not a useful bug report, but it's the kind of symptom that should make you nervous. "After a while" usually means a leak.
I opened the trade page, connected to our staging WebSocket feed, and let it run. After 15 minutes, Chrome's Task Manager showed the tab using 950MB. A fresh load was around 350MB. Something was accumulating.
The investigation
Step 1: Heap snapshots
I took three heap snapshots in Chrome DevTools — one at page load, one after 5 minutes, and one after 15 minutes.
The comparison view between snapshot 1 and 3 told me what I needed to know: massive growth in (closure) and (object elements) allocations. The retaining tree showed chains of closures — each one captured a reference to a previous state object, forming a linked list of references that the garbage collector couldn't break. The closures weren't orphaned in the traditional sense; they were being held alive by the next closure in the chain.
This pattern — a growing chain of retained closures — is different from the typical "detached DOM node" leak. There were no orphaned DOM trees. React was rendering fine. But behind every render, the old state objects were piling up because something kept referencing them.
Step 2: Performance profiling
I recorded a 30-second performance trace with the React DevTools Profiler enabled. The flame chart was alarming.
Every 100-200ms — on each WebSocket tick — I could see a cascade of renders starting from a high-level component I'll call TradePageContainer. Not just the components that consumed price data. The entire subtree: order book, position table, open orders, trade history, chart wrapper. Everything.
In a healthy React app receiving frequent data updates, you'd expect to see targeted re-renders — just the components whose props or state actually changed. Instead, this looked like the entire page was re-rendering from the root on every tick.
Step 3: Finding the cascade trigger
I worked backwards from the profiler output. The render cascade started at TradePageContainer, which was a wrapper component responsible for subscribing to the WebSocket and distributing data to child components.
Here's a simplified version of what I found:
function TradePageContainer() {
  const [marketData, setMarketData] = useState({});
  const ws = useWebSocket();

  useEffect(() => {
    const handler = (tick) => {
      setMarketData({
        ...marketData,
        [tick.symbol]: tick,
      });
    };
    ws.on('tick', handler);
    return () => ws.off('tick', handler);
  }, [ws, marketData]);

  return (
    <div>
      <OrderBook data={marketData} />
      <PositionTable data={marketData} symbol="BTCUSD" />
      <OpenOrders data={marketData} />
      <TradeHistory data={marketData} />
    </div>
  );
}

Do you see it?
The useEffect has marketData in its dependency array. Every time setMarketData updates the state, marketData changes (it's a new object reference). That triggers the effect cleanup and re-subscription. The effect runs again, subscribes a new handler, and on the next tick, the new handler calls setMarketData again — which creates another new object, which triggers the effect again.
This wasn't an infinite loop — each cycle was gated by the next WebSocket tick. But at 5-10 ticks per second, the effect was destroying and recreating the subscription 5-10 times per second. Each subscription closure captured a reference to the previous marketData state, preventing garbage collection of those objects. The closures formed a chain, each holding a reference to the last, and none of them could be collected.
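The retention pattern can be reproduced without React at all. Here is a minimal plain-JavaScript sketch of the mechanism (the names and values are illustrative, and the `retained` array stands in for whatever kept old handlers reachable between resubscribe cycles): each new handler closes over the state object that existed when it was created, so every state snapshot stays pinned.

```javascript
// Illustrative sketch (not the app code): each handler closure captures
// the `state` object that existed when it was created.
function makeHandler(state) {
  return function handler(tick) {
    // Spreading `state` means this closure retains the whole previous object.
    return { ...state, [tick.symbol]: tick };
  };
}

let state = {};
const retained = []; // stand-in for whatever holds old handlers alive

for (let i = 0; i < 3; i++) {
  const handler = makeHandler(state); // closure captures the current state
  retained.push(handler);             // like a subscription that never fully dies
  state = handler({ symbol: 'BTCUSD', price: 50000 + i });
}

// Each handler in `retained` still references a distinct old state object,
// just as each resubscribed WebSocket handler pinned a previous marketData.
```

As long as anything keeps those handlers reachable, none of the captured state objects can be collected, which is exactly the chain the heap snapshot showed.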
The fix
Fix 1: Remove marketData from the dependency array
The handler doesn't actually need to read marketData from the closure, so the effect doesn't need it as a dependency. React's functional updater pattern removes the reference:
useEffect(() => {
  const handler = (tick) => {
    setMarketData((prev) => ({
      ...prev,
      [tick.symbol]: tick,
    }));
  };
  ws.on('tick', handler);
  return () => ws.off('tick', handler);
}, [ws]);

By using the functional updater (prev) => ..., the handler always reads the latest state without capturing it in the closure. The effect now re-runs only when ws changes (which is once, on mount). No more subscription churn, no more closure chains.
This single change cut the leak. But the memory footprint was still higher than it should have been — around 600MB after 15 minutes instead of 950MB. The leak was gone, but something else was wasteful.
Fix 2: Memoize the child components
The second problem was in the child components. Even with the fixed effect, every setMarketData call created a new top-level object. React would re-render TradePageContainer (expected), and because data={marketData} was always a new reference, every child component re-rendered too.
The PositionTable was the worst offender. It rendered 50-100 rows of position data, each with P&L calculations, liquidation prices, and formatted numbers. On every tick — even if only BTC's price changed — the entire table re-rendered, including rows for ETH, SOL, and every other position.
The fix was React.memo with a custom comparator:
const PositionTable = React.memo(
  function PositionTable({ data, symbol }) {
    const positions = data[symbol]?.positions ?? [];
    return (
      <table>
        <thead>{/* ... */}</thead>
        <tbody>
          {positions.map((pos) => (
            <PositionRow key={pos.id} position={pos} />
          ))}
        </tbody>
      </table>
    );
  },
  (prev, next) => {
    // Only re-render if this symbol's data actually changed
    return prev.data[prev.symbol] === next.data[next.symbol];
  }
);

And each PositionRow was also memoized:
const PositionRow = React.memo(function PositionRow({ position }) {
  return (
    <tr>
      <td>{position.symbol}</td>
      <td>{formatCurrency(position.entryPrice)}</td>
      <td>{formatCurrency(position.markPrice)}</td>
      <td className={position.pnl >= 0 ? 'text-green' : 'text-red'}>
        {formatCurrency(position.pnl)}
      </td>
    </tr>
  );
});

Same approach for OrderBook, OpenOrders, and TradeHistory. Each component only re-rendered when its specific slice of data changed.
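When the same comparison logic repeats across components, it can be factored into a shared comparator. A minimal sketch (the `compareSlice` helper is hypothetical, not from our codebase, and assumes each component receives `data` and `symbol` props):

```javascript
// Hypothetical shared comparator for React.memo: re-render only when this
// symbol's slice of the market data object changes by reference.
function compareSlice(prev, next) {
  return (
    prev.symbol === next.symbol &&
    prev.data[prev.symbol] === next.data[next.symbol]
  );
}

// Usage sketch:
//   const OpenOrders = React.memo(OpenOrdersImpl, compareSlice);
//   const TradeHistory = React.memo(TradeHistoryImpl, compareSlice);
```

Reference equality on the slice is sufficient here because each tick replaces only data[tick.symbol]; untouched symbols keep their old object identity.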
The result
After these two fixes:
- Fresh load: 350MB (unchanged)
- After 15 minutes: 400MB (was 950MB)
- After 60 minutes: 420MB (was "Aw, Snap!")
The 400MB after 15 minutes isn't zero growth — there's legitimate state accumulation (trade history grows, order book depth changes). But it stabilizes instead of growing unbounded.
Trader-reported page freezes dropped to zero in the next release cycle. The Chrome Task Manager graph went from a relentless upward slope to a flat line with minor fluctuations.
Why this was hard to find
This bug was subtle for three reasons:
1. The effect "worked." The WebSocket subscription was active, data was flowing, the UI was updating. There was no error, no console warning, no failed render. The only symptom was gradual memory growth over minutes — not the kind of thing you catch in development where you reload every 30 seconds.
2. The ESLint rule was technically correct. The react-hooks/exhaustive-deps rule would flag marketData as a missing dependency if you removed it. The linter was right that the closure referenced marketData. The fix wasn't to suppress the rule — it was to restructure the code so marketData wasn't needed in the closure at all.
3. Heap snapshots are noisy. A trading page has thousands of legitimate objects — DOM nodes, React fibers, WebSocket message buffers, chart candle data. Finding the illegitimate closure chains — the ones that should have been collected — requires comparing snapshots and following retaining paths through layers of closures and React internals. It's not something you can grep for.
Lessons
Use functional updaters for state that changes frequently
If your useEffect subscribes to a data stream and updates state on each message, always use the (prev) => newState pattern. Never reference the current state variable in the effect body when you're setting that same state. The functional updater always reads the latest value without creating a closure dependency.
// Bad: creates closure dependency, forces effect re-run on every update
useEffect(() => {
  const handler = (d) => setState({ ...state, [d.key]: d.value });
  socket.on('data', handler);
  return () => socket.off('data', handler);
}, [socket, state]); // state in deps = re-subscribe on every update

// Good: no closure dependency, effect runs once
useEffect(() => {
  const handler = (d) => setState((prev) => ({ ...prev, [d.key]: d.value }));
  socket.on('data', handler);
  return () => socket.off('data', handler);
}, [socket]); // stable

Memoize at the right granularity
React.memo on the table component wasn't enough. If the table re-renders, all 100 rows re-render. Memoizing individual rows meant that when BTC's price ticked, only the BTC row re-rendered. The other 99 rows — ETH, SOL, AVAX — were skipped entirely.
For components that render lists of items with frequent partial updates, memoize both the list container (to avoid unnecessary passes) and individual items (to skip unchanged rows).
Profile under realistic conditions
I only reproduced this bug by letting the page run for 15+ minutes with a live WebSocket feed. In development, where you reload constantly and test with small datasets, the memory growth was imperceptible. The lesson: for any page with a persistent connection and streaming data, profile it under sustained load. Let it run. Take snapshots at 1 minute, 5 minutes, 15 minutes. Compare them.
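That soak-test loop can be partly automated. A rough sketch (the threshold is illustrative, and performance.memory is a non-standard API that exists only in Chromium-based browsers):

```javascript
// Read the current JS heap size in MB, or null where the API is unavailable.
// performance.memory is Chromium-only and non-standard.
function sampleHeapMb() {
  if (typeof performance !== 'undefined' && performance.memory) {
    return performance.memory.usedJSHeapSize / (1024 * 1024);
  }
  return null;
}

// Crude leak signal: did the heap grow past a tolerance over the whole run?
// The 100MB default is illustrative; tune it to your page's baseline churn.
function looksLikeLeak(samplesMb, toleranceMb = 100) {
  if (samplesMb.length < 2) return false;
  return samplesMb[samplesMb.length - 1] - samplesMb[0] > toleranceMb;
}
```

Sampling every few minutes during a long run, a leaking trace like ours ([350, 600, 950]) trips the check, while the post-fix plateau ([350, 400, 420]) does not.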
Heap snapshots > guessing
My first instinct was "it's probably the chart library." Charts render canvases, manage their own state, and are notoriously leaky. I spent half a day instrumenting the chart component before looking at heap snapshots. The snapshot pointed me to the growing closure chains in 10 minutes. Always start with the profiler.
The broader pattern
This bug is a specific instance of a general pattern in React applications with real-time data: effects that depend on the state they modify create hidden feedback loops. The loop doesn't look like an infinite loop because it's gated by external events (WebSocket ticks, timers, user actions). But over time, each iteration accumulates references that can't be cleaned up.
Any time you have a useEffect that both reads state and writes to that same state — especially if it's subscribing to a high-frequency data source — check whether the state variable actually needs to be in the dependency array, or whether you can use a functional updater to remove it.
In our case, the answer was always the functional updater. The one-line change that stopped the leak:
// Before
setMarketData({ ...marketData, [tick.symbol]: tick });

// After
setMarketData((prev) => ({ ...prev, [tick.symbol]: tick }));

Remove marketData from the dependency array, and the subscription stops churning.
One line. Three days. A 60% memory reduction. That's production debugging.