
Why Sentry Was Showing 11-Second Button Clicks: Debugging Bad Data, Multiple Bundles, and an 82% Size Cut

March 11, 2026 · Trishnangshu Goswami
Observability · Bundle Optimization · Developer Experience

Our Sentry performance dashboard said a button click took 11 seconds. Another one said a modal open took 57 seconds. A page navigation apparently took over a minute. None of this was real. The app felt fine in production. But the data was consistently, confidently wrong.

For weeks, we assumed it was noise — outliers from slow devices or flaky networks. Then I noticed a pattern: the bogus data points clustered around specific interactions, and they always appeared in the first few seconds of a session. Something about our Sentry setup was producing garbage metrics, and we'd been ignoring it.

This is the story of how I traced the problem to multiple Sentry SDK bundles loading simultaneously, why fixing it cut our Sentry payload from 400KB to 70KB gzipped (82% reduction), and why the invalid data points disappeared entirely within days of the deploy.

The investigation

The app was a React SPA built with Webpack and CRACO. We had Sentry for error tracking and performance monitoring. The errors side worked — we could see exceptions, stack traces, breadcrumbs. But the performance data was unreliable. Click handlers that took milliseconds in DevTools were showing up as 11-second transactions in Sentry. Page transitions that felt instant were logged as minute-long operations.

I started the way you start any frontend performance investigation: open the bundle.

npx webpack-bundle-analyzer build/stats.json
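If your build doesn't already emit stats.json, a CRACO config along these lines produces it. This is a sketch using webpack-bundle-analyzer's standard plugin options, not our exact setup:

// craco.config.js (sketch): emit build/stats.json during the build
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  webpack: {
    plugins: {
      add: [
        new BundleAnalyzerPlugin({
          analyzerMode: 'disabled',  // skip the interactive server
          generateStatsFile: true,   // write stats.json to the output dir
        }),
      ],
    },
  },
};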

We already had stats.json output configured in our CRACO build. I was originally looking at the bundle to understand our total payload — unrelated to Sentry. But when the treemap rendered, something jumped out immediately.

There wasn't one @sentry chunk. There were multiple Sentry chunks scattered across the bundle — the core SDK, the browser tracing integration, the replay module, and duplicated fragments of shared Sentry internals. Some of them were near-identical copies of the same code, pulled in through different import paths.

The entire @sentry footprint was 400KB gzipped. For context, our application code — all components, pages, hooks, services — was about 280KB. Sentry alone was larger than our app.

That's when I started looking at how Sentry was being imported and initialized.

What was wrong

Multiple wildcard imports creating duplicate bundles

Across the codebase, I found import * as Sentry from '@sentry/react' in 12 different files. Every component, service, or utility that needed error capture pulled in the entire namespace:

// Found in 12 files across the codebase
import * as Sentry from '@sentry/react';

// Used for different purposes in different files:
Sentry.captureException(error);
Sentry.captureMessage('Order completed');
Sentry.setUser({ id: userId });
Sentry.withScope((scope) => { /* ... */ });

The @sentry/react package is not small. It includes:

  • Core error capture and transport
  • Browser tracing (Performance API, XHR/fetch instrumentation)
  • Session replay (DOM recording, serialization, compression)
  • React-specific integrations (error boundaries, profiler)
  • Legacy browser polyfills
  • Source map upload utilities

When you use import * as Sentry, Webpack cannot determine which exports are actually used. It marks the entire module as referenced, disabling tree-shaking for that dependency. Worse, when multiple files do this through slightly different dependency chains, Webpack can end up creating duplicate module entries — the same Sentry internals get bundled more than once.

This was the first problem: we were shipping multiple copies of Sentry's tracing and replay code because wildcard imports prevented Webpack from deduplicating or tree-shaking any of it.

Late initialization inside useEffect

The second problem was how and when Sentry was initialized:

// App.tsx
import * as Sentry from '@sentry/react';
import { useEffect } from 'react';

function App() {
  useEffect(() => {
    Sentry.init({
      dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0',
      integrations: [
        Sentry.browserTracingIntegration(),
        Sentry.replayIntegration(),
      ],
      tracesSampleRate: 0.1,
      replaysSessionSampleRate: 0.01,
    });
  }, []);

  return <RouterProvider router={router} />;
}

Sentry.init() was inside a useEffect. That means:

1. Browser downloads and parses the JavaScript bundle
2. React loads, renders <App /> and its full component tree
3. The browser paints
4. useEffect fires — Sentry.init() finally runs
5. Sentry starts instrumenting fetch, XMLHttpRequest, and click handlers

Any error or interaction that happens in steps 1–3 is either missed or misattributed. But the bigger issue was what happened to browser tracing.

Sentry's browserTracingIntegration works by monkey-patching the browser's native APIs — fetch, XMLHttpRequest, addEventListener — so it can create spans around network requests and user interactions. When those patches apply late (after the app has already rendered and attached its own event handlers), the timing instrumentation doesn't wrap the original handlers correctly. It intercepts events after the app's own listeners have already started processing, or it starts measuring from the wrong point in the event lifecycle.
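Reduced to a sketch, the late-patch mechanism looks like this (illustrative only, not Sentry's actual internals):

// App code grabs a reference to fetch while the bundle evaluates
const appFetch = window.fetch.bind(window);

// ...the app renders, attaches handlers, fires requests through appFetch...

// Much later, instrumentation wraps window.fetch
const original = window.fetch.bind(window);
window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const start = performance.now();
  try {
    return await original(input, init);
  } finally {
    console.log(`span: ${performance.now() - start}ms`);
  }
};

// Calls through the saved appFetch reference bypass the wrapper entirely,
// and requests already in flight when the patch lands are never measured.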

The result: a button click that takes 50ms in reality gets reported with a span duration of 11 seconds — because Sentry's timing started from some earlier, unrelated instrumentation point, or because the transaction boundary was incorrect due to the late init.

This is what was producing the garbage data. Multiple Sentry bundles loading with overlapping instrumentation, combined with an init that ran after the app was already interactive, created a state where performance spans had incorrect start times, duplicated transaction contexts, and inflated durations.

Unnecessary plugins inflating the entry bundle

The third issue was simpler: our Sentry config loaded every available plugin eagerly. The Replay integration, the Feedback widget, the Profiling integration — all initialized upfront even though only the core tracing and error capture were needed on initial load.

// Everything loaded at init time, whether needed on this page or not
integrations: [
  Sentry.browserTracingIntegration(),
  Sentry.replayIntegration(),       // Full DOM recording — do we need this on every page?
  Sentry.feedbackIntegration(),     // Widget for collecting user feedback
],

Each integration pulled in its own dependency chain. The Replay integration alone brought in a DOM serializer, a compression library, and a custom event recorder — roughly 120KB gzipped of code that ran on every single page load.

The fix

Step 1: Move Sentry init to the top level

Per Sentry's own documentation, the SDK should initialize before the application renders. Not inside a component. Not in a lifecycle hook. At the module level, as the very first import.

I created a standalone Sentry config module:

// sentry.ts — standalone module, no React dependency
import {
  init,
  browserTracingIntegration,
} from '@sentry/react';

init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0',
  integrations: [
    browserTracingIntegration(),
  ],
  tracesSampleRate: 0.1,
  environment: process.env.REACT_APP_ENV,
});

And imported it as the first line in the entry point:

// index.tsx — sentry import FIRST
import './sentry';  // Side-effect import — runs immediately
import React from 'react';
import { createRoot } from 'react-dom/client';
import App from './App';

const root = createRoot(document.getElementById('root')!);
root.render(<App />);

ES module imports execute in declaration order. import './sentry' runs its module body synchronously before anything else loads. By the time React initializes, Sentry has already patched fetch, XMLHttpRequest, and the global error handlers. Every network request, every click, every unhandled exception from this point forward is instrumented correctly.

No more late init. No more timing mismatches.
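One way to convince yourself the ordering is right is a throwaway event sent from the entry module itself. The snippet below is a hypothetical check, deleted once verified:

// index.tsx: temporary ordering check (hypothetical, removed after use)
import './sentry';
import { captureMessage } from '@sentry/react';

// This runs before React even loads. Under the old useEffect init,
// an event captured this early was silently dropped because no client
// existed yet; with module-level init, it arrives in Sentry.
captureMessage('sentry initialized before app bootstrap');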

Step 2: Replace all wildcard imports with named imports

I searched the entire codebase for wildcard Sentry imports:

grep -rn "import \* as Sentry" src/ --include="*.ts" --include="*.tsx"

12 files. Every one was changed to import only the specific functions it used:

// Before — every file pulled in the entire SDK
import * as Sentry from '@sentry/react';
Sentry.captureException(error);
Sentry.captureMessage('User completed onboarding');
Sentry.setUser({ id: userId });

// After — targeted named imports
import { captureException } from '@sentry/react';
captureException(error);

import { captureMessage } from '@sentry/react';
captureMessage('User completed onboarding');

import { setUser } from '@sentry/react';
setUser({ id: userId });

With named imports, Webpack can trace exactly which exports are referenced. The Session Replay serializer? Not imported anywhere now, so it was tree-shaken. The legacy polyfills? Nothing references them, so they were stripped. Browser tracing? Pulled in only by the init module. The feedback widget? Removed entirely, since we weren't actively using it.

This also eliminated the duplicate bundle entries. With import *, Webpack had been creating separate module references for files that imported from @sentry/react vs @sentry/browser vs @sentry/core — even though they resolved to overlapping code. Named imports let Webpack's module graph resolve these correctly into a single shared chunk.
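A quick way to check whether those overlapping packages resolve to a single copy is to inspect the dependency tree. This is a generic npm check (adjust for yarn or pnpm):

npm ls @sentry/core @sentry/browser @sentry/react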

Step 3: Load heavy plugins only where needed

Instead of loading the Replay integration on every page, I moved it to a lazy-loaded config that only activates on specific routes where we actually need session recording:

// sentry.ts — core init, minimal plugins
import { init, browserTracingIntegration } from '@sentry/react';

init({
  dsn: '...',
  integrations: [
    browserTracingIntegration(),
    // Replay loaded separately, only on pages that need it
  ],
  tracesSampleRate: 0.1,
  environment: process.env.REACT_APP_ENV,
});
Then the routes that actually need recording add the integration at runtime through a dynamic import, so Webpack splits the Replay code into its own chunk (wrapped here in a small helper; the enableReplay name is mine):

// enableReplay.ts: imported only by routes that need session recording
import { getCurrentScope } from '@sentry/react';

export async function enableReplay() {
  // Dynamic import: Webpack code-splits the Replay code into a separate
  // chunk that only downloads when this function first runs
  const { replayIntegration } = await import('@sentry/react');
  getCurrentScope().getClient()?.addIntegration(
    replayIntegration({
      maskAllText: false,
      blockAllMedia: false,
    }),
  );
}

This kept the Replay code out of the entry bundle entirely — it only loads when a user navigates to a page where we've enabled it.
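At the call site, a route that needs recording flips it on after mount. The component below is hypothetical, just to show the shape:

// CheckoutPage.tsx (hypothetical route that needs session replay)
import { useEffect } from 'react';
import { enableReplay } from './enableReplay';

export default function CheckoutPage() {
  useEffect(() => {
    // Fire and forget: the Replay chunk downloads in the background
    void enableReplay();
  }, []);

  return <main>{/* page content */}</main>;
}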

Step 4: Verify the bundle

After all changes, I ran the bundle analyzer again:

npx webpack-bundle-analyzer build/stats.json

The treemap told the story. Where there had previously been multiple scattered @sentry chunks — some of them near-identical duplicates — there was now a single, compact chunk. The Sentry footprint dropped from 400KB to 70KB gzipped. An 82% reduction.

What was stripped:

  • Session Replay DOM serializer (moved to lazy load): ~120KB
  • Legacy browser polyfills (not needed, we target modern browsers): ~80KB
  • Unused integrations (profiling, feedback widget): ~90KB
  • Duplicate module entries from wildcard imports: ~40KB

What remained: core error capture, transport, browser tracing integration, and the React error boundary — the 70KB we actually needed in the entry bundle.

The invalid data disappeared

We deployed the fix and waited. After three days of monitoring, the garbage performance data was gone. No more 11-second button clicks. No more minute-long page transitions. The Sentry performance dashboard finally showed numbers that matched reality.

The root cause was clear in retrospect: multiple Sentry bundles loading simultaneously meant multiple instances of the tracing instrumentation were patching the same browser APIs. Two copies of browserTracingIntegration wrapping the same fetch call would create overlapping spans with incorrect start/end times. The late useEffect init compounded it — by the time Sentry's patches applied, the app's own event handlers were already attached, creating a timing mismatch that inflated span durations.
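Stripped to a skeleton, the double-patch half of the problem looks like this (illustrative only, not SDK code):

// Two bundled SDK copies each wrap fetch, unaware of the other
function wrapFetch(label: string) {
  const inner = window.fetch.bind(window);
  window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
    const start = performance.now(); // each copy records its own start time
    const response = await inner(input, init);
    console.log(`${label} span: ${performance.now() - start}ms`);
    return response;
  };
}

wrapFetch('copy A');
wrapFetch('copy B'); // wraps copy A's wrapper: nested, overlapping spans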

With a single, early-initialized Sentry instance and deduplicated bundles, every span had exactly one source of truth for timing. The data was finally clean.

We also observed an improvement in LCP — down to 6 seconds on a simulated slow 4G connection. Removing 330KB of gzipped JavaScript from the critical path meant the browser could parse and execute the actual application code faster, reducing the time to first meaningful paint.

Why this happens so often

I've since reviewed Sentry setups in other codebases and found similar patterns. It's common because:

1. The docs show both patterns. Sentry's React documentation shows framework-specific setup with Sentry.init() inside the app component. Their general JavaScript docs recommend pre-app initialization. If you start from the React guide (which most React developers do), you naturally end up with late init inside useEffect.

2. `import *` is the path of least resistance. The first time you need captureException in a new file, autocomplete suggests import * as Sentry from '@sentry/react'. It works. You don't feel the cost because tree-shaking failures don't produce warnings — your app still builds, still runs, just ships 300KB more than it needs to.

3. The bad data looks plausible enough to ignore. An 11-second click sounds absurd, but a 3-second click on a slow device? That's within the range of "maybe." The garbage data doesn't always look like garbage. It often looks like edge cases, and edge cases get dismissed.

Lessons

Initialize observability before your application

This applies to Sentry, Datadog, LogRocket, or any monitoring SDK. If it instruments browser APIs for timing and error capture, it must load first. Not in a component lifecycle. Not after the framework initializes. Before everything.

In a Webpack/CRACO app, that means a dedicated side-effect module imported at the very top of your entry file. In Next.js, it means instrumentation.ts or the Sentry plugin. The principle is the same: monitoring code runs before application code.

Audit your wildcard imports

Run your bundle analyzer. Search for import * in your codebase:

grep -rn "import \* as" src/ --include="*.ts" --include="*.tsx"

Every wildcard import is a tree-shaking barrier. For large packages like Sentry, the cost can be hundreds of kilobytes — and worse, it can cause Webpack to create duplicate module entries that inflate your bundle beyond what a single import * would suggest.

The fix is mechanical: replace import * as X with import { a, b, c } from X. It's tedious but the bundle impact is immediate.
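To keep wildcard imports from creeping back in, a lint rule helps. Assuming eslint-plugin-import is installed, its no-namespace rule turns the pattern into a CI failure:

// .eslintrc.js (sketch; assumes eslint-plugin-import is available)
module.exports = {
  plugins: ['import'],
  rules: {
    // Disallow `import * as X from '...'` so tree-shaking barriers
    // fail the build instead of silently inflating the bundle
    'import/no-namespace': 'error',
  },
};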

Don't dismiss anomalous monitoring data

When your monitoring tool shows data that doesn't match reality, the instinct is to filter it out or blame edge cases. Sometimes the data is telling you something about the monitoring tool itself. In our case, the 11-second clicks weren't from slow devices — they were from a broken instrumentation setup producing garbage timing data. The anomalies were the symptom; the duplicate SDK bundles were the disease.

The numbers

| Metric | Before | After |
| --- | --- | --- |
| Sentry bundle size (gzipped) | 400KB | 70KB |
| Bundle reduction | — | 82% |
| Sentry chunks in bundle | Multiple (duplicated) | Single |
| Wildcard imports (import *) | 12 files | 0 |
| Bogus performance data points | Frequent (11s clicks, 1min navs) | None |

The 82% bundle reduction was the measurable win. But the real value was in data trust. For weeks, we'd been second-guessing our performance dashboard, dismissing data points as noise, making decisions based on metrics we couldn't rely on. After the fix, every number in Sentry matched what we observed in DevTools. We could finally use our monitoring data to make real performance decisions — because the data was actually real.