I’ve always been skeptical about the utility of external benchmarks for employee experience (EX) surveys. I’m even more concerned by how much some companies are willing to pay for what they see as a “must-have” comparison, often without questioning what they’re really getting.

The two most common arguments I’ve heard in favor of benchmarks are:

  • “How will we know if a score is good or bad without benchmarks?”
  • “How will we know what target to set?”

These questions rest on a set of assumptions we rarely stop to examine.

The Benchmark Mirage

What’s rarely talked about:

  • Opaque data. Vendors don’t disclose which companies are included in external benchmarks. Even when benchmarks are segmented by industry and company size (e.g., Tech, 1,000–5,000 employees), the companies are often meaningfully different with respect to growth stage, business model, financial performance, and workforce composition (roles, functions, and geographies).
  • Law of small numbers. The more granular the benchmark (e.g., New Tech in the U.S. with 500–1,000 employees), the lower the number of contributing employers. Vendors typically don’t disclose how many organizations make up a given benchmark either, but five is often the minimum required to report a score. With a sample of this size, a single outlier can meaningfully influence the results. Are we comfortable letting our targets shift based on where the vendor’s sales team happens to be most successful?
  • Sample bias. External benchmarks reflect the vendor’s customer base, not a representative cross-section of employers. Vendors often tout that their data aren’t collected from panels or synthetic sources, but that’s part of the problem. Are these companies truly your talent peers, or just companies that happened to buy the software?
  • Messy tagging. For companies with diverse subsidiaries, survey data are often tagged based on the industry or location of the parent company (as assigned by a customer success manager) rather than where or how the work is actually happening. Vendors often don’t know where individual respondents are based, so they assign the geography of HQ by default, even if most employees are elsewhere.
  • They’re stale. Benchmark refresh rates vary widely. Meanwhile, EX is shaped by fast-moving dynamics: RTO policies, geopolitical events, shifting administrations, and policy changes. By the time benchmarks are updated, they may no longer reflect current employee sentiment.
  • They dilute strategy. Benchmarks can crowd out the custom indicators that matter most. Do generic, benchmarkable items really provide more value than precise, high-impact measures tied to your culture, manager behaviors, or values?
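To make the law-of-small-numbers concern concrete, here is a minimal sketch with invented favorability scores, showing how a single outlier moves a five-company benchmark:

```python
# Hypothetical illustration: a benchmark built from only five employers.
# Scores are favorability percentages; all values are made up.
scores = [68, 70, 71, 72, 74]
benchmark = sum(scores) / len(scores)  # 71.0

# Swap one typical employer for a single struggling outlier.
scores_with_outlier = [68, 70, 71, 72, 45]
benchmark_with_outlier = sum(scores_with_outlier) / len(scores_with_outlier)  # 65.2

print(f"Benchmark without outlier: {benchmark:.1f}")
print(f"Benchmark with one outlier: {benchmark_with_outlier:.1f}")
print(f"Shift from a single company: {benchmark - benchmark_with_outlier:.1f} points")
```

One struggling participant drags the comparison point by nearly six points, which is larger than most of the score movements teams agonize over.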

What’s the real cost?

  • Benchmarks Encourage Mediocrity. If a score is “above benchmark,” leaders often see no reason to act, even when employees are signaling a clear pain point. It shifts the goal from excellence to adequacy.
  • Benchmarks Create Groupthink. They push companies to fix the same things, chasing what others prioritize instead of what matters most internally. That’s not strategy—it’s mimicry!
  • Benchmarks Are Politically Weaponized. “At benchmark” is frequently used to deflect feedback or justify inaction, even when internal trends are moving in the wrong direction.
  • Benchmarks Are Misinterpreted. Scores may be skewed by low response rates, and most platforms don’t evaluate representativeness across attributes that explain variance in scores. Also, small differences of 2–3 points are often treated as meaningful in the absence of transparency around sample sizes and standard deviations. That’s shaky ground for decision-making.
  • Benchmarks Flatten Complex Realities. A score of 72 in “manager support” may mean very different things for a dev team vs. a call center. Benchmarks don’t account for that nuance.
  • Internal Comparisons Are More Actionable. Comparing across teams, levels, and time provides sharper, more relevant insight. That’s where real understanding (and real change) happens.

Let’s be honest: Beating the benchmark doesn’t mean we’re doing well. 

It might just mean we’re slightly less bad than a group of struggling organizations that are actually pretty dissimilar from our own.

If respondents rate a survey item significantly lower than they rate other items—or if a large, representative segment scores significantly less favorably—do we really need an external benchmark to tell us where the biggest opportunity for improvement lies?

Better yet, if the survey item is a statistical driver of outcomes we want to influence (e.g., retention, engagement) based on internal data, would an external benchmark score change our perception of its importance?

Probably not.

What if we flipped the script?

What if we focused on being the best version of ourselves, tracking internal drivers over time, identifying statistically different hotspots, and measuring what matters most to our people?

Most folks I talk to agree with this in principle. But in practice? Benchmark pressure is real. There’s comfort in knowing how “we stack up,” even when the comparison isn’t valid.

I’d love to hear from others working in EX, people analytics, or employee listening:

  • Are external benchmarks central to your strategy, or a distraction?
  • How have you helped stakeholders shift toward a more meaningful focus on internal trends, distributions, and context-based insights?