The Natural Limit of Scaling

As a general rule, product development today prioritizes scale. We accept this as a given; it seems obvious.

But it isn't. Product development could prioritize usability. Or minimal total cost of ownership. Or technical advancement.

I was struck by this while reading a post on Mind the Product about how usage data should be used to counter hype and noise when planning product features (at least for organizational software). I realized that while that sounds good, it represents a choice—and not a reflexively good one.

There's something uncomfortable that needs to be talked about if we're going to prioritize scaling, and it's this: work does not get done evenly. This is true for two reasons: first, because of differences in ability amongst employees, and second, because of divisions of labor. Both represent natural limits on user adoption. So if we're going to choose to design for scale, then we need to understand the real impact of operating as if 100% penetration is both attainable and good.

Differences in Employee Quality

With regard to quality, top employees accomplish an outsized amount of a team's goals. (Personnel Psychology found that the top 5% of workers are responsible for 26% of what gets done. Other research I've seen has shown even bigger spreads.) To some extent, getting everyone using new software can help close that gap. That's the idea behind the tale of Paul Bunyan or John Henry. But beyond a certain point, the relentless focus on scale means that customers' results take a backseat to the idea that everyone will be better off using the technology. And that's just not true: at a certain point it makes more sense to move on to the next problem rather than continue to chase more users. That's when driving adoption becomes a drain on buyers' businesses.

Now, at this point, a good product manager will counter, "We can avoid that problem by watching for those inflection points in adoption curves beyond which we start to see diminishing returns." But there's a difference between information that can be used and information that actually is used that way. And I've personally witnessed too many discussions in which diminishing returns are seen as temporary plateaus (with the next wave of new users just around the corner, waiting to be unleashed with just the right tweak to functionality or design) to believe that we're much good at knowing when to say when.
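To make "knowing when to say when" concrete, here's a minimal sketch of what flagging that inflection point could look like. This is my own illustration, not anything from the post I mentioned: the weekly adoption numbers, the 2% growth threshold, and the three-week run are all assumptions you'd tune for your own product.

```python
# Hypothetical illustration: flag diminishing returns in an adoption
# curve, rather than reading every plateau as a temporary one.

def find_diminishing_returns(weekly_active_users, min_growth=0.02, run_length=3):
    """Return the first week at which week-over-week growth has stayed
    below `min_growth` for `run_length` consecutive weeks, or None if
    adoption is still climbing."""
    slow_weeks = 0
    for week in range(1, len(weekly_active_users)):
        prev, curr = weekly_active_users[week - 1], weekly_active_users[week]
        growth = (curr - prev) / prev if prev else 0.0
        slow_weeks = slow_weeks + 1 if growth < min_growth else 0
        if slow_weeks >= run_length:
            return week
    return None

# Made-up numbers: fast early growth, then a plateau.
adoption = [50, 120, 210, 290, 340, 352, 355, 356, 356]
week = find_diminishing_returns(adoption)
if week is not None:
    print(f"Growth flattened by week {week}: time to weigh moving on.")
```

The point isn't the particular threshold; it's that the exit condition gets written down in advance, so a plateau can't quietly be reinterpreted as the calm before the next wave.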

Divisions of Labor

The second issue is division of labor. Think of a team you've been on where you could identify the power user of a particular system. (Maybe it was you.) Now think of someone else on that team who performed a different, critical job, one that didn't require them to use the software much. Would you want the software modified to fit the needs and usage of the non-user? Or the power user? Because using usage statistics with the goal of getting everyone into the system plays to the needs and habits of the non-user, and that's not always good for your team.

I'm not talking about big divisions within organizations here (I think Salesforce knows that they won't have huge penetration amongst engineers, for instance); I'm talking within teams. At Brand Amper, I built most of our decks. I'm pretty good at Keynote, time was always of the essence, and neither my cofounder nor I wanted her spending her time replicating a skill set I already had. In fact, keeping her out of the decks let her focus on other work and helped us divide our time to maximize both of our skills. So if Apple has a scale mentality, they're eventually going to try to get my cofounder using Keynote to drive their usage statistics. That'd look good on a chart in some product manager's report, but it wouldn't be at all good for our team. We had a division of labor that worked for us; we wanted no change to our workflow and no new learning curves for either of us.

So... why such tunnel vision on scale?

So why the unflinching focus on scale, even when it no longer serves the end user or their organization? Because money.

Investor money, specifically.

The way money flows into technology, investors look for software that can be deployed against a "total addressable market" (TAM) and then measure success based on what percentage of the TAM actually uses said software. Period. It's a race for scalability not because it helps people, but because it's the metric that allows the investor to sell for the most money. And there's no offsetting interest.

There's a natural limit to how far software should scale. We're at a place today where we assume that limit is 100% or as close to it as possible, but it's not.

So if you tell a product manager that you love or hate something about their software that requires a change, and they tell you your experience is anecdotal, then ask for the assumptions behind their usage model, how they've correlated usage patterns with business results, and what they believe the natural upper limit of usage is for their product in a company like yours, or on a team like yours.
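If you want to pressure-test that second item yourself, here's one hypothetical way to do it. The per-team numbers below are made up, and a simple Pearson correlation is just a stand-in for whatever analysis the product manager actually ran.

```python
# Hypothetical sketch: check whether heavier usage actually tracks
# better business results across teams.
from statistics import correlation  # Pearson's r; Python 3.10+

# Made-up data: average weekly sessions per team, and that team's
# quarterly result (say, deals closed). Real inputs would come from
# your usage analytics and your business reporting.
sessions_per_team = [4, 9, 15, 22, 31, 40]
results_per_team = [12, 18, 21, 22, 23, 22]

r = correlation(sessions_per_team, results_per_team)
print(f"Usage vs. results: r = {r:.2f}")
# A relationship that's strong at low usage but flat at high usage is
# exactly the "natural limit" a product manager should be able to name.
```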

Because until they can do that with confidence, until they can show how they temper the chase for scale with a realistic understanding of differences in employee quality and divisions of labor, I'm not convinced that their numbers are inherently any better than my anecdotes.

Jason Seiden