The One Key Question to Ask When Measuring Your Own Productivity

by Edmond Lau


In his book The Dilbert Principle, Scott Adams re-shares a cautionary tale from one of his comic strip readers on measuring productivity.

An engineering manager wanted to incentivize his team to find and fix software bugs, so he instituted a program to reward strong performers. To encourage the quality assurance team to find bugs, he paid $20 for every bug uncovered. To encourage engineers to fix more bugs, he paid $20 for every bug fixed. 1

It’s not difficult to imagine what happened next. Engineers started to purposefully introduce bugs for testers to discover and for themselves to fix. One engineer even established an underground market for his bugs and earned $1,700 in a single week. The well-intentioned program was shut down soon afterwards.

This amusing little story illustrates a critical point: how you measure your productivity changes how you behave. Whenever you’re measuring productivity, ask yourself this one key question: “What’s the right metric that will incentivize the behavior that I want?”

The Right Metric Depends on Your Goals

Most engineers would scoff at the idea of measuring their output based on lines of code written. It’s an absurd metric, right? How could a metric that ignores code quality and code impact make any sense?

And yet — there’s a time and place where purely measuring the raw quantity of output can be valuable.

When you’re first learning something new — whether it’s programming, drawing, acting, driving, or something else — the raw number of programs you write, drawings you sketch, scenes you perform, and hours you spend on the road all measure something critical. They track whether you’re getting the practice you need to develop those skills. That’s why habits like dancing 365 days a year or building 180 websites in 180 days can be so effective for new learners. Those same metrics lose much of their power, however, once you reach a comfortable level of skill — afterwards, it’s more useful to deliberately practice skills that push you out of your comfort zone, and raw output is no longer a strong indicator of improvement. 2

Similarly, when quality is not the main concern, a quantity-based metric can be useful. For example, when I was writing the first draft of my book The Effective Engineer, quality didn’t matter nearly as much as completing a draft to visualize how all the pieces fit together. By measuring my daily productivity with a goal of writing 1,000 new words per day, I focused on producing new material rather than spending my time rewriting and revising work from the day before — which many writers are apt to do.

Metrics that at first glance appear absurd can therefore still play important roles. Whether a metric makes sense depends on your goals — and as your goals change, old metrics may outlive their usefulness. So how do we take this lesson and apply it to engineering productivity?

Figure Out Your Goals, Then Work Backwards to Derive Your Metric

As an engineer, your goals — and therefore the right metrics — will vary over time. At any time, your goal might be to:

  • Learn a new framework or programming language. For learning, tracking a simple metric such as a minimum number of hours per day may suffice.
  • Reduce distractions while working. Then using a Pomodoro timer 3 to track the number of deep, productive hours worked — and spent away from Facebook, Twitter, email, or web surfing — can be a good start.
  • Speed up the performance of a critical subsystem. Tracking how much your efforts move the needle on average or 95th percentile latency or whatever performance bottleneck you’re facing will definitely play an important role. Or if you know exactly what you want to optimize for, you might go much deeper, like how Instagram optimized for CPU instructions per active user at peak minutes. 4
  • Improve the quality of search results. You’ll want to track how much of a dent you can make on click-through rates, or perhaps borrow a lesson from Google and track long click-through rates — clicks where someone clicks on a result and doesn’t bounce back for some amount of time.
  • Increase user growth and engagement. Depending on which part of the funnel currently matters most for your business, you’ll measure your impact on signup or payment conversion rate, the growth rate of weekly active users, week-over-week retention, or some other part of the funnel.
  • Increase server reliability. If you already have good monitoring, perhaps you might measure pager alerts per week. Depending on whether you care more about minimizing customer disruption or disruptions to your team’s lives outside of work, you may also apply more weight to alerts that fire during business hours or during off-peak hours.
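As a sketch of that last idea, a weighted pager-alert metric might look like the following. Everything here is illustrative rather than prescriptive: the function name, the 9-to-6 business-hours window, and the choice to count off-hours pages double are all assumptions you would tune to your own team.

```python
from datetime import datetime

# Hypothetical weighting: off-hours pages are costlier to the team's
# lives outside of work, so they count double here. Flip the weights if
# you care more about customer disruption during business hours.
BUSINESS_HOURS = range(9, 18)  # 9:00-17:59 local time, Monday-Friday

def weighted_alert_score(alert_times, off_hours_weight=2.0):
    """Sum the week's pager alerts, weighting off-hours pages more heavily."""
    score = 0.0
    for ts in alert_times:
        in_business_hours = ts.weekday() < 5 and ts.hour in BUSINESS_HOURS
        score += 1.0 if in_business_hours else off_hours_weight
    return score

alerts = [
    datetime(2024, 3, 4, 10, 30),  # Monday morning: weight 1
    datetime(2024, 3, 6, 3, 15),   # Wednesday at 3 a.m.: weight 2
    datetime(2024, 3, 9, 14, 0),   # Saturday afternoon: weight 2
]
print(weighted_alert_score(alerts))  # 5.0
```

Three raw alerts become a score of 5.0, so a week with two overnight pages registers as worse than a week with two daytime pages, which is exactly the incentive the weighting is meant to encode.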

Notice that these metrics for your impact are all domain-specific. Moreover, for any goal, there’s a wide array of metrics to consider, each with its own associated incentives.

For example, if you’re working on application performance and focus on improving average latency, you’ll generally make system-level optimizations that reduce overall CPU, memory, or bandwidth costs. On the other hand, if you focus on improving 95th percentile latency, you’ll mostly be fixing worst-case performance issues. Those issues typically have a much bigger impact on your most engaged users — people who make the most queries, load the most pages, follow the most people, or generate the most data, and who are therefore more computationally expensive to handle.
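A toy calculation makes the difference concrete. Assuming a simple nearest-rank percentile and made-up latency samples, the same traffic can look fine on average while the tail tells a very different story:

```python
import statistics

def p95(samples):
    """Return the 95th percentile using the nearest-rank method."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

# 100 requests: 90 cheap ones, plus 10 expensive power-user queries.
latencies_ms = [20] * 90 + [900] * 10

print(statistics.mean(latencies_ms))  # 108 -- the average looks tolerable
print(p95(latencies_ms))              # 900 -- the tail exposes the problem

# Optimizing only the worst-case path transforms the p95 metric.
improved_ms = [20] * 90 + [100] * 10
print(statistics.mean(improved_ms))   # 28
print(p95(improved_ms))               # 100
```

An engineer measured on the average might chase micro-optimizations of the cheap requests, while one measured on p95 would go straight for the expensive queries that the most engaged users actually experience.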

To figure out what to measure, start by identifying the goal you’re trying to achieve. Then ask yourself, “If I could pick one and only one metric to systematically increase over time, what would lead me to make the greatest and most sustainable impact toward my goal?”

Your answer is the productivity metric you should track. And the work that moves that metric the most is the highest-leverage activity that you should focus on.


“A comprehensive tour of our industry's collective wisdom written with clarity.”

— Jack Heart, Engineering Manager at Asana

“Edmond managed to distill his decade of engineering experience into crystal-clear best practices.”

— Daniel Peng, Senior Staff Engineer at Google

