Every Quarter We Set OKRs. Every Quarter We Forget Them. Every Quarter We Set New Ones.

Culture

In January 2024, a mid-size fintech company in Denver flew forty-three managers to a resort in Scottsdale for a two-day OKR-setting offsite. The event cost $41,200, including the facilitator, the breakout rooms, the dry-erase markers that nobody uncapped after lunch on day one, and the catered dinner where the VP of Engineering told the VP of Product that his proposed key result was "not measurable enough," initiating a silence that lasted through dessert. By February 12th, fewer than half the attendees could name their team's top objective without checking Confluence. By March, the page in Confluence had not been updated since the offsite. Nobody noticed, because nobody was looking at it.

This is not a failure story. This is the story of how OKRs work at most companies. The system functions exactly as implemented. It just has nothing to do with objectives or key results.

The Cascade Problem

OKRs are supposed to cascade. The CEO sets company-level objectives, which decompose into departmental objectives, which further decompose into team and individual objectives. In theory, this creates alignment: every person's work connects to the organization's strategic priorities through an unbroken chain of intent. In practice, it creates a game of telephone where ambition is converted into task lists through a series of increasingly desperate translations.

Consider a real cascade we documented at a Series C enterprise software company. The CEO's objective: "Become the category leader in developer productivity." The CTO interpreted this as "Ship three platform differentiators by Q3." The Director of Engineering translated it to "Reduce deployment pipeline latency by 40%." The engineering manager converted it to "Migrate CI/CD to the new infrastructure." By the time it reached the individual contributor level, the key result was "Update the FAQ page to reflect the new deployment process." An IC in Austin was updating a help document that fourteen people read per month, and this activity was, through six layers of organizational abstraction, how the company was becoming a category leader.

Nobody in this chain was wrong. Each translation made local sense. The problem is that strategic objectives do not survive decomposition the way physical forces do. When you divide a hundred pounds across ten shelves, each shelf holds ten pounds. When you divide "become the category leader" across ten teams, each team holds something that is no longer recognizable as the original intent. The weight doesn't distribute. It evaporates.

The Mid-Quarter Check-In

Six weeks into the quarter, someone in operations or the chief of staff's office sends a calendar invite titled "OKR Check-In" or, at companies that have internalized a particular dialect of corporate optimism, "OKR Celebration." The meeting lasts thirty minutes. It could last four.

The format is consistent across organizations. A team lead shares their screen, pulls up the OKR tracking spreadsheet, and reads each objective aloud. For each key result, they offer a status: green, yellow, or red. Roughly 60% are yellow, meaning "we have not started this in any meaningful way but it is not yet late enough in the quarter to call it red." The remaining results split evenly between green ("this was already done before the OKR was written") and red ("this turned out to depend on a different team that has different OKRs").

The interesting moment in these meetings is the one that doesn't happen. Nobody asks whether the objectives still matter. Market conditions may have shifted. A competitor may have launched. A key hire may have fallen through. The OKRs were written in a world that no longer exists, but the tracking spreadsheet does not have a column for "the premise of this objective has been invalidated by events." So the team adjusts their confidence scores, notes some blockers, and moves on. The meeting ends two minutes early, which everyone treats as a small victory.

The 0.7 and the Consultant Who Left

At the end of the quarter, OKRs are scored on a scale from 0.0 to 1.0. The target, per the methodology that Google popularized, is 0.7. A score of 1.0 means you set your targets too low. A score below 0.5 means something went wrong. A score of 0.7 means you were ambitious but realistic. This is the intended use.

In practice, everything gets a 0.7. Teams that completed all their key results grade themselves 0.7 because scoring higher feels immodest and invites harder targets next quarter. Teams that completed nothing grade themselves 0.7 because scoring lower triggers a review conversation nobody wants to have. The scoring system, designed to provide signal, produces noise calibrated to a single frequency. A manager at a logistics company told us his team had given themselves a 0.7 for six consecutive quarters. "We've found our number," he said, without irony.

The gap between Google's OKRs and everyone else's OKRs is worth examining. At Google, OKRs were introduced to a company where engineers had unusual autonomy, where individual projects could influence products used by billions, and where the connection between an IC's output and company value was often direct and measurable. The system worked because the environment was unusual. When a management consultancy named Stratos Peak Partners charges $185,000 to bring that system to a 900-person insurance company in Columbus, they are selling a practice without its preconditions. The consultant who runs the three-day workshop is articulate, persuasive, and genuinely knowledgeable. She leaves on Friday with a signed statement of work for the follow-up engagement. The follow-up engagement is never scheduled. The insurance company is left with a Notion template, a Slack channel called #okr-champions that goes quiet by week four, and objectives that will be dutifully set, ignored, scored, and replaced every ninety days in perpetuity.

What OKRs Actually Measure

After interviewing thirty-seven managers across twelve companies, we found one consistent outcome of the OKR process: it reliably measures an organization's ability to run the OKR process. Companies that score well are companies that are good at filling in the spreadsheet. Companies that struggle with OKRs are companies that are bad at filling in the spreadsheet. The correlation between OKR scores and business outcomes (revenue, retention, product quality, customer satisfaction) was, in our admittedly informal analysis, zero.

This does not mean OKRs are worthless. The quarterly planning offsite forces cross-functional conversations that might not otherwise happen. The act of writing down priorities, even priorities that will be abandoned, creates a temporary clarity that has real value in the week it occurs. The check-in meetings, lifeless as they are, provide a recurring excuse for managers to talk to each other about what their teams are actually doing, which is often news to everyone involved.

The cost is not that OKRs fail. The cost is that they succeed at being a ritual, and rituals absorb the energy that might otherwise go toward the harder work of actual prioritization. Every hour spent debating whether a key result is "measurable enough" is an hour not spent asking whether the objective matters at all. The framework provides the satisfying feeling of strategic rigor without requiring any. It is the organizational equivalent of making your bed: it looks like you have your life together, and sometimes that is enough.

Next quarter, the offsite is already booked. The resort in Scottsdale offered a returning customer discount.

More From the PoopOS Blog

We write about the things everyone in your organization already knows but nobody is allowed to say in a meeting.