In this series, we've been working through a structured approach to making better decisions. We started with The Decision Intelligence Paradox, then explored why AI transformation fails when the outcome isn't clear in The Outcome Problem, and showed how to define one in The Outcome Solution. In Defining Your Decision Space, we established the set of options. Most recently, The Priorities Problem named the failure: invisible drivers — incentives, values, principles — can't be balanced. This article outlines the five-step process OPERA uses to surface priorities, structure them, and move stakeholders from passive preference to active championship.
The shift from survey to structured debate is what changes everything. Not technology. Not a mandate. A process that surfaces first-principles drivers, gives them structure, and creates the conditions for influence.
"A debate over opposing issues was a vehicle for pursuing truth. Exposing the weakness in another’s argument was to strengthen and refine it, not dismiss it."
— The Next Conversation, Jefferson Fisher, on the discourse of the ancient Greeks
The Leadership Synthesis Bottleneck
For most of my career, surfacing what stakeholders actually cared about meant interviewing people one at a time — at NASA, at Fortune 500 firms — synthesizing what I heard, and handing a report to a leader who then had to figure out what to do with it. But only I could see the full picture. Stakeholders couldn't see each other's drivers. They couldn't influence each other. The leader always faced the same bottleneck: competing interpretations with no shared structure for resolving them.
Without that structure, misalignment is the default. A structured priority process typically requires 2–3 hours of stakeholder time. The question is not whether you can afford to run it. The question is whether you can afford the alternative.
What Surveys Can't Surface
A reminder from the previous article: priorities are not initiatives, announcements, or project plans. They are the incentives, concerns, values, and needs that will determine whether a decision holds or falls apart. Surveys are the default tool for gathering them — and they work well when data needs to remain anonymous. But they have a structural limitation: each person's input is invisible to everyone else. I saw this firsthand in 2011, analyzing survey results from BART's Fleet of the Future program, in which BART ran 12 community Seat Labs across the Bay Area where 2,200 riders tested seat mockups and responded to surveys. Two priorities came back that couldn't both be satisfied: rush-hour commuters wanted fabric seats for comfort, while other riders championed wipeable surfaces for sanitation, describing in visceral detail the activities they had witnessed that left the dirt and grime behind. Those descriptions still live in my head.
The survey couldn't resolve it — each group's drivers sat in separate columns of a spreadsheet, invisible to each other. The conflict over seat material required a second full survey round in 2012 (1,200+ riders) before the board had enough confidence to act. When BART surfaced the sanitation findings alongside the comfort data in that follow-up process, something shifted: 75% of riders now preferred wipeable vinyl — not because comfort stopped mattering, but because the sanitation driver, once visible, reframed the question. The board voted to replace all 669 cars with cushioned vinyl. The decision held.
This iterative survey process took multiple years to produce a decision. A structured, debate-driven priority process, as described below, would have reached a decision far faster simply by making these priorities visible to each other from the start.
Five Steps to Visible Priorities
Five-Step Process
Five steps move stakeholders from passive preference to active championship.
Step 1: Brainstorm
Every contributor surfaces their priorities in whatever language feels natural. No format required. The only instruction is: get it out of your head and into the room.
Brainstorming and structuring use different parts of the brain — separating them produces better output from both. A CFO might write "labor costs." A frontline worker might write "I just want to know what my job is." Both are valid starting points. Neither is a finished priority. That comes next.
If the brainstorm produces more than a dozen items, group similar drivers before moving to refinement — same first-principles concern, different language.
At this step, also identify non-negotiables — the legal and regulatory requirements (data security, SOX compliance) that any option must satisfy.
Brainstorm Canvas
We need to see where the money’s going
Data security concerns
Ship AI faster, less red tape
Speed up rollouts across BUs
People are pushing back on the tools
We have no real governance playbook
Stop people using random AI tools
Our training data is a mess
Teams don’t know how to use AI yet
Silos are killing cross-team AI work
Who’s watching the ethics side?
Too dependent on one vendor
Twelve priorities from three stakeholders — raw, unfiltered, and ready to refine.
Step 2: Refine
Each brainstorm item gets refined into two elements: a direction and a driver.
The driver is the first-principles reason this priority matters — not the surface label, but the real concern underneath it. "Crew retention" might mean: "I've lost three experienced people in two months and can't train replacements fast enough." That's a driver. Specific, real, and debatable in a way the label never was.
At NASA, engineers opposed a cloud computing initiative, arguing they preferred building their own machines. The surface priority was "local hardware." The actual driver was that cloud storage was perceived as inadequate for their data files. Until that concern was surfaced, the conversation couldn't move forward. Once it was, a hybrid solution emerged — an in-house, NASA-specific cloud system that all centers could use.
The direction is which way you want to move it: increase, decrease, maximize, minimize. If a priority doesn't have a direction, it can't be compared or modeled.
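The shape of a refined priority can be sketched as a small data structure. This is a minimal illustration, not OPERAScale's actual schema: the `Priority` type, field names, and example wording are assumptions for the sketch, drawn from the brainstorm canvas above.

```python
from dataclasses import dataclass
from typing import Literal

# The four directions named in the article.
Direction = Literal["increase", "decrease", "maximize", "minimize"]

@dataclass
class Priority:
    label: str            # the raw brainstorm wording
    direction: Direction  # which way the stakeholder wants to move it
    driver: str           # the first-principles reason it matters

# Refining one raw canvas item (hypothetical refinement):
refined = Priority(
    label="Data security concerns",
    direction="minimize",  # minimize exposure of company data
    driver="Last audit found 23 unapproved AI tools processing company data.",
)
print(refined)
```

A priority without a `direction` would fail this refinement, which is the point: if it can't be moved, it can't be compared or modeled.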
Refinement
Direction: which way do we want to move this?
Driver: why does this matter to this person, right now?
"One breach during AI processing triggers regulatory penalties and erodes years of trust."
"Last audit found 23 unapproved AI tools processing company data."
Each brainstorm item gets refined into a direction and a first-principles driver.
Step 3: Debate
Stakeholders share their refined drivers with each other — and the debate does the work of alignment. This is where OPERAScale diverges most sharply from a survey.
The goal is not agreement. It is movement — toward one of the three resolution paths described at the end of this article. Think of it as a spectrum: at one end, a stakeholder merely wants something — a vague preference. In the middle, they need it — the gap is real and felt. At the far end, they've got to have it — the priority is part of their identity, and they're building toward it. The debate, fueled by first-principles evidence, is what moves people along this spectrum.
When a stakeholder shares a driver that others haven't considered, positions shift. The comfort rider who learns what late-night routes look like changes their stance — not because they were told to, but because they saw something they hadn't considered. That peer-to-peer influence is faster and stickier than anything coming from the top. Just this month, Harvard Business Review validated this with survey results in which 88% of employees who identified as heavy users of AI "described peers as influential—often citing concrete examples."
Influence
Minimize time-to-value per AI use case
Increase AI ethics oversight
Increase workforce AI literacy
"Without a governance framework, fast deployment just means fast liability. We need guardrails before we scale."
"Ethics oversight matters, but it's a subset of governance — let's consolidate so we don't slow down the rollout."
"If the workforce can't adopt AI tools, none of our technical investments pay off. Literacy should be non-negotiable."
When stakeholders share first-principles drivers, positions shift — cards move across the spectrum as influence lands.
Step 4: Rank
After the debate, every stakeholder independently scores each priority using a practical scale — whether that's a 1–5 rating, high/medium/low, forced ranking, or MoSCoW categories. The specific mechanism matters less than two things: everyone scores, and the scores happen after the debate so they reflect shifted positions, not just opening bids.
My Rankings
Each stakeholder independently scores every priority — here, Sarah rates the team’s priorities on a 1–5 Likert scale after the debate.
Individual rankings are necessary but not sufficient. The real value comes when you aggregate them into a team-level view. Team consensus reveals which priorities have genuine support across the group and which were championed by one voice. It also serves as a filter: a brainstorm that produced twelve priorities typically narrows to three to five that the team collectively ranks as critical. Those are the priorities worth modeling against your options in the next step. Without this filtering, the mapping step becomes unmanageable — too many priorities across too many options, with no signal about which trade-offs actually matter.
Team Consensus
Aggregating individual rankings reveals which priorities have genuine team-wide support.
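The aggregation step can be sketched in a few lines. This is a minimal sketch under stated assumptions: a 1–5 scale, hypothetical stakeholder names and scores, and an illustrative "critical" threshold of 4.0 — none of which are prescribed by the article.

```python
from statistics import mean, stdev

# Hypothetical post-debate rankings: each stakeholder scores each priority 1-5.
rankings = {
    "Governance framework":  {"Sarah": 5, "Raj": 4, "Mei": 5},
    "Workforce AI literacy": {"Sarah": 4, "Raj": 5, "Mei": 4},
    "Vendor independence":   {"Sarah": 2, "Raj": 3, "Mei": 2},
}

CRITICAL_THRESHOLD = 4.0  # illustrative cutoff for "team ranks it as critical"

def team_consensus(rankings, threshold=CRITICAL_THRESHOLD):
    """Aggregate individual scores into a team-level view, sorted by support."""
    summary = []
    for priority, scores in rankings.items():
        values = list(scores.values())
        summary.append({
            "priority": priority,
            "mean": round(mean(values), 2),
            "spread": round(stdev(values), 2),  # high spread = one-voice champion
            "critical": mean(values) >= threshold,
        })
    return sorted(summary, key=lambda row: row["mean"], reverse=True)

for row in team_consensus(rankings):
    print(row)
```

The `spread` column is the filter the article describes: a high mean with low spread signals genuine team-wide support, while a high spread flags a priority championed by one voice.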
Step 5: Map to Options
In the previous article, we defined the decision space — the set of options on the table. Each priority is now scored against each of those options. The result is a map: options on one axis, priorities on the other, with each stakeholder's position visible to everyone.
When every stakeholder scores a priority the same way and one option satisfies all of them, the decision is straightforward. But when priorities conflict across options — one addresses role clarity but threatens schedule predictability, another protects schedules but ignores inherited errors — the map reveals exactly where the tension lives and what is being traded.
The mapping also serves as a filter. Fifteen priorities across five stakeholders naturally narrow to the three to five that genuinely drive the decision forward.
Priorities × Options Scoring Grid
| Priority | Build In-House | Partner Vendor | Hybrid Approach |
|---|---|---|---|
| Governance framework | 3 | 2 | 5 |
| Workforce AI literacy | 2 | 4 | 4 |
| Cross-team collaboration | 3 | 3 | 5 |
Each priority scored against each option reveals where consensus exists and where trade-offs are needed.
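The grid above lends itself to a simple weighted tally. A minimal sketch, assuming hypothetical consensus weights carried over from Step 4; the weights and the weighted-sum rule are illustrations, not OPERAScale's prescribed scoring method.

```python
# Scores from the article's grid: each priority rated 1-5 against each option.
grid = {
    "Governance framework":     {"Build In-House": 3, "Partner Vendor": 2, "Hybrid Approach": 5},
    "Workforce AI literacy":    {"Build In-House": 2, "Partner Vendor": 4, "Hybrid Approach": 4},
    "Cross-team collaboration": {"Build In-House": 3, "Partner Vendor": 3, "Hybrid Approach": 5},
}

# Hypothetical team-consensus weights from the ranking step (higher = more critical).
weights = {"Governance framework": 5, "Workforce AI literacy": 4, "Cross-team collaboration": 3}

def option_totals(grid, weights):
    """Weight each cell by its priority's consensus score and total per option."""
    totals = {}
    for priority, scores in grid.items():
        for option, score in scores.items():
            totals[option] = totals.get(option, 0) + weights[priority] * score
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

print(option_totals(grid, weights))  # Hybrid Approach leads under these weights
```

The total is not the decision — it is a way to see where an option's lead comes from, and which low cells (like Partner Vendor's 2 on governance) mark the trade-offs being accepted.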
Resolution Paths
Three resolution paths emerge from the process:
Alignment — priorities converge and a clear direction emerges. No further work needed.
Persuasion — evidence shared during the debate shifts positions enough that trade-offs resolve themselves.
Genuine conflict — priorities remain in tension even after the debate, and no single option satisfies every critical driver. This is not a failure of the process — it is exactly what the process is designed to surface. When this happens, cost estimates, schedule projections, and feasibility data enter the conversation and reframe the question from "whose priority wins" to "what can we actually deliver, and what does each option cost?" That analysis is the subject of the next article.
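The three paths can be expressed as a simple classification over before-and-after debate scores. This is a hypothetical heuristic for illustration only — the convergence rule and `agree_spread` tolerance are assumptions, not part of the process as defined.

```python
def resolution_path(before, after, agree_spread=1):
    """Classify one priority's outcome from per-stakeholder scores (1-5)
    recorded before and after the debate. Illustrative heuristic only."""
    def converged(scores):
        return max(scores) - min(scores) <= agree_spread
    if converged(before):
        return "alignment"        # positions were already close
    if converged(after):
        return "persuasion"       # debate evidence moved positions together
    return "genuine conflict"     # tension survives; bring in cost/schedule data

print(resolution_path(before=[5, 4, 5], after=[5, 4, 5]))  # alignment
print(resolution_path(before=[5, 1, 3], after=[4, 4, 5]))  # persuasion
print(resolution_path(before=[5, 1, 3], after=[5, 1, 4]))  # genuine conflict
```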
What Comes Next
Priorities tell you what matters. Analytics tells you what's possible. The next article moves from the debate to the model — where first-principles data transforms competing priorities into actionable trade-offs.
If you're in the middle of an initiative that feels misaligned, here is one concrete step you can take today: Ask each stakeholder to name one priority they'd be willing to move on — and what evidence would move them. You'll surface drivers you didn't know existed and create the conditions for influence before you formalize anything.
To explore how the process applies to a specific initiative, reach out at hello@operascale.com.