The RevOps Lever Stack: A Smarter Way to Decide What to Fix Next
Welcome to The RevOps Leader, where every week we listen to dozens of RevOps podcasts and extract the top actionable ideas. (For more context on these ideas, give the podcasts a listen.)
In this issue:
Deal quality: The missing link in sales forecasting
Five AI strategies for RevOps leaders
The RevOps lever stack: A framework for knowing which levers to pull
1. Deal quality: The missing link in sales forecasting
The RevOps Show, Episode 114: The Missing Link in Your Sales Forecasting - Deal Quality (Oct. 24, 2025)
TLDR
Forecasting based on pipeline stage and seller confidence misses the most important factor: whether the deal is actually qualified
Deal quality scoring requires objective criteria like budget confirmed, decision process understood, and technical requirements validated
Companies implementing quality-based forecasting see 20-30% improvement in forecast accuracy within one quarter
Your forecast is wrong. And the reason isn't bad sellers or lazy reps. It's that you're measuring the wrong thing.
The pipeline stage illusion
Most forecasting models work like this: A deal moves through stages (discovery, demo, proposal, negotiation), and each stage gets a probability percentage. Discovery = 20%, Demo = 40%, Proposal = 70%.
Sales leadership reviews pipeline, applies these percentages, and voila – forecast complete.
The problem? Pipeline stage tells you where a deal is in your process, not whether it's actually qualified to close.
A deal in "proposal" stage with no budget confirmed, no decision timeline, and only one stakeholder contact will not close at 70%. It won't close at all. But it shows up in your forecast as a likely win.
What is deal quality really?
Deal quality isn't about stage progression. It's about whether fundamental qualification criteria have been met:
Budget confirmed: Not "they seem interested" or "they mentioned a budget range." Has the buyer explicitly confirmed they have approved budget for this specific purchase?
Decision process understood: Do you know exactly who needs to approve, what their evaluation criteria are, and what timeline they're working against?
Champion identified: Is there a specific person inside the buyer organization actively advocating for your solution? Have they explicitly agreed to champion it?
Technical requirements validated: Has the buyer tested your solution against their actual use cases? Have technical objections been surfaced and resolved?
Competition mapped: Do you know what alternatives they're considering? What's driving their evaluation?
Compelling event: Is there a specific business trigger forcing them to make a decision soon?
These aren't stage indicators. They're qualification indicators. And they're binary: Either you've confirmed them or you haven't.
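To make that binary nature concrete, here's a minimal scoring sketch in Python. The six criteria come straight from the episode; the field names, the score-as-a-count approach, and the sample deal are illustrative assumptions, not a prescribed model.

```python
# Minimal deal-quality scoring sketch. The six binary criteria come from
# the episode; the field names and the scoring approach are illustrative
# assumptions, not a prescribed model.
from dataclasses import dataclass, fields

@dataclass
class DealQuality:
    budget_confirmed: bool = False
    decision_process_understood: bool = False
    champion_identified: bool = False
    technical_requirements_validated: bool = False
    competition_mapped: bool = False
    compelling_event: bool = False

    def score(self) -> int:
        # Count of criteria explicitly confirmed, 0-6.
        return sum(getattr(self, f.name) for f in fields(self))

    def gaps(self) -> list[str]:
        # Unconfirmed criteria double as the deal-review agenda.
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# A "proposal stage" deal with one criterion confirmed is a risk,
# whatever probability its stage implies.
deal = DealQuality(technical_requirements_validated=True)
print(deal.score())  # 1
print(deal.gaps())   # the five criteria still to confirm
```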
The forecasting breakthrough
Companies that rebuild forecasting around deal quality instead of pipeline stage see dramatic improvements:
Forecast accuracy improves 20-30% within one quarter
Sales and finance align because they're using objective data, not opinions
Deal reviews focus on qualification gaps instead of stage progression
Slipped deals decrease because quality issues get surfaced early
One company shared this example: A deal in late-stage negotiation looked strong based on stage and seller confidence. But the deal quality score showed red flags: No confirmed budget, unclear decision process, and weak champion commitment. RevOps flagged it as high-risk; sales pushed back, and the rep insisted it would close.
It didn't. The buyer went silent two weeks before the expected close date.
Quality scoring predicted the outcome. Stage-based forecasting missed it completely.
The enablement unlock
Quality scoring doesn't just improve forecasting. It creates a diagnostic framework for enablement:
Reps consistently missing "budget confirmed" need training on how to have financial conversations
Deals scoring low on "champion identified" reveal relationship-building skill gaps
Weak "decision process understanding" signals discovery question deficiencies
You can identify specific skill gaps by analyzing quality score patterns across reps and deals.
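Sketching what that pattern analysis might look like: tally which criteria each rep most often fails to confirm across their deals. The rep names and gap records below are made-up data; a real version would pull from the CRM.

```python
# Tally unconfirmed quality criteria per rep to surface skill gaps.
# The deal records are made-up data; in practice they come from the CRM.
from collections import Counter, defaultdict

deals = [
    {"rep": "ana", "gaps": ["budget_confirmed", "champion_identified"]},
    {"rep": "ana", "gaps": ["budget_confirmed"]},
    {"rep": "ben", "gaps": ["decision_process_understood"]},
]

gaps_by_rep = defaultdict(Counter)
for deal in deals:
    gaps_by_rep[deal["rep"]].update(deal["gaps"])

# Ana repeatedly missing "budget_confirmed" points to training on
# financial conversations; Ben's miss points to discovery questioning.
for rep, gap_counts in gaps_by_rep.items():
    print(rep, gap_counts.most_common(1))
```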
What about AI and predictive analytics?
Quality scoring is the foundation that makes AI useful. If you feed a machine learning model garbage data (stage-based probabilities), you get garbage predictions.
But if you feed it clean quality data – objective, binary qualification indicators – the model can find patterns you miss. Maybe deals without budget confirmed but with strong technical validation close at 30% in Q4 but only 15% in Q2. The AI can surface those nuances.
The key: Start with quality scoring manually. Get it working. Then layer in AI to find patterns and improve the model.
2. Five AI strategies for RevOps leaders
Triario, Webinar: The future of RevOps: 5 actionable AI-driven strategies (Oct. 2, 2025)
TLDR
95% of AI pilots fail because companies bolt AI onto existing processes instead of reimagining workflows for AI-first thinking
Start with high-effort, high-judgment tasks like pipeline QA and lead scoring rather than low-value automation like task creation
The maturity journey has four stages: manual chaos, process definition, automation, and finally AI-powered optimization
Most executives have been tasked with implementing AI in their GTM function. And most of them are overwhelmed, don't know where to start, and end up implementing AI in ways that deliver minimal value.
Why 95% of AI pilots fail
That's not a typo. 95% of AI initiatives fail to deliver P&L improvements. And the reason is simple: Companies are bolting AI onto existing processes instead of reimagining how work should be done.
The typical pattern: Someone hits an arbitrary trigger (right industry, right title), and AI fires off a personalized email.
The problem? AI-generated messaging isn't valuable because of volume. It's valuable because of relevance. If you're using AI to send more irrelevant emails faster, you've missed the point.
Start with the right problems
The companies succeeding with AI aren't starting with low-effort tasks. They're starting with high-effort, high-judgment problems that drain time and mental energy:
Strategy 1: Use AI for lead enrichment and qualification
Instead of having reps manually research prospects and guess at fit, use AI to:
Pull firmographic and technographic data from multiple sources
Score leads based on fit criteria (company size, tech stack, hiring patterns)
Prioritize outreach based on likelihood to engage
This isn't about generating more leads. It's about focusing human effort on the highest-potential opportunities.
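As a sketch of what that fit scoring could look like once enrichment data is in hand: a simple weighted score over the criteria named above. The weights, thresholds, and field names are invented for illustration; in practice you'd calibrate them against your own closed-won data.

```python
# Illustrative lead fit-scoring sketch. The criteria mirror the ones
# named above; the weights and cutoffs are assumptions, not
# recommendations -- calibrate against your own closed-won data.
def fit_score(lead: dict) -> float:
    score = 0.0
    # Company size in a hypothetical ICP sweet spot.
    if 100 <= lead.get("employee_count", 0) <= 2000:
        score += 0.4
    # Tech stack contains tools your product integrates with.
    if {"salesforce", "hubspot"} & set(lead.get("tech_stack", [])):
        score += 0.3
    # Hiring pattern signals growth in the buying function.
    if lead.get("open_revops_roles", 0) > 0:
        score += 0.3
    return score

leads = [
    {"name": "Acme", "employee_count": 450, "tech_stack": ["salesforce"], "open_revops_roles": 2},
    {"name": "Tiny Co", "employee_count": 12, "tech_stack": [], "open_revops_roles": 0},
]
# Prioritize outreach by fit, highest first.
for lead in sorted(leads, key=fit_score, reverse=True):
    print(lead["name"], fit_score(lead))
```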
Strategy 2: Implement AI-powered pipeline QA
Train AI to analyze your historical closed-won and closed-lost deals to identify quality indicators:
What patterns predict closes? (Multi-threading, economic buyer engagement, etc.)
What signals reveal risk? (Stalled activity, single-threaded relationships)
How do your current open deals score?
This gives you objective deal quality scores separate from seller confidence or pipeline stage.
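What might "train AI on your historical deals" look like in practice? Here's a minimal sketch using scikit-learn's logistic regression; the tool choice, feature names, and toy data are assumptions on our part, since the webinar doesn't prescribe an implementation.

```python
# Sketch of AI-powered pipeline QA: learn which signals predicted past
# wins, then score open deals. scikit-learn, the features, and the toy
# data are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: multi_threaded, economic_buyer_engaged, activity_last_14d
X_hist = np.array([
    [1, 1, 1],  # closed-won
    [1, 0, 1],  # closed-won
    [0, 0, 0],  # closed-lost
    [0, 1, 0],  # closed-lost
    [1, 1, 0],  # closed-won
    [0, 0, 1],  # closed-lost
])
y_hist = np.array([1, 1, 0, 0, 1, 0])  # 1 = won

model = LogisticRegression().fit(X_hist, y_hist)

# Score current open deals independently of stage or seller confidence.
open_deals = np.array([
    [1, 1, 1],  # multi-threaded, buyer engaged, active
    [0, 0, 1],  # single-threaded, no economic buyer, stage says "proposal"
])
print(model.predict_proba(open_deals)[:, 1])  # objective quality scores
```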
Strategy 3: Surface forecast risks with AI
Don't just forecast based on pipeline stage. Use AI to:
Analyze deal velocity patterns (time-in-stage vs. historical norms)
Track buyer engagement trends (increasing or decreasing activity)
Identify deals that look similar to past slipped or lost opportunities
This turns forecasting from a prediction exercise into a diagnostic tool that tells you where to focus.
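A simple version of the velocity check, sketched below: compare each deal's time-in-stage to the historical mean and flag outliers. The stage norms and the two-standard-deviation cutoff are assumptions for illustration.

```python
# Velocity-based risk flag: compare time-in-stage against historical
# norms. The stage statistics and 2-sigma cutoff are illustrative.
from statistics import mean, stdev

# Historical days-in-stage for deals that eventually closed (assumed data).
historical_days = {"proposal": [10, 14, 9, 12, 15, 11, 13]}

def velocity_risk(stage: str, days_in_stage: int, sigmas: float = 2.0) -> bool:
    # True if the deal has sat in stage far longer than historical norms.
    hist = historical_days[stage]
    return days_in_stage > mean(hist) + sigmas * stdev(hist)

print(velocity_risk("proposal", 13))  # False: within the normal range
print(velocity_risk("proposal", 30))  # True: looks like past slipped deals
```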
Strategy 4: Analyze enablement effectiveness
Use AI to review sales calls and email threads to understand:
Which messaging resonates most with buyers
What objections are most common and how they're being handled
Where reps are struggling vs. excelling
This makes enablement data-driven instead of assumption-driven.
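As a crude stand-in for the AI review, here's a sketch that tallies common objection phrases across call transcripts. A real implementation would use an LLM or a conversation-intelligence tool; the phrases and transcript format here are assumptions.

```python
# Crude rule-based stand-in for AI call review: tally objection phrases
# across transcripts to see which objections recur. A real version would
# use an LLM; the phrases and transcripts are assumptions.
from collections import Counter

OBJECTION_PHRASES = {
    "pricing": ["too expensive", "budget"],
    "timing": ["next quarter", "not a priority"],
    "competition": ["other vendor", "already using"],
}

def tag_objections(transcript: str) -> list[str]:
    text = transcript.lower()
    return [tag for tag, phrases in OBJECTION_PHRASES.items()
            if any(p in text for p in phrases)]

calls = {
    "rep_a": "Honestly this feels too expensive for next quarter.",
    "rep_b": "We're already using another vendor for this.",
}
counts = Counter(tag for t in calls.values() for tag in tag_objections(t))
print(counts.most_common())  # which objections come up most often
```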
Strategy 5: Personalize at scale
Use AI to customize outreach based on:
Specific buyer context (recent company news, tech stack, stated priorities)
Persona-specific messaging (what matters to this specific role)
Stage-appropriate content (awareness vs. decision-stage materials)
The key: Don't use AI to send more generic messages. Use it to send fewer, highly relevant messages.
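One way to operationalize "fewer, highly relevant messages": assemble the buyer context into a prompt for whatever LLM you use, rather than blasting a template. A minimal sketch, with hypothetical field names and prompt wording of our own invention:

```python
# Sketch of context-driven outreach: build a prompt from specific buyer
# context for whatever LLM you use. Field names and prompt wording are
# illustrative assumptions.
def build_outreach_prompt(buyer: dict) -> str:
    return (
        f"Write a 3-sentence email to a {buyer['role']} at {buyer['company']}.\n"
        f"Recent company news: {buyer['recent_news']}\n"
        f"Their stated priority: {buyer['stated_priority']}\n"
        f"Buying stage: {buyer['stage']} -- match the content to that stage.\n"
        "Reference the news and priority specifically; no generic claims."
    )

buyer = {
    "role": "VP of Revenue Operations",
    "company": "Acme",
    "recent_news": "announced expansion into EMEA",
    "stated_priority": "forecast accuracy",
    "stage": "awareness",
}
print(build_outreach_prompt(buyer))  # feed this to your LLM of choice
```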
3. The RevOps lever stack: A framework for knowing which levers to pull
RevGenius, Webinar: Precision GTM: The RevOps Lever Stack (Oct. 21, 2025)
TLDR
RevOps leaders have dozens of potential optimization levers. The key is knowing which ones will drive the most impact for your specific situation.
The "lever stack" framework organizes initiatives into infrastructure, efficiency, and growth layers to prioritize correctly.
Most teams jump to growth levers before fixing infrastructure problems, which leads to scaling broken processes.
The tyranny of too many options
The biggest challenge in RevOps isn't finding things to improve. It's choosing which improvements will actually move the needle.
Every vendor pitch, every conference talk, and every LinkedIn post suggests another "must-have" initiative. Build a data warehouse. Implement an AI agent. Overhaul your ICP. Launch a new enablement program.
How do you decide where to focus?
Introducing the lever stack framework
Think of RevOps initiatives in three layers:
Infrastructure levers sit at the foundation. These are things like:
Data quality and governance
CRM architecture and field structure
Integration stability
Basic reporting accuracy
Efficiency levers build on solid infrastructure:
Process automation and workflow optimization
Sales enablement and training effectiveness
Pipeline management and velocity improvements
Forecasting accuracy
Growth levers sit at the top:
Expansion into new markets or segments
Launch of new products or motions (like PLG)
Advanced personalization and ABM
Predictive analytics and AI-driven insights
Why sequence matters
Here's the mistake most teams make: They jump directly to growth levers because they sound exciting and promise big results.
"Let's implement AI-driven lead scoring!" But your CRM data is messy, so the AI learns from garbage and produces garbage predictions.
"Let's launch an ABM program!" But your attribution tracking doesn't work, so you can't measure if ABM is actually working.
"Let's add product-led growth!" But your customer data isn't integrated between your product and CRM, so you can't see the full journey.
You end up scaling broken processes or building sophisticated programs on shaky foundations.
Start with infrastructure, even when it's boring
Infrastructure work doesn't win awards. Nobody writes LinkedIn posts celebrating "finally got our lead source tracking accurate." But it's essential.
If your CRM fields are inconsistent, your integrations are flaky, or your reporting requires manual data cleanup, fix those first. Everything else depends on them.
Efficiency before growth
Once your infrastructure is solid, focus on efficiency levers. Make your current processes faster, smoother, and more reliable before trying to scale into new areas.
Can your sales team close deals efficiently with your current tools and processes? Fix that before expanding into new segments.
Is your pipeline velocity consistent and predictable? Optimize that before launching experimental new programs.
Can your team forecast accurately within a reasonable margin of error? Get there before building complex predictive models.
Growth levers amplify what's already working
Only after your infrastructure is stable and your core processes are efficient should you pull growth levers.
Why? Because growth levers amplify. If you amplify a broken process, you just break things faster at larger scale. If you amplify an efficient, well-instrumented process, you get compounding returns.
How to diagnose which layer you're in
Ask yourself:
Do I trust my CRM data? If no, you need infrastructure work.
Can I forecast accurately within 10%? If no, you need efficiency work.
Are my core processes running smoothly? If no, you need efficiency work.
Am I ready to expand or add complexity? If yes to all above, consider growth levers.
Most teams discover they need to fix infrastructure even though they want to work on growth.
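If you want that checklist as a runnable gut-check, here it is as a tiny decision helper. The question wording follows the list above; the self-reported booleans are obviously a simplification.

```python
# The diagnostic checklist above as a tiny decision helper. The ordering
# encodes "infrastructure before efficiency before growth."
def next_layer(trust_crm_data: bool,
               forecast_within_10pct: bool,
               core_processes_smooth: bool) -> str:
    if not trust_crm_data:
        return "infrastructure"
    if not (forecast_within_10pct and core_processes_smooth):
        return "efficiency"
    return "growth"

# The common outcome: eager for growth, pointed back at infrastructure.
print(next_layer(trust_crm_data=False, forecast_within_10pct=True,
                 core_processes_smooth=True))  # "infrastructure"
```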
Disclaimer
The RevOps Leader summarizes and comments on publicly available podcasts for educational and informational purposes only. It is not legal, financial, or investment advice; please consult qualified professionals before acting. We attribute brands and podcast titles only to identify the source; such nominative use is consistent with trademark fair-use principles. Limited quotations and references are used for commentary and news reporting under U.S. fair-use doctrine.