In March 2025, Arjun Pillai took the stage at B2BMX for a 30-minute talk titled "The Death of the MQL."
Over 90 practitioners joined us; the title said something people had been thinking for years but hadn't said out loud. Afterward, practitioners came up one by one, not to debate the premise but to describe their own version of the same problem.
The thesis that generated all of it: "The MQL isn't dead because AI killed it. It's dead because buyers evolved and MQLs didn't." — Arjun Pillai, B2BMX 2025
Of all the traffic hitting a B2B website, 1.5% fills out a form. This post is about the 5.5% who didn't, and what fifteen years of fixes failed to do about them.
The Original Problem the MQL Actually Solved
I want to give the MQL its due credit. When marketing automation platforms launched in the mid-2000s, they solved a real problem: you couldn't talk to every visitor. You needed a way to separate signal from noise without staffing a call centre.
So the industry built a proxy. Behavioural signals (page views, email clicks, content downloads) were combined with form data (job title, company size, budget range) to produce a score. When a lead crossed a threshold, it went to sales.
Practical. Measurable. For its time, clever.
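The mechanics of that proxy fit in a few lines. This is an illustrative sketch only; every signal name, weight, and threshold here is invented, not any platform's actual model:

```python
# Illustrative MQL scoring sketch: weighted behavioural and firmographic
# signals summed against a hand-tuned threshold. All weights are hypothetical.

SIGNAL_WEIGHTS = {
    "pricing_page_view": 15,
    "email_click": 5,
    "whitepaper_download": 10,
    "webinar_attended": 20,
}

FIRMOGRAPHIC_WEIGHTS = {
    "director_or_above": 15,
    "company_size_200_plus": 10,
}

MQL_THRESHOLD = 50

def score_lead(behaviours: list[str], firmographics: list[str]) -> int:
    """Sum the weights of every observed signal; unknown signals score zero."""
    score = sum(SIGNAL_WEIGHTS.get(b, 0) for b in behaviours)
    score += sum(FIRMOGRAPHIC_WEIGHTS.get(f, 0) for f in firmographics)
    return score

def is_mql(behaviours: list[str], firmographics: list[str]) -> bool:
    """A lead becomes an MQL the moment its score crosses the threshold."""
    return score_lead(behaviours, firmographics) >= MQL_THRESHOLD

# A director who clicked two emails and downloaded a white paper:
# 5 + 5 + 10 + 15 = 35, under the threshold, so no handoff happens.
```

The limitation the rest of this piece traces is visible right in the sketch: the function can rank leads, but nothing in it knows what the buyer was actually trying to solve.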
At the time, the buyer would wait. They would tolerate friction, trade their email for a white paper, and trust that a rep would follow up with something useful.
But even in its prime, the MQL had limits it never advertised. It told you a lead existed and that they had crossed a threshold, but it couldn't tell you what they were actually trying to solve, whether they had the authority to buy, or what would make them walk away. Those gaps were acceptable when the buyer would wait long enough for a rep to fill them in. When the buyer stopped waiting, the gaps became the whole problem.
Fifteen Years Later, It Still Isn't Fixed
What happened next is worth tracing carefully. Because each generation of fixes was smarter than the last and none of them worked.
Progressive profiling was the first attempt. Instead of one long form, spread the questions across multiple visits. Clever in theory, but in practice: same snapshot data, collected slower.
Behavioural scoring added sophistication. Weight the pricing page visit more than the blog post read, score the webinar attendance higher than the newsletter click. It produced better-ranked lists of the same signals, but the ceiling remained.
Chatbots were the most disruptive promise. Finally: conversation. Except most chatbots ran on decision trees. When a buyer went off-script, which real buyers do constantly, the bot had one response: "Let me connect you to someone who can help."
Arjun called it directly: chatbots were "glorified forms, hoodwinking people." A form with a friendlier UI.
Gating content was the cultural companion to all three. The belief that more downloads equaled more pipeline led, as one B2B marketer put it, to "chaos in the collaboration between sales and marketing." The 3% who filled out forms got optimised; the 97% who didn't remained invisible.
Intent data was the most sophisticated fix yet: third-party signals identifying accounts researching your category, layered on top of existing MQL scoring. It genuinely improved account identification, but it still couldn't tell you what the buyer needed to know before they'd move forward. Account-level signals are aggregated and often anonymous: you can see that someone at a target account spent time on your pricing page, but you don't know what question they had, whether they found the answer, or why they left.
Lastly, ABM was the industry's attempt to fix the trust gap from a different angle. Instead of scoring individuals, warm up the whole account: get multiple stakeholders engaged, build account-level scores, and bring the buying committee into the picture earlier. It was a genuine step forward, yet it still couldn't complete the conversation. It could tell you an account was warm, but it couldn't tell you what the champion needed to hear to move forward.
The pattern across all five eras: every fix optimised for speed of contact capture. Not one asked: did we reduce uncertainty enough for this buyer to take the next step?
The Structural Reason Every Fix Hit the Same Ceiling
The MQL was built on a proxy: a substitute indicator designed to measure potential revenue rather than revenue itself.
Behavioural signals were never direct evidence of buying intent; they were the best available approximation when direct evidence was impossible to collect at scale. Improving the proxy doesn't close the gap between the signal and the thing it's supposed to represent. A better thermometer doesn't cure the fever.
One practitioner in an r/b2bmarketing thread described their MQL threshold with surgical precision: "The MQL threshold you have set essentially indicates that a person has an email address and hasn't unsubscribed right away." In other words, you're not identifying real buying interest. You're just identifying people who exist in a database and tolerate one interaction.
Another practitioner diagnosed the same flaw from a different angle: "When someone is labeled as 'qualified,' it often just indicates they match a certain profile rather than being prepared to implement any changes. They may appear impressive on paper, but during discussions with the sales team, it becomes clear that they lack urgency or genuine commitment."
The downstream cost of this flaw shows up in three places. Arjun named it at B2BMX as the MQL tax: "The MQL tax isn't just about the leads you miss. It's about the inefficiency baked into every lead you do capture."
Here is what the tax looks like in practice. A marketer runs a content syndication campaign — puts a white paper together, gives it to a third-party publisher, gets back 500 leads. Name, email, phone number. They give the list to sales. The salesperson makes the call. The person on the other end says: "What white paper? I have no idea what you're talking about." The salesperson is done. Not frustrated but done. "All the leads are junk. I don't trust marketing at all." That single call, repeated across hundreds of reps and dozens of campaigns, is how marketing loses the trust of sales.
At lead drop-off: MQLs contain minimal discovery information, so SDRs re-qualify from scratch — wasting time on leads that should have been disqualified at the source. On the sales call: no context from the initial visit means cold discovery, frustrated buyers, and first meetings that start with "So, tell me about your business." In the organisation: sales stops trusting the queue. Leads sit. Opportunities disappear.
The most sophisticated version of the fix — closed-won analysis reverse-engineered into scoring models — still only traces buyers who filled out a form. The 5.5% who were ready but left without a conversation are invisible in the data. You cannot reverse-engineer what you never captured.
Why Teams Kept Running It Anyway
This is the part that requires honesty.
It wasn't ignorance. The people running MQL programs in 2024 and 2025 knew the conversion rates. They had seen the sales team ignore the queue. They had sat through enough QBRs to know the math didn't hold.
They kept running it because changing felt riskier than optimising.
Three forces kept MQL alive long after it stopped working.
Careers
CMOs and VPs of Demand Gen built their reputations on MQL volume. Changing the metric mid-tenure means admitting the last several years were spent optimising the wrong thing. That conversation is hard to initiate voluntarily.
Finance
CFOs understand MQL math. They've built budget models around it. Switching requires re-educating finance, rebuilding dashboards, and redefining what "marketing contribution to pipeline" means. As one practitioner on r/b2bmarketing put it: "Marketing often aims to increase lead volume, which leaves sales overwhelmed with unqualified MQLs. This erodes the trust between the two departments."
The measurement stack
Attribution platforms, CRMs, marketing automation workflows — all built around MQL. Changing the metric means changing the infrastructure that reports on it.
The cultural damage compounds the operational one. The final consequence of the MQL trust breakdown wasn't passive — it was institutional. Sales didn't just stop working the queue. They built an entire parallel infrastructure to escape marketing entirely. Tools like Outreach and ZoomInfo took off because sales teams were the main buyers. The pitch was simple: "We don't want anything to do with marketing. We are just going to generate our own leads." A multi-billion-dollar category of outbound tooling exists largely because MQL destroyed the trust between two functions that were supposed to work together.
Once sales files marketing into the "noise" category (and the data suggests this happens consistently), every subsequent MQL starts with a credibility deficit. Sellers' perception of marketing is almost always binary: helpful or noise. When MQL volume keeps arriving without quality, the bin fills quickly, and leads that might have converted sit unworked. As one practitioner described the result: "The stage between MQL and SQL often becomes a wasteland."
What Actually Needed to Change
The proxy was never the problem. The assumption underneath it was.
Every fix in the 15-year timeline was what is now called Assisted Marketing: AI tools that made humans faster at the same broken motion. Better scoring, smarter chatbots, richer intent data, broader ABM coverage. Each one still required a human to act on the output. The buyer landed at 11pm. The tool was ready. The human wasn't. The AQL, introduced below, isn't just a new metric. It's the output of a different operating model, one where the agent executes the qualification motion autonomously, without a human in the loop at each step.
MQL was built on the belief that qualification is something you measure — a score a lead earns through accumulated signals. The right framing: qualification is something you complete. It's a decision point, not a threshold.
A more useful definition: a lead is qualified when you have reduced uncertainty enough for the buyer to take the next step.
That's not a score. It's an outcome. And that outcome requires conversation — questions that adapt to what the buyer actually needs to know, follow-ups that handle edge cases, responses that surface objections before the buyer walks away carrying them.
Forms capture a snapshot. Chatbots promised conversation and delivered routing. Intent data identified the account. ABM warmed it up. The entire fifteen-year timeline optimised for contact capture or account coverage. The missing piece was always the conversation itself.
One practitioner who tried building a better readiness model put it plainly: "The sole true sign of readiness to buy is when they ask for a demonstration." Everything else is an approximation of that moment. Fifteen years of fixes made the approximation more sophisticated. None of them produced the moment itself.
The Qualification System That Addresses the Root Cause
Marketers have been asking for this, in different words, for years.
One SDR manager put it directly: "SDRs should have access to relevant context rather than just a name on a list. What led to the designation of an MQL? Which pages did the prospect explore? What issues were they investigating? The initial outreach should directly reference these points."
Another thread asked the question plainly: "Is anyone measuring buyer readiness instead of just MQLs?"
The answer is the Agent Qualified Lead (AQL).
An AQL isn't the output of a single tool or a single conversation. It is a standard — a definition of what "qualified" means in a world where agents are involved at every stage of the buying journey. Four outcomes define whether a buyer is genuinely qualified to talk to sales:
- Use Case Clarity: The buyer understands whether the product solves their specific problem, not problems like theirs but their exact scenario
- Constraints Identified: Budget, timeline, tech stack, compliance needs, approval process — the real-world limitations that will determine whether a deal closes
- Objections Surfaced: The concerns the buyer has that, left unaddressed, will cause them to disengage
- Next Step Defined: The buyer knows what happens next and has explicitly asked for it
When at least three of the four outcomes are complete and the buyer has raised their hand ("yes, let's book a call"), that's an AQL.
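The standard is concrete enough to express as a data structure. A minimal sketch with hypothetical field names, illustrating the definition rather than any product's API:

```python
from dataclasses import dataclass

@dataclass
class QualificationState:
    """The four AQL outcomes, which any agent can contribute across the journey."""
    use_case_clarity: bool = False
    constraints_identified: bool = False
    objections_surfaced: bool = False
    next_step_defined: bool = False
    hand_raised: bool = False  # the explicit "yes, let's book a call"

    def completed_outcomes(self) -> int:
        """Count how many of the four outcomes are done."""
        return sum([self.use_case_clarity, self.constraints_identified,
                    self.objections_surfaced, self.next_step_defined])

    def is_aql(self) -> bool:
        # Per the standard above: at least three of four outcomes complete,
        # plus an explicit hand-raise from the buyer. No score, no threshold.
        return self.completed_outcomes() >= 3 and self.hand_raised
```

Because the state is just four booleans plus a hand-raise, any agent at any layer can contribute an outcome; the check itself doesn't care which agent completed what.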
The critical distinction: these four outcomes don't have to be completed by a single agent in a single session. Different agents contribute different outcomes across the buying journey. That's what makes AQL a category-level metric rather than a product feature.
Layer 1: the buyer's own agents
ChatGPT, Claude, Gemini. Before a buyer reaches your website, they have often already achieved Use Case Clarity through their own research. They asked an LLM which tools solve their problem. They got a shortlist. They may have started identifying Constraints — "does this work with Salesforce?" is a constraint question. By the time they land on your pricing page, one or two of the four outcomes may already be complete. Not because of anything you did but because their agent did the work.
Layer 2: your agents
The agentic experiences your brand controls across website, email, ads, and content. This is where your agent picks up wherever Layer 1 left off. If Use Case Clarity is already done, it doesn't re-explain your product. It starts at Constraints. It surfaces Objections that no static content would surface. It defines the Next Step with the buyer's explicit hand-raise.
The "skip the front-end demo" moment illustrates this precisely. A buyer arrived on a sales call having already interacted with Docket's AI Marketing Agent. Their first words: "Skip the front end — they've already seen it. Show me the back end."
Use Case Clarity was done before they ever reached the website — Layer 1 handled it. The agent completed the remaining outcomes. By the time a human got involved, the buyer was at 40 on the buying journey, not 10.
Another example: one company spent 22 minutes with Docket's agent before their first human call. When that call happened, they arrived with their entire buying committee — not because sales pushed for it, but because the conversation had already moved them far enough that the committee was the natural next step. The agent didn't just qualify an individual. It accelerated the organisational decision.
An MQL gives a rep a name, a score, and a list of pages visited. An AQL gives a rep a context card: which outcomes are complete, which agent completed them, what constraints were surfaced, what objections are live, and a buyer who explicitly asked to continue. As Arjun described it at B2BMX: "It's like receiving a dossier, not just a business card."
This is not a chatbot. A chatbot follows a script and hands off when the buyer goes off it. An AI agent reasons through the conversation — adapts, qualifies, handles the edge case, takes action. The difference isn't the interface. It's what happens when the buyer asks something the flowchart didn't anticipate.
In practice, teams deploying an AI Marketing Agent on their highest-intent pages see conversation start rates of 36%, versus 13% on legacy form flows. Website conversion lifts of 40–60% are typical of what's observed. Docket's own pipeline shows AQLs converting approximately four times faster than traditional inbound leads.
AQL Is a Category Metric, Not a Product Output
Agent qualification is already happening, whether marketers choose to participate in it or not.
Arjun is direct about the implications: "Marketers are not moving to AQL because they want to. This is not something marketers control. Whether you buy Docket or not, it doesn't matter. ChatGPT is doing qualification for you." Layer 1 is running regardless.
The movement isn't being caused by Docket. Docket is naming something that was already in motion.
This reframes AQL from a product output to a category-level measurement question. The question shifts from "where did this lead come from?" to "where did this buyer get qualified, and how far along are they?" Those are different questions. The second one reflects how the buying journey actually works in 2026.
Your job isn't to own every agent touchpoint — that's not possible. Your job is to ensure your Layer 2 agents pick up from wherever Layer 1 left off, complete the remaining outcomes, and hand sales a buyer who is at 40, not 10.
AQL is a metric — an outcome — not a process. You don't have to rebuild your stack to start recognising AQLs when they appear. You start by seeing the buyer journey clearly.
Where to Start — Not a Cliff Edge, an On-Ramp
The math on what's recoverable is worth stating plainly before getting to the paths.
Of all the traffic hitting a typical B2B website, 1.5% converts via form. 93% are low-intent browsers — correct to ignore. The 5.5% in the middle arrived high-intent, with a specific question, ready to move forward. They left without a conversation. Not because they weren't qualified — because there was nothing to have a conversation with.
Teams running AQL on their highest-intent pages aren't generating more traffic. They're recovering that 5.5%. Conversation start rates of 36% versus 13% on legacy form flows. Website conversion lifts of 40–60% from the same traffic. The funnel doesn't need to get wider at the top. It needs to stop leaking in the middle.
Most teams don't need a cliff edge to get there. They need a starting point.
One thing to address directly: if you adopt AQL, your MQL volume will likely go down. Traffic to forms is declining as more buyers self-qualify before they reach you. That number will look bad on a dashboard built around MQL volume. But conversion goes up, in some cases three to five times. The metric that mattered was never the number of leads. It was the number of leads that became opportunities. AQL optimises for that.
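The trade-off is easy to sanity-check with arithmetic. The volumes below are invented for illustration; only the three-to-five-times conversion multiple comes from the range above:

```python
# Hypothetical illustration: lead volume falls, opportunity count rises.
mqls_per_month = 500
mql_to_opp_rate = 0.05                # 5% of MQLs become opportunities
mql_opportunities = mqls_per_month * mql_to_opp_rate    # 25 opportunities

aqls_per_month = 150                  # volume drops as buyers self-qualify
aql_to_opp_rate = 0.20                # 4x the MQL rate, within the 3-5x range
aql_opportunities = aqls_per_month * aql_to_opp_rate    # 30 opportunities

# The dashboard line that drops (lead count) is the proxy; the line that
# rises (opportunities) is the thing the proxy was supposed to predict.
print(mql_opportunities, aql_opportunities)  # prints: 25.0 30.0
```

With 70% fewer leads, the opportunity count still goes up, which is why judging the switch by MQL volume alone misreads it.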
One question enterprises ask before deploying an agent: what if it says something wrong? It is a fair question and it deserves a direct answer. The AI Marketing Agent doesn't answer from open-ended AI inference — it answers from your approved knowledge. You define what it can say, what it can't say, and when it escalates to a human. If a buyer asks about pricing, security compliance, or competitive positioning — topics where improvisation is dangerous — the agent answers from what you have approved, or it escalates in real time. Every answer is auditable. Every escalation is logged. The guardrails aren't a configuration step. They're the foundation the agent runs on.
Three paths, depending on where you are:
Keep MQL, improve scoring: Valid if you're not ready for change. You'll gain 2–3% conversion improvement. The ceiling is structural. This buys time, not transformation.
Run MQL and AQL in parallel: Deploy an AI agent on your pricing page — the highest-intent surface on your site, where visitors are already evaluating cost. Run it for 30 days. Track AQL-to-opportunity conversion against your MQL baseline. Let the data make the internal case. This is the pragmatic path and the right starting point for most teams.
Full transition: A 90-day migration where MQL is phased out and AQL becomes the primary inbound metric. Works for teams with executive mandate and urgency. Not a requirement to start.
- If you're a CMO running demand gen: the parallel-run option lets you recover that 5.5% — without touching your existing MQL dashboard or requiring a board conversation.
- If you're a CRO: the context card your reps receive from an AQL is what they've been asking marketing for since the first time they saw a name on a list.
- If you're in RevOps: the AQL framework doesn't require you to rebuild your attribution model on day one. It requires you to add one new data point — conversation outcome — to the model you already have.
One RevOps leader on r/b2bmarketing ran a manual version of this — a five-minute SDR pre-qual call added before every MQL handoff. The result: MQL volume fell 40%. MQL-to-opportunity conversion rose from 9% to 28% in six months. Pipeline velocity doubled. Sales stopped resenting marketing. They added a conversation layer by hand. AQL builds it into the system.
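Those quoted figures are internally consistent, which is worth checking. Normalising to 100 original MQLs:

```python
# Sanity check of the practitioner's reported numbers, per 100 original MQLs.
original_mqls = 100
original_conversion = 0.09            # 9% MQL-to-opportunity before the change
opps_before = original_mqls * original_conversion       # 9 opportunities

surviving_mqls = original_mqls * (1 - 0.40)   # volume fell 40%, leaving 60
new_conversion = 0.28                 # conversion rose to 28%
opps_after = surviving_mqls * new_conversion            # ~16.8 opportunities

# Roughly 1.9x the opportunities from 40% fewer leads, consistent with
# the reported doubling of pipeline velocity.
print(opps_before, opps_after)  # prints: 9.0 16.8
```

The pre-qual call filtered out leads that would never have converted, so nearly everything that survived was worth working.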
Demandbase automated 93% of their seller queries in under two weeks of deploying an AI Marketing Agent. Southwest Solutions now generates over 100 minutes of buyer interaction on their website every day — the equivalent of ten qualification meetings, running without a single human in the loop. The implementation timeline objection is almost always about the last platform someone tried — not this one.
You don't have to burn down your funnel to start. Deploy one agent on one high-intent page and see what comes back.
The Conclusion Worth Writing
The MQL didn't fail because the people who built it were wrong. It failed because the buyer it was designed for moved on — and the metric didn't.
Fifteen years of fixes made the proxy more sophisticated. None of them questioned whether a proxy was the right foundation.
The teams that move to AQL first won't just have better conversion rates. They'll have something more valuable: a qualification process their sales team actually trusts, because the filter is built on completed conversations — not inferred intent.
That's a different kind of pipeline. And it starts with a different kind of question — not "how many points did this lead earn?" but "did this buyer get what they needed to take the next step?"