"AI SDRs Don't Work" —From the Guy Running the Company That Helped Create the Category
11x <> The Signal
If you were forwarded this newsletter, join 8,128 weekly readers—some of the smartest GTM founders and operators—by subscribing here:
Thanks for reading this hand-poured, 100% organic, farm-to-table (human-written) newsletter.
“AI SDRs don’t work.”
That’s what Prabhav Jain (CEO of 11x, one of the most well-funded companies in the AI SDR space, backed by a16z and Benchmark) told me. The company that helped create the category is telling you the category label is wrong.
I wanted to talk to Prabhav because I’ve been living in this world for a long time. I co-founded one of the first AI SDR companies (Hexa.ai) back in 2016. We positioned ourselves as “replacing SDRs with AI.” Which is funny because 1) AI was barely a thing back then, and 2) I think “AI SDR” is bad positioning. Hexa was acquired in 2018 by OutboundWorks, which autonomously booked meetings on behalf of AEs. Years later, I spent time at Apollo running outbound (internally titled “AutoSDR”) for 20 AEs and no SDRs. So I’ve been building, buying, and deploying this stuff from multiple angles for nearly a decade.
And I’ve had some version of the same conversation about 50 times in the last six months. A CRO or VP of Sales tells me they’ve been told to “figure out AI.” Their CEO or board wants them to triple pipeline without adding headcount.
So they start taking calls from AI SDR vendors. Every vendor says the same thing. Every demo looks great. And then three months into a contract, half of them are wondering what happened.
The problem isn’t AI. The problem is how teams are deploying it.
Prabhav is the CEO of 11x. Before 11x, he was Head of Engineering at Brex, founded two companies, and invested in another 50. He’s a builder. And the conversation reflected that.
This post is a sponsored deep dive distilling that conversation, combined with patterns I’m seeing across my own experience over the last 18 months.
In this post, we cover:
Why the “AI SDR” label is the problem
The question every CRO should answer before buying anything
The bake-off trap (and how to avoid it)
How to practically deploy AI agents
Your data is terrible, and agents can actually fix it
The inbound gap nobody is talking about
A peek into the future
What the best teams look like in 12 months
Alright, let’s get into it.
Why the “AI SDR” label is the problem
The term “AI SDR” sets the wrong expectations from day one. People hear it and immediately assume they’re going to plug this in, replace a headcount, and start getting meetings tomorrow.
That’s not how any of this works. And Prabhav would know. His company helped define the space.
Think about your best sales rep. Were they your best performer in week one? Of course not. They ramped. They learned your ICP, your messaging, how to handle common objections, etc. They experimented. They failed. They got better.
AI is the same. The difference is that once it ramps, it can run 11 experiments simultaneously instead of one. It can test messaging variants, channel sequences, timing, and personalization at a scale no human team can match. The compounding effect is real. But it compounds from a starting point, and that starting point requires patience and clear thinking.
The companies getting results aren’t the ones that plugged in a tool and walked away. They’re the ones who treated AI deployment like onboarding a new team member. They were willing to experiment and didn’t expect success from every iteration. That means dedicated time, clear goals, and a willingness to iterate daily, especially in the early days of an AI deployment.
The question every CRO should answer before buying anything
Before you evaluate a single vendor, answer this question: what is the specific bottleneck in my pipeline that, if removed, would produce more revenue?
If you can’t answer that in one sentence, you’re not ready to buy.
For some teams, the answer is obvious. “We aren’t getting enough meetings booked from cold outreach.” Okay, that’s a top-of-funnel automation play. For others, it’s: “We have plenty of inbound leads, but our speed-to-lead is terrible, and half of them go cold.” That’s a completely different problem with a completely different solution.
Without that diagnosis, every AI vendor looks like a fit. Because every AI vendor says they do everything.
Prabhav told me the most sophisticated buyers he works with at Fortune 1000 companies show up to the first call with an IT admin, a data scientist, a security person, and a CIO-type. They’ve already mapped the problem. They know exactly what they want automated and what they want to keep in-house. It’s a completely different buying motion than what most mid-market companies are doing, which is more like: “AI seems important, let’s try something.”
The best framework is to work backwards from the metric you’re trying to move. Meetings booked? That’s outbound automation. Conversion rate on inbound? That’s speed-to-lead and qualification. Pipeline quality? That’s better lead scoring and research before anyone reaches out.
AI is a tool. Like any tool, it works when applied to a specific problem. It fails when applied to a vague ambition.
The bake-off trap (and how to avoid it)
I see this mistake constantly. A team decides to “test AI,” so they sign up for three vendors simultaneously, give each one a slice of their territory, and run a 90-day bake-off.
Sounds rigorous. Almost always fails.
Why? Because nobody owns it. The SDR leader is still running their team and often feels threatened, so they want the vendor to fail. The ops person is juggling five other projects. Nobody has dedicated time to configure each tool, monitor results, give feedback, and iterate on the messaging. Each vendor gets a half-hearted deployment, produces half-hearted results, there is little or no iteration, and the team concludes that “AI doesn’t work for us.”
The fix is simple but requires discipline. Pick one vendor. Assign one internal owner who has real time to dedicate (not 10% of someone’s plate). Define the KPIs before you start. Make the vendor demo actual workflows live, not slide decks (if they won’t show you the platform, that’s a red flag). And commit to 90 days of daily iteration before judging results.
The companies Prabhav described as their most successful customers all had one thing in common. They had a dedicated internal owner who treated the deployment like a real project (or a new, full-time hire), not a side experiment.
How to practically deploy AI agents
There’s a false binary in the market right now. People talk about it like you have two choices: buy an off-the-shelf AI SDR product, or build your own agents in-house.
The reality is that most companies end up doing both, and the split depends on your size and sophistication. It’s not “Build or Buy,” it’s “Build and Buy.”
For example, a company may want to keep lead scoring/prioritization and their knowledge base in-house. These are critical components of a GTM machine.
Then the question becomes: where do you want to build and deploy AI agents yourself?
11x has a unique take on this: what if you rebuilt the orchestration layer from scratch, knowing what agents can do today? They want to build the best possible versions of each building block (lead finding, enrichment, scoring, research, message writing, CRM integration, phone, email, chat, SMS) and then let you compose them into whatever workflow you need. Think of it as providing the infrastructure so you can say “anytime someone from my target list visits my website and spends 30 seconds on the pricing page, send me a Slack notification and trigger a multi-channel sequence.” You bring the ICP logic and the knowledge. Their platform provides the execution.
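To make the “you bring the logic, the platform executes” split concrete, here’s a minimal sketch of that pricing-page trigger. Every name here (the event fields, `notify_slack`, `start_sequence`) is hypothetical, not 11x’s actual API; it just shows where the in-house ICP rules end and the orchestration layer begins.

```python
# Hypothetical sketch: your ICP logic stays in-house, the platform's
# building blocks (notification, sequencing) handle execution.

TARGET_DOMAINS = {"acme.com", "globex.com"}  # your target-account list
ACTIONS = []  # stand-in for real side effects (Slack webhook, sequencer)

def notify_slack(msg):
    ACTIONS.append(("slack", msg))  # would post to a Slack channel

def start_sequence(contact_id, channels):
    ACTIONS.append(("sequence", contact_id, channels))  # would kick off outreach

def on_page_view(event):
    """Trigger fires only when all of your ICP rules match at once."""
    if (event["company_domain"] in TARGET_DOMAINS
            and event["path"] == "/pricing"
            and event["seconds_on_page"] >= 30):
        notify_slack(f"{event['company_domain']} is on the pricing page")
        start_sequence(event["contact_id"], channels=["email", "phone", "sms"])

# A target-list visitor lingers on pricing: both actions fire.
on_page_view({"company_domain": "acme.com", "path": "/pricing",
              "seconds_on_page": 45, "contact_id": "c-1"})
```

The point of the design is composability: swap the condition or the actions and you’ve built a different play without touching the execution layer.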
(I’ve written about this tension before in The $100B+ Winner in GTM Tech Will Sell Labor, Not Software. The thesis is the same: winners in GTM tech are going to sell outcomes, not seats.)
Your data is terrible, and agents can actually fix it
This is one of the most underrated dynamics I’ve seen.
I talk to revenue leaders regularly who are scared to let AI agents touch their CRM. And honestly? They should be. The data in most CRMs is terrible. Duplicate records, outdated titles, wrong email addresses, and contacts who left the company two years ago. If you unleash an agent on bad data, you get bad outreach at scale. Which is worse than no outreach at all.
Most people don’t expect this, but agents can actually fix the data they run on.
When an AI agent runs an outreach campaign, it generates data exhaust. Bounced emails tell you the contact is outdated. Response data tells you which titles and personas actually engage. CRM activity logs get updated automatically because the agent is doing the work, not a human who forgets to log it. Over time, the data gets cleaner, the targeting gets sharper, and the results compound.
It’s a positive feedback loop. The agent cleans the data it operates on, which makes the agent more effective, which generates more exhaust, which cleans more data.
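The loop is simple enough to sketch. This toy example (field names and statuses are made up for illustration, not any vendor’s schema) shows how outreach results flow back into the records the agent runs on:

```python
# Toy sketch of the "data exhaust" flywheel: each outreach result
# updates the CRM record the agent will use on the next pass.

crm = [
    {"email": "old@acme.com", "status": "active"},
    {"email": "jane@globex.com", "status": "active"},
]

def record_result(crm, email, result):
    """Bounces mark a contact stale; replies mark them engaged."""
    for contact in crm:
        if contact["email"] == email:
            if result == "bounced":
                contact["status"] = "stale"    # stop targeting; flag for re-enrichment
            elif result == "replied":
                contact["status"] = "engaged"  # persona signal for future scoring

# One campaign's exhaust: a bounce and a reply.
record_result(crm, "old@acme.com", "bounced")
record_result(crm, "jane@globex.com", "replied")
```

After one pass, the stale contact is out of rotation and the engaged one is a scoring signal, so the next campaign starts from cleaner data than the last.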
This doesn’t happen on day one. Which goes back to the point about iteration. The teams that give AI 90 days to ramp and iterate see this flywheel kick in. The teams that judge results after two weeks never get there.
The inbound gap nobody is talking about
Everyone’s focused on AI SDRs in outbound, which makes sense. Outbound is where the category started, and it’s where most of the marketing dollars go. It’s the easiest to quantify.
But for companies with existing inbound volume, the fastest path to more pipeline is converting what’s already coming through the door.
Think about what actually happens at most companies. A prospect visits your website. Maybe they engage with a chatbot. Maybe they don’t. Maybe they fill out a “Contact Sales” form. What happens next?
In most companies, an SDR checks their queue a few hours later and sends an email. By then, the prospect has already moved on. They’ve visited three other websites. They’ve forgotten why they filled out that form.
The data on speed-to-lead has been around for over a decade. Responding in the first five minutes gives you roughly an 8x higher chance of qualifying a lead compared to responding in 30 minutes. And yet the average B2B response time is still measured in hours.
What Prabhav walked me through on our call was a different approach entirely. When he thinks about inbound, he doesn’t start with “we have a chatbot.” He starts with: “If your sole job was to convert every person visiting your website into pipeline, what would you actually have to do?”
The answer is you’d have to be multi-channel by default. A prospect fills out a form. You call them in under 20 seconds (that’s what 11x’s Julian does). They don’t pick up? Call again. Still nothing? Text them. They respond over text? Qualify them right there. They don’t respond? Email them. The system adapts to whatever channel the prospect prefers.
That’s fundamentally different from “we have a chatbot on our website.” And it’s the kind of orchestration that’s nearly impossible to duct-tape together with a bunch of API calls.
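The cascade Prabhav described reduces to a small piece of logic. This sketch is illustrative only (the step list and `attempt` callback are assumptions, not how Julian is actually built):

```python
# Hypothetical sketch of the multi-channel fallback cascade:
# call, call again, text, then email, stopping at the first response.

CASCADE = ["call", "call", "sms", "email"]

def run_cascade(lead, attempt):
    """Try each channel in order; return the first one that connects."""
    for channel in CASCADE:
        if attempt(lead, channel):  # e.g. call picked up, text answered
            return channel          # qualify the lead on this channel
    return None                     # exhausted: park for a later retry

# Example: the lead ignores both calls but answers the text.
responses = {"call": False, "sms": True, "email": False}
winner = run_cascade({"name": "Jane"}, lambda lead, ch: responses[ch])
```

The hard part isn’t this loop; it’s the real-time execution behind `attempt` across telephony, SMS, and email, which is exactly the orchestration that’s painful to duct-tape together yourself.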
I think inbound orchestration is the most obvious, highest-ROI AI use case in GTM right now, and it’s barely being discussed. If you have inbound volume and you’re not responding in seconds across multiple channels, you’re leaving pipeline on the table every single day.
A peek into the future
Right now, agents are very good at execution. You have an idea, and the agent can go out and make that campaign happen.
What if agents were strategists you could partner with? Agents that actively help you define campaigns based on what’s working and what’s not, and on the history of activity against your key target accounts, ingesting all of that data to help you prioritize and run the plays that actually result in more pipeline and more revenue. Agents that stop you from launching a weak campaign.
The model quality and type of reasoning this unlock requires only became possible this year, and that future is coming very soon.
What the best teams look like in 12 months
Many of the best GTM teams a year from now won’t look like today’s teams with AI bolted on. They’ll be structured differently.
AI handles the initial research, the multi-channel outreach, the follow-up sequences, and the inbound qualification. Humans handle closing, relationships, and the complex deals that require judgment, empathy, and creative problem-solving. The stuff that makes sales a craft, not a coin-operated machine.
This matters because the companies that try to fully replace their teams with AI tend to end up worse off than where they started. The technology isn’t there yet for full autonomy on complex sales cycles. But it’s absolutely there for handling the 70% of an SDR’s day that isn’t actually spending time with prospects. The research, the data entry, the follow-up scheduling, and the manual outreach across channels. Especially in lower ACV segments (as outlined in the 2x2 matrix I shared here: “Should You Buy an AI SDR?”).
The gap between teams that deploy AI thoughtfully and teams that are still “evaluating options” is compounding every week. If you’re a CRO, GTM leader, or founder reading this, start with the diagnosis. What’s your pipeline bottleneck? Answer that, and the rest of the decisions get a lot easier.
Reach out to 11x if you want help deploying digital workers, and let them know The Signal sent you.
Thank you for your continued attention and trust. I do not take it for granted.
See you next time,
Brendan 🫡