Every few weeks, a client forwards us a pitch deck promising to migrate their CMS in half the time, at a third of the cost, using "specialized AI agents." We've started keeping a folder. The names change but the promises don't. Express Lane. HOV Lane. Warp Speed. Rapid Path. Super-duper-fast! AI-Accelerated Migration.
The numbers are always specific enough to sound credible and round enough to be invented.
Here is what we've learned after watching several of these projects fail and inheriting the cleanup work: the vendors selling agentic CMS migrations are selling a story their own engineers don't believe. The math doesn't work. The technology isn't there. And the customers who buy it usually end up paying twice.
The Pitch That Should Make You Suspicious
A common scenario looks like this. A mid-market enterprise gets quoted $600,000 to migrate from Sitecore XP to SitecoreAI by an established agency. Real scope, real timeline, real engineering hours. Then a newer agency walks in with a deck full of agent diagrams and quotes the same project at $180,000 in half the time. The promise rests on phrases that have started showing up in nearly every pitch we see:
- "AI-specialized migration agents"
- "Autonomous code conversion"
- "Agentic refactoring pipelines"
- "Human in the loop oversight"
That last one is the tell. "Human-in-the-loop" is the phrase an AI writes for a marketing team that needs to sound responsible without committing to anything specific. It's the escape hatch.
When a vendor leads with it, they're usually describing a workflow where one engineer, often junior, is supposed to review thousands of agent-produced changes they didn't write and don't fully understand.
In practice, that review doesn't happen. It can't happen. The volume is wrong.
What the Research Actually Shows
The productivity numbers vendors quote in these pitches don't match any credible research. They're closer to the numbers vendors made up in 2024, before anyone was measuring carefully.
The 2025 DORA report, based on Google's developer research, found AI adoption among software development professionals reached 90%, and most developers report productivity benefits. That sounds promising until you look at what those benefits actually measure. Research from DX, presented at the Pragmatic Summit and based on data from 121,000 developers across more than 450 companies, found that even with 92.6% of developers using AI coding assistants at least monthly and roughly a quarter of production code being AI-written, organizational productivity gains have not moved past 10%.
That's the real number: 10%.
It's the responsible number, and it's what the companies actually building these tools are seeing internally.
The picture gets worse when you look at experienced developers working on systems they know well. METR's randomized controlled trial of senior open-source developers found that when developers used AI tools, they took 19% longer than without them, even though they believed they were 20% faster. The 39-point gap between perceived and actual productivity is the most uncomfortable finding in the literature, and it has not been refuted.
A separate analysis of more than 10,000 developers across 1,255 teams found that high AI adoption produced more pull requests and larger pull requests, but review times rose 91%, bugs increased 9%, and overall delivery throughput stayed flat. The coding step got faster. The review step, which is where migration risk actually lives, got worse.
So when an agency promises 60% faster migrations or 75% cost reduction through agentic AI, ask them where that number came from. They cannot point to a study. They cannot point to a benchmark. They can point to a slide.
What's Actually Happening Inside These Projects
We've talked to engineers at three different agencies running these so-called agentic migration practices. The inside view is consistent and unflattering.
What's marketed as a "team of specialized AI agents" is usually one or two developers with Cursor Pro accounts. What's marketed as "autonomous schema migration" is a script that generates conversion code which a human then has to fix. What's marketed as "intelligent content preservation" is a series of prompts that produce plausible-looking output that nobody validates against the source until QA, by which point the timeline has already slipped. What's sold as "Agentic Content Mapping" is an LLM prompted to map Header to Header, Body to Body, and Link to Link, and when it can't find a match, to make a good guess.
How's that for a case study slide. "80% more Good Guesses!"
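To make that concrete, here's roughly what the mapping step amounts to. This is a hypothetical sketch, not any vendor's actual code; the field names and the fuzzy-match fallback are our assumptions, but the shape is the point: exact name match where possible, closest-sounding name where not.

```python
from difflib import get_close_matches

# Hypothetical sketch of what "Agentic Content Mapping" usually boils down to:
# match source fields to target fields by name, and guess when there's no match.
def map_fields(source_fields, target_fields):
    mapping = {}
    for field in source_fields:
        if field in target_fields:
            mapping[field] = field  # Header -> Header, Body -> Body, Link -> Link
        else:
            # No exact match: take the closest-sounding target field. Nothing here
            # understands what the field means -- this is the "good guess".
            guesses = get_close_matches(field, target_fields, n=1, cutoff=0.4)
            mapping[field] = guesses[0] if guesses else None
    return mapping

print(map_fields(["Header", "Body", "HeroCtaLink"],
                 ["Header", "Body", "Link", "CtaLabel"]))
# {'Header': 'Header', 'Body': 'Body', 'HeroCtaLink': 'Link'} -- the last one is a guess
```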
One engineer we spoke with described their company's flagship "agentic migration platform with 100 Specialized Agents" as "an intern with a Cursor Pro account and a doc full of 100+ prompts they found in a public GitHub repo". That's the actual delivery model behind the marketing.
The patterns we see when we inherit these projects are remarkably consistent:
Lost business logic. Custom personalization rules, workflow conditions, and component-level logic get flattened into generic templates because the agent doesn't understand what it was looking at. The content renders. The behavior is gone.
Sloppy, unmaintainable code. Agent-generated code optimizes for passing immediate tests, not for the patterns your team uses or the conventions your platform expects. Six months in, no one can extend it without rewriting it. Veracode tested AI-generated code across 80 coding tasks and found 45% of it introduced OWASP Top 10 vulnerabilities. That's not a maintenance problem. That's a security incident waiting for a calendar invite.
Content drift. Long-form content, especially anything with embedded references or structured fields, gets subtly rewritten. Not enough to fail validation. Enough to fail your legal review six weeks after launch. A plain source-to-target diff, like the sketch after this list, would catch it; in the projects we inherit, nobody has run one.
Untested edge cases. Migration test environments rarely reflect production data patterns, configurations, unusual field combinations, recovery scenarios, or timing mismatches, so those paths stay untested until they fail after deployment. Agentic pipelines accelerate the volume of code without accelerating the test coverage required to validate it.
Rollback theater. Every pitch deck shows a rollback strategy. Almost none of them actually work in production, because the rollback was generated by the same pipeline that generated the migration.
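Most of these failures are cheap to detect if anyone actually looks. Content drift in particular doesn't need an agent to catch it: a field-by-field diff of source against target flags every silent rewrite. The sketch below is hypothetical and assumes you've already pulled `source` and `migrated` as dicts of item IDs to field text from each CMS's API, which is the platform-specific part.

```python
from difflib import SequenceMatcher

def find_drift(source, migrated):
    """Flag items whose migrated text is not identical to the source.

    `source` and `migrated` map item IDs to field text; pulling them out of
    each CMS is the platform-specific part and is assumed here.
    """
    drifted = []
    for item_id, original in source.items():
        copy = migrated.get(item_id)
        if copy is None:
            drifted.append((item_id, "missing", 0.0))
        elif copy != original:
            # Record how similar the rewrite is -- "95% similar" passes eyeball
            # QA and fails legal review six weeks later.
            similarity = SequenceMatcher(None, original, copy).ratio()
            drifted.append((item_id, "rewritten", round(similarity, 3)))
    return drifted

print(find_drift(
    {"page-114": "Offer valid through 31 March 2026 in participating regions."},
    {"page-114": "Offer valid through 31 March 2026 in all regions."},
))
# [('page-114', 'rewritten', ...)] -- a change no validation rule will flag
```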
Which Platforms Attract the Most Snake Oil
Some platforms get hit harder than others. The pattern correlates with two things: codebase complexity and customer panic.
Sitecore XP migrations to SitecoreAI are currently the biggest target. The migration is genuinely difficult, customers are facing real deadlines, and the projects are expensive, which means a vendor offering 70% off looks like a lifeline. Almost every "agentic Sitecore migration" pitch we've reviewed underestimates the scope of custom pipelines and event processors, custom rendering logic, rendering parameters, and Helix module dependencies by a factor of three or more.
Adobe Experience Manager migrations, particularly off older AEM 6.x environments to AEM as a Cloud Service, attract similar pitches. The component model, the OSGi configurations, and the workflow customizations don't translate cleanly to anything an agent can reason about without deep human context.
Drupal 7 to Drupal 10 (or off Drupal entirely) has become a magnet for agentic migration pitches because the volume of legacy Drupal sites is enormous and the upgrade path is notoriously painful. Vendors promise to "AI-convert" custom modules, hooks, and theme logic. They cannot.
Optimizely CMS to SaaS Core migrations are starting to see the same pattern. The pitch usually elides the difference between content migration, which agents can sometimes assist with, and code migration, which they cannot reliably do.
The underlying issue isn't the platform. It's that any migration involving custom logic, custom content models, custom integrations, or any meaningful production history will defeat current agent capabilities. Enterprise migrations involve undocumented rate limits, brittle middleware, 200-field dropdowns, and duplicate logic. Giving an agent access to that environment is like giving a new hire server room keys without documentation. Something will break.
The Guided Vendor Tour
Here is our standard recommendation when a client is evaluating one of these vendors: ask to talk to three real customers. Not curated references. Not video testimonials. Three live customers, on a call, with the vendor not present.
What you will discover, almost without fail, is that the customers who appear in the marketing are the customers who got the most senior engineering attention. They were the proof of concept, and even for them the promised platform never matched what was actually delivered. Subsequent customers got the real delivery model: a smaller team, less senior, running the same prompts.
Some vendors will give you a guided tour of their platform that feels weirdly choreographed. The dashboard always shows the same project. The "live" agent demos always run the same example. You're being shown a staged demo. The version of the product that exists for your project does not exist yet, or exists as a folder of prompts on someone's laptop.
If a vendor refuses to put you on a call with three current customers, that's your answer.
If they only offer references from projects under NDA, that's your answer.
If their case studies don't include actual project metrics like team size, timeline, defect counts, or post-launch maintenance hours, that's your answer.
If they flat-out say "we haven't used this in production for a customer yet", that's your answer. One vendor actually said that to us, then reframed it as "You're client-0", as if that were a selling point.
What's Actually Real, and Where It Helps
None of this means AI is useless in migrations. It means the marketing is detached from reality.
Where AI actually helps in migration work, based on what we've seen and what the broader research supports:
Discovery and inventory. Auditing thousands of templates, components, content types, and workflow rules to produce a structured inventory. This is real, useful work. It saves time. It does not replace architecture decisions.
Code audits. Writing exploratory audit scripts that scan a legacy codebase for known pitfalls, dependencies, and patterns. We use this all the time in project estimation. It doesn't make the migration itself faster so much as it tells us where the skeletons are hiding, which makes the upfront estimate honest about hidden costs. A sketch of what we mean follows this list.
Documentation generation. Producing first-draft documentation of legacy systems that engineers can correct. Faster than starting from scratch.
Boilerplate conversion. Generating skeleton code for component shells, content type definitions, and routine API integrations. The engineer still has to verify, test, and integrate the output.
Test scaffold generation. Producing a first pass at test coverage that engineers refine.
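To illustrate the code-audit point above: the scripts are not sophisticated, and they don't need to be, because the value is in the counts rather than the cleverness. Here's a hypothetical sketch of the kind of thing we mean for a Sitecore estimate; the file extensions and patterns are assumptions about a typical Helix-style solution, and you'd tune them per codebase.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical audit sketch: count the customizations that make a Sitecore
# migration expensive. Extensions and patterns assume a typical Helix-style
# solution; adjust them for the codebase you're actually estimating.
PATTERNS = {
    "custom pipeline processors": re.compile(r'<processor\s+type="(?!Sitecore\.)'),
    "custom event handlers":      re.compile(r'<handler\s+type="(?!Sitecore\.)'),
    "hardcoded item IDs":         re.compile(r"\{[0-9A-F]{8}(?:-[0-9A-F]{4}){3}-[0-9A-F]{12}\}", re.I),
    "direct SQL access":          re.compile(r"\b(?:SqlConnection|SqlCommand)\b"),
}
EXTENSIONS = {".config", ".cs", ".cshtml"}

def audit(repo_root):
    hits = Counter()
    for path in Path(repo_root).rglob("*"):
        if path.suffix.lower() not in EXTENSIONS or not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            hits[label] += len(pattern.findall(text))
    return hits

if __name__ == "__main__":
    for label, count in audit(".").most_common():
        print(f"{count:5d}  {label}")
```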
These are productivity improvements measured in single-digit and low-double-digit percentages, exactly where the credible research has landed. They do not turn a $600,000 migration into a $180,000 migration. They turn it into a $540,000 migration that finishes a few weeks earlier with better documentation. That's a real benefit. It is not the benefit being sold.
How to Evaluate a Migration Vendor Right Now
A short list of questions that will end most snake-oil pitches inside fifteen minutes:
- What percentage of the migration code is generated by AI versus written by engineers? If they say more than 30%, ask to see the most recent project's commit history.
- Who reviews the AI-generated code, and how long do they spend per pull request? Compare the answer to the volume of PRs. (The sketch after this list is one way to check for yourself.)
- Can we speak to three current customers without you on the call?
- What's your defect rate per thousand lines of migrated code, measured at 30, 60, and 90 days post-launch?
- What's the actual size of the team assigned to our project? Names, roles, seniority levels.
- How many of your "AI agents" are distinct systems versus the same LLM behind different prompts?
- Show us the rollback procedure for a single component. Not slides. The actual procedure.
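On the review-time question, you don't have to take the vendor's word for it. If they'll grant read access to a recent project's repository, a handful of API calls will show you the gap between PR volume and review effort. The sketch below is hypothetical: it assumes a GitHub-hosted repo (the slug is made up), a personal access token in `GITHUB_TOKEN`, and the `requests` library; the same idea works against GitLab or Azure DevOps.

```python
import os
import statistics
from datetime import datetime

import requests  # third-party; pip install requests

API = "https://api.github.com"
REPO = "vendor-org/migration-project"  # hypothetical repo slug
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

def review_lag_hours(limit=100):
    """Hours from PR creation to first submitted review, for recent merged PRs."""
    prs = requests.get(f"{API}/repos/{REPO}/pulls",
                       params={"state": "closed", "per_page": limit},
                       headers=HEADERS, timeout=30).json()
    lags = []
    for pr in prs:
        if not pr.get("merged_at"):
            continue  # skip PRs that were closed without merging
        reviews = requests.get(f"{API}/repos/{REPO}/pulls/{pr['number']}/reviews",
                               headers=HEADERS, timeout=30).json()
        submitted = [r["submitted_at"] for r in reviews if r.get("submitted_at")]
        if submitted:
            first = min(parse(ts) for ts in submitted)
            lags.append((first - parse(pr["created_at"])).total_seconds() / 3600)
    return lags

if __name__ == "__main__":
    lags = review_lag_hours()
    if lags:
        print(f"{len(lags)} merged PRs with reviews, "
              f"median review lag {statistics.median(lags):.1f}h")
    # Thousands of agent-generated changes and a review lag measured in minutes
    # means the human in the loop is rubber-stamping.
```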
A vendor who can answer all seven without flinching is probably worth talking to. A vendor who can't answer any of them is selling you a deck.
Practical Takeaways
The honest version of AI in CMS migrations looks like this: a 5 to 10% productivity gain on the parts of the work that are well-suited to it, applied carefully by experienced engineers who treat AI output as a draft and not an answer. That's the responsible number. It's the number Google and Microsoft are seeing internally on much larger investments than any agency is making. Anyone promising more is either inexperienced, dishonest, or both.
If your shortlist includes a vendor pitching agentic migrations, autonomous refactoring, or warp-speed delivery, do the due diligence. Talk to real customers. Ask for real metrics. Read the actual contract. Look at who will actually be doing the work.
And if you've already bought the pitch and the project is in trouble, you're not alone. We've taken on three rescue engagements in the last year that started as "agentic migrations" and ended with full rebuilds. The good news is that the rebuilds finish. The better news is that nobody at the second agency uses the word "agentic."