
Every DXP Has AI Now. Most of Them Are Getting It Wrong.

Every DXP vendor is racing to ship AI features. The ones building walled gardens are solving the wrong problem. The winners will embrace Composable AI and leave the door open for whatever comes next.


The AI arms race in digital experience platforms has reached a fever pitch, and it’s creating an impossible dilemma for every platform buyer in the market. Every vendor is shipping AI features as fast as they can. Some are shipping them too fast, some are shipping the wrong things entirely, and a few are already being left behind by the very innovations they rushed to market six months ago. Welcome to the “damned if you do, damned if you don’t” era of DXP selection.

The Great AI Scramble

If you’ve attended a DXP vendor demo in the last year, you’ve heard the pitch. AI-powered content generation. Agentic workflows. Intelligent personalization. Every platform, from SitecoreAI’s Agentic Studio to Contentstack’s Agent OS, from Optimizely’s Opal to Adobe’s Sensei-powered agents, is racing to bolt AI capabilities onto its core product.

And they should be. Forrester’s latest DXP analysis argues the market has reached capability parity across most traditional features: content management, personalization, analytics, and multi-channel delivery all look roughly the same at the enterprise tier. AI is where the differentiation battle has moved, and nobody wants to be the vendor that missed the moment.

But the speed of this race is creating real problems for real teams making real purchasing decisions.

The Spectrum of AI Approaches (And Why Most Are Risky)

Looking across the 36 platforms evaluated in the DXP Scorecard, the AI landscape breaks down into a few distinct camps, each with its own set of risks.

The Deep Integrators: All In, Locked In

SitecoreAI represents the most aggressive pivot in the space. When Sitecore rebranded XM Cloud as SitecoreAI in November 2025, they didn’t just sprinkle some AI on top. They rebuilt the platform narrative around Agentic Studio, a workspace where marketers can deploy 20+ prebuilt AI agents covering everything from SEO research to bulk content generation to campaign orchestration. The March 2026 release added multi-agent chaining, shared context across workflows, and the ability to run agents directly on CMS content items.

It’s impressive engineering. It’s also a bet that Sitecore’s internal AI implementation will remain competitive against a landscape where the underlying models and patterns are evolving every few weeks.

Contentstack went even further with Agent OS, which they launched in September 2025 with the deliberately provocative tagline “Content Management is Dead.” Agent OS positions itself as an operating system for autonomous AI agents, complete with Knowledge Vaults for brand grounding, an Automation Hub that connects agents across your entire stack, and the Polaris co-pilot embedded throughout the platform.

Adobe, characteristically, took the integrated suite approach. Their AEM agents span content discovery, optimization, and production, all tightly woven into the Adobe ecosystem. Their Content AI layer creates an intelligent content pool from your existing AEM assets. But if you want those capabilities, you’re running them on Adobe’s infrastructure, within Adobe’s ecosystem, using Adobe’s models.

The AI Layer Players: Smart, But Still Opinionated

Optimizely made perhaps the most interesting strategic move by releasing Opal as a standalone AI layer. Opal isn’t just for Optimizely customers anymore. It can be layered onto Sitecore, Adobe, Contentful, Drupal, WordPress, or any custom headless stack. That’s a genuinely composable approach to AI, and it reflects an understanding that most organizations aren’t ripping out their CMS anytime soon. They want to augment, not replace.

But even Opal carries an implicit lock-in. Once your content workflows, brand context, experimentation data, and AI-assisted processes are deeply embedded in Opal’s intelligence layer, migrating away from it becomes its own kind of platform dependency.

The Cautious Middle: AI Assist, Not AI Overhaul

Platforms like Sanity, Contentful, Hygraph, and Storyblok have taken a measured approach. Sanity offers AI Assist for in-editor content generation, an MCP server for connecting AI agents directly to the Content Lake, and Agent Context for production-facing bots. Contentful ships AI Actions for localization and content variant creation alongside AI Suggestions for personalization. Hygraph launched AI Agents for automated content workflows and an MCP Server for external tool integration.

These platforms score in the 42-to-62 range on AI content generation in the DXP Scorecard, compared to the 70-72 range for SitecoreAI and Contentstack. They’re behind on native AI depth. But they may be ahead on something more important: architectural flexibility.

The Laggards: Not Even in the Conversation

And then there are platforms still scoring in the teens and twenties on AI capabilities. Sitecore XP (the legacy on-prem version), Joomla, Umbraco, and older versions of Magnolia haven’t meaningfully entered the AI race. For organizations stuck on these platforms, the AI gap isn’t a feature comparison. It’s a strategic crisis.

The Real Problem: AI Is Moving Faster Than Platforms Can

Here’s what keeps me up at night as an engineer who has watched platform cycles for two decades.

The AI landscape is shifting on a timeline measured in weeks, not years. OpenAI, Anthropic, Google, and a growing roster of open-source projects are releasing new models, new capabilities, and entirely new paradigms at a pace that makes traditional software release cycles look glacial.

Consider what’s happened just since most DXPs shipped their current AI features. The Model Context Protocol (MCP), introduced by Anthropic in late 2024, has gone from experimental concept to production standard adopted by every major AI provider. As of early 2026, there are over 1,600 MCP servers available, with support from Claude, ChatGPT, VS Code, Cursor, and dozens more clients. Google’s A2A protocol for agent-to-agent communication is gaining traction. The entire landscape of how AI systems connect to external tools has been rewritten in about 18 months.

Any DXP that built its AI features around a specific model, a specific integration pattern, or a specific set of assumptions about how AI agents work is already at risk of being outdated. The platforms that went deepest on proprietary AI implementations may find themselves maintaining expensive legacy AI infrastructure while the rest of the industry has moved on to something fundamentally different.

The Governance Gap Nobody Talks About

There’s another dimension to this problem that gets far less attention than the feature race: AI governance.

Looking at the DXP Scorecard data, the scores tell a stark story. Even platforms with strong AI content generation (scoring 60-72) often score 30 or below on AI governance and trust. Sanity scores 22 on governance. Storyblok scores 20. Contentful scores 30. Kontent.ai scores 30.

The platforms that score best on governance are, predictably, the enterprise incumbents. Adobe leads at 58, followed by Salesforce at 55 and SitecoreAI at 52. But even those numbers reflect a category still figuring out what AI governance actually means in a content operations context.

Where are the audit trails for AI-generated content? Where is the hallucination detection? Where are the brand voice enforcement engines that actually work at scale? Where is the IP indemnification? Most platforms are racing to ship AI content generation while governance infrastructure lags behind by a full product cycle.

This matters because enterprise buyers don’t just need AI to be powerful. They need it to be auditable, explainable, and safe. The gap between AI capability and AI governance is the sleeper risk in every DXP procurement happening right now.

The Composable AI Thesis: Why Openness Wins

So where does this leave organizations trying to make a platform decision?

I believe the platforms that will ultimately win the AI era aren’t the ones with the most impressive native AI features today. They’re the ones that embrace what I’d call “Composable AI,” an architectural approach that treats AI integration the same way composable DXPs treat every other capability: as a modular, interchangeable, best-of-breed decision.

Composable AI means three things in practice.

First, bring your own model. The platform should not lock you into a specific LLM provider. Today you might want Claude for content generation, GPT-4 for classification, and an open-source model for internal summarization. Next quarter, you might want something entirely different. The platform should be a conduit, not a gatekeeper.
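The "conduit, not gatekeeper" idea can be sketched as a thin routing layer that treats every model provider as interchangeable. This is a minimal illustration, not any platform's real API: `ModelRouter` and the stub backends are hypothetical names, and in practice each backend would wrap a real provider SDK call.

```python
from typing import Callable, Dict

# A backend is just a callable from prompt to text; the provider behind it
# (hosted LLM, open-source model, anything) is invisible to the caller.
Completion = Callable[[str], str]

class ModelRouter:
    """Routes tasks to interchangeable model backends."""

    def __init__(self) -> None:
        self._backends: Dict[str, Completion] = {}

    def register(self, task: str, backend: Completion) -> None:
        self._backends[task] = backend

    def complete(self, task: str, prompt: str) -> str:
        if task not in self._backends:
            raise KeyError(f"no backend registered for task '{task}'")
        return self._backends[task](prompt)

# Stub backends standing in for real provider SDK calls; swapping providers
# next quarter is a one-line registration change, not a platform migration.
router = ModelRouter()
router.register("generation", lambda p: f"[model-a-draft] {p}")
router.register("classification", lambda p: f"[model-b-label] {p}")

print(router.complete("generation", "Write a product blurb"))
```

The point of the sketch is where the seam sits: task routing lives in your code, so no vendor's model choice is load-bearing.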

Second, embrace open integration protocols. MCP has emerged as the standard for connecting AI agents to external systems. Platforms that expose their content, schema, and operations through MCP (as Sanity, Hygraph, and others already do) position themselves as nodes in a broader AI ecosystem rather than isolated silos. SitecoreAI’s Agentic Studio supports MCP connections. Optimizely is building MCP server support into their CMS roadmap. This is the direction the entire industry needs to move.
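To make the MCP point concrete, here is a toy dispatcher showing the JSON-RPC 2.0 message shapes MCP uses for tool listing and tool calls. This is a deliberately simplified sketch: real MCP servers (typically built with the official SDKs) also negotiate capabilities, manage transports, and expose resources and prompts, and `search_content` plus the in-memory CMS are hypothetical stand-ins.

```python
import json

# One hypothetical CMS-backed tool, described the way MCP advertises tools.
TOOLS = [
    {
        "name": "search_content",
        "description": "Search published entries by keyword",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }
]

# Stand-in for the platform's content store.
FAKE_CMS = {"welcome": "Welcome to our store", "faq": "Shipping takes 3 days"}

def handle(request: dict) -> dict:
    """Dispatch a single MCP-style JSON-RPC request."""
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call":
        query = request["params"]["arguments"]["query"]
        hits = [body for key, body in FAKE_CMS.items() if query in key]
        result = {"content": [{"type": "text", "text": json.dumps(hits)}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

response = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}})
print(json.dumps(response, indent=2))
```

A platform that exposes its content this way becomes one node among many that any MCP client can call, which is exactly the "ecosystem, not silo" posture the article argues for.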

Third, separate AI orchestration from content infrastructure. Your CMS should be excellent at structured content management, API delivery, and editorial workflow. AI should be a capability that plugs into that infrastructure through well-defined interfaces, not a proprietary feature baked into the content layer itself. When the AI landscape shifts (and it will shift), you should be able to swap your AI strategy without migrating your content platform.
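The separation principle can be shown as a seam in code: the content side exposes a narrow interface, and the AI step is just a callable plugged in at that boundary. All names here are illustrative, assuming a simple key-value content store rather than any real CMS client.

```python
from typing import Callable, Dict, List

# The content platform's side of the seam: a plain store of entries.
ContentStore = Dict[str, str]

# The AI side of the seam: any text -> text callable.
AIStep = Callable[[str], str]

def enrich_entries(store: ContentStore, entry_ids: List[str], step: AIStep) -> None:
    """Run an interchangeable AI step over selected entries in place."""
    for entry_id in entry_ids:
        store[entry_id] = step(store[entry_id])

# Today's step could call a hosted LLM; tomorrow's could be a different vendor
# or a local model. The content infrastructure never changes either way.
summarize = lambda text: text[:20] + "..."  # stand-in for a real model call

store = {"post-1": "A long article body about composable architecture."}
enrich_entries(store, ["post-1"], summarize)
print(store["post-1"])
```

Because the CMS only ever sees "text in, text out," swapping the AI strategy is a dependency-injection change rather than a content migration.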

The Practical Framework

If you’re evaluating DXPs right now, here’s how I’d frame the AI question.

Don’t pick a platform because of its AI features. Pick a platform despite its AI features. Choose the foundation that gives you the best content modeling, the best API design, the best developer experience, and the best editorial workflow for your team. Then evaluate how open that platform is to external AI integration.

Ask vendors these questions: Can I connect my own LLM to your platform through a standard protocol? Can I build a RAG pipeline against your content API without going through your proprietary AI layer? Can I replace your built-in AI features with a third-party alternative if something better comes along? If the answer to any of these is “no,” you’re not buying a platform. You’re buying a bet on that vendor’s AI roadmap, and in 2026, that’s a bet with long odds.
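The RAG question above is worth making concrete, because it is easy to test against any candidate platform. Here is a minimal sketch, assuming the platform lets you pull raw entries from its content API: `fetch_entries` is a stand-in for that API call, the word-overlap scoring is a stand-in for real vector embeddings, and `echo_llm` is a stub so the sketch runs anywhere.

```python
def fetch_entries() -> list:
    """Stand-in for a call to the platform's content delivery API."""
    return [
        {"id": "a", "body": "Our returns policy allows refunds within 30 days."},
        {"id": "b", "body": "The spring catalog launches in April."},
    ]

def retrieve(question: str, entries: list, k: int = 1) -> list:
    """Rank entries by naive word overlap with the question."""
    q_words = set(question.lower().replace("?", "").split())
    scored = sorted(
        entries,
        key=lambda e: len(q_words & set(e["body"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question: str, llm) -> str:
    """Assemble retrieved context into a prompt and hand it to any LLM."""
    context = "\n".join(e["body"] for e in retrieve(question, fetch_entries()))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)

# Stub LLM (echoes the context line) so the pipeline runs without any vendor;
# swapping in a real provider client changes only this one callable.
echo_llm = lambda prompt: prompt.splitlines()[1]
print(answer("What is the returns policy?", echo_llm))
```

If a platform's content API supports this pattern directly, the answer to the RAG question is "yes"; if every retrieval has to route through the vendor's own AI layer, that is the lock-in the article warns about.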

The platforms that score highest on API design quality, extensibility, and integration ecosystem in the DXP Scorecard are, not coincidentally, the platforms best positioned for a Composable AI future. Sanity scores 88 on API design. Contentful scores 82. Contentstack scores 78. These platforms may not have the flashiest AI demos today, but they have the architectural bones to support whatever AI looks like tomorrow.

The Bottom Line

The AI race in the DXP market is real, and the pressure to show AI capability is understandable. But the vendors who are rushing to build the most impressive walled gardens of AI functionality are solving the wrong problem. They’re optimizing for today’s demo while creating tomorrow’s lock-in.

The AI landscape is moving too fast for any single vendor to pick a winning horse. The winning strategy is to not pick a horse at all. Build on platforms that are open enough to let you ride whichever horse is fastest at any given moment. Composable AI isn’t just a nice-to-have architectural principle. It’s the only rational response to a technology landscape that reinvents itself every few months.

The DXPs that figure this out will be the ones still standing when the dust settles. The ones that don’t will join the long list of platforms that bet everything on the technology of the moment and got left behind when the moment passed.

Scores referenced in this article are sourced from dxpscorecard.com, a continuously updated platform evaluation tool covering 160+ criteria across 36 platforms. Scores reflect platform capabilities as of early April 2026 and are subject to change as platforms evolve.

Erika Halberg

Director of Technology and Platform Lead

HT Blue