
Five Problems Heads of AI Face Right Now

Kin Lane · April 26, 2026

The Head of AI is becoming a real role. Not a strategy deck, not a re-titled VP of Data Science, but a distinct executive function: standardize and govern how MCP servers get produced and consumed, shepherd AI integration from idea through compliance to production, and pull cross-team and partner adoption into something coherent enough to defend in front of a board.

We track this role through our persona work at Naftiko — Laura, Head of AI — and the problems landing on her desk are specific, structural, and largely unsolved. Here are five of them.

1. MCP Sprawl Is Already Happening, and Nobody Owns It

Multiple teams inside the same enterprise are independently building MCP servers for the same third-party services. The CRM team wrote one for Salesforce. The customer success team wrote a different one. The sales analytics group is using a third one that a vendor handed them. None of these teams knows about the others, none of them coordinated on auth, scopes, or rate limits, and none of the servers is discoverable from one place.

By the time the Head of AI hears about it, the org has six overlapping MCP integrations to the same upstream — each one a separate maintenance liability, each one a different attack surface. The fix isn’t to ban MCP. It’s to give the org a single registry where MCP servers are declared, governed, and reused across teams. Without that, sprawl compounds week over week.
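To make the idea concrete, here is a minimal sketch of the kind of record such a registry could hold. The field names and the registry shape are assumptions for illustration, not an existing standard or product.

```typescript
// A minimal sketch of a registry entry for a governed MCP server, assuming a
// hypothetical in-house registry. Field names are illustrative, not a standard.
interface McpServerRecord {
  name: string;            // unique name within the registry
  owningTeam: string;      // who maintains it and answers pages
  upstreamService: string; // the third-party system it fronts
  authMethod: "oauth2" | "api-key" | "mtls";
  scopes: string[];        // what the server is allowed to touch upstream
  rateLimitPerMinute: number;
  status: "proposed" | "approved" | "deprecated";
}

// Example: the Salesforce server the CRM team already built, declared once so
// customer success and sales analytics can reuse it instead of writing their own.
const salesforceCrm: McpServerRecord = {
  name: "salesforce-crm",
  owningTeam: "crm-platform",
  upstreamService: "salesforce",
  authMethod: "oauth2",
  scopes: ["accounts:read", "opportunities:read"],
  rateLimitPerMinute: 600,
  status: "approved",
};

// At minimum, the registry is a lookup keyed by upstream service, so a new team
// can discover an existing server before building a duplicate.
const registry = new Map<string, McpServerRecord[]>([["salesforce", [salesforceCrm]]]);
console.log(registry.get("salesforce")?.map((r) => r.name)); // ["salesforce-crm"]
```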

2. One-to-One API Wrappers Are Not Capabilities

Most “MCP servers” in the wild today are thin wrappers around a single REST API — every operation exposed as a tool, every request and response shape mirrored verbatim. That works in a demo. It collapses in production because agents don’t think in CRUD. They think in business outcomes.

The Head of AI needs MCP servers that combine multiple APIs into business-oriented capabilities — “issue a refund,” “onboard a vendor,” “rebalance a portfolio” — not “PUT /accounts/{id}.” Defining what a capability looks like, and how it gets governed once it spans three or four upstream systems, is one of the hardest unsolved standards problems in the space. There is no widely accepted shape for this yet, and every team is reinventing it.
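As an illustration of the difference, here is a rough sketch of a capability-shaped operation that composes two hypothetical upstream APIs behind one business outcome. The endpoints, payload shapes, and names are invented for the example, not a proposed standard.

```typescript
// Sketch of an "issue a refund" capability that spans two upstream systems:
// a billing API that moves the money and a CRM API that records the outcome.
// Both base URLs and payload shapes are hypothetical.
interface RefundRequest {
  orderId: string;
  amountCents: number;
  reason: string;
}

interface RefundResult {
  refundId: string;
  crmCaseId: string;
}

async function issueRefund(req: RefundRequest): Promise<RefundResult> {
  // Step 1: create the refund in the billing system.
  const billingRes = await fetch("https://billing.internal.example/refunds", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ order_id: req.orderId, amount_cents: req.amountCents }),
  });
  if (!billingRes.ok) throw new Error(`billing refund failed: ${billingRes.status}`);
  const { refund_id } = (await billingRes.json()) as { refund_id: string };

  // Step 2: log the refund against the customer record in the CRM, so the
  // caller gets one business outcome rather than two raw API responses.
  const crmRes = await fetch("https://crm.internal.example/cases", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ order_id: req.orderId, type: "refund", reason: req.reason }),
  });
  if (!crmRes.ok) throw new Error(`crm case creation failed: ${crmRes.status}`);
  const { case_id } = (await crmRes.json()) as { case_id: string };

  return { refundId: refund_id, crmCaseId: case_id };
}
```

The tool an agent sees is issueRefund, not the two endpoints underneath it, and governing that composite once it spans several upstream systems is exactly where the standards gap sits.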

3. Third-Party MCP Servers Are Adopted Without Governance

The vendor delivers a copilot. The copilot wants MCP servers for billing, for support, for inventory. The team turns them on without anyone asking who handles authentication, what data leaves the network, what happens at renewal, or what the rollback plan is if the vendor changes the contract.

The Head of AI wants to encourage third-party MCP adoption — that’s where the productivity actually lives — but they need a governed front door for it. Discovery, onboarding, credential management, scope enforcement, and an audit trail that satisfies the compliance team. Today most of that is happening in Slack threads and one-off contracts, which is not a scalable answer for an org with thirty teams and a growing vendor list.
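One way to picture that front door is as a single onboarding record that has to exist before a third-party server is switched on. The fields below are a hypothetical sketch of what compliance would want answered up front, not a description of any particular product.

```typescript
// Sketch of a third-party MCP onboarding record: the questions that today live
// in Slack threads, captured as structured fields. All names are illustrative.
interface ThirdPartyMcpOnboarding {
  serverName: string;
  vendor: string;
  credentialOwner: string;     // who holds and rotates the credentials
  allowedScopes: string[];     // what the org approved, not what the vendor asked for
  dataEgress: "none" | "metadata-only" | "customer-data";
  contractRenewal: string;     // ISO date, reviewed before auto-renewal
  rollbackPlan: string;        // what happens if the vendor changes the contract
  auditLogDestination: string; // where every call through the front door is recorded
  approvedBy: string[];        // security / compliance sign-offs
}

const billingCopilotServer: ThirdPartyMcpOnboarding = {
  serverName: "vendor-billing-mcp",
  vendor: "acme-copilot",
  credentialOwner: "platform-security",
  allowedScopes: ["invoices:read"],
  dataEgress: "metadata-only",
  contractRenewal: "2026-12-01",
  rollbackPlan: "disable at gateway; fall back to manual billing lookups",
  auditLogDestination: "siem://mcp-gateway-audit",
  approvedBy: ["security", "compliance"],
};

// The front door refuses traffic for any server without a complete, approved record.
const readyToEnable = billingCopilotServer.approvedBy.length >= 2;
console.log(readyToEnable); // true
```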

4. There Is No AI FinOps

Spend is showing up everywhere — model API bills, MCP infrastructure, third-party copilots, the data platform tax that AI workloads keep amplifying — and nobody can produce a single view of the total cost of ownership. The Head of AI gets asked “what is AI costing us?” and has to assemble the answer from five different finance reports.

Worse, none of those reports tell you which capabilities are actually delivering value for the spend. Cloud FinOps took the better part of a decade to mature, and the AI version is starting from a worse position because the cost surface is wider and the metric layer is missing. Building a FinOps practice that connects model usage, MCP traffic, vendor contracts, and infrastructure into one defensible number is foundational work, and almost nobody has done it well yet.
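The rollup itself is not the hard part once every cost source is tagged consistently; the tagging is. A rough sketch of the mechanics, with categories and figures invented for illustration:

```typescript
// Sketch of an AI cost rollup: every line item carries a category and, ideally,
// the capability it supports, so spend can be grouped into one defensible view.
// All figures and names are made up for the example.
type CostCategory = "model-api" | "mcp-infra" | "vendor-copilot" | "data-platform";

interface CostLineItem {
  category: CostCategory;
  capability: string; // which business capability the spend supports
  monthlyUsd: number;
}

const lineItems: CostLineItem[] = [
  { category: "model-api", capability: "issue-refund", monthlyUsd: 4200 },
  { category: "mcp-infra", capability: "issue-refund", monthlyUsd: 900 },
  { category: "vendor-copilot", capability: "support-triage", monthlyUsd: 6500 },
  { category: "data-platform", capability: "support-triage", monthlyUsd: 3100 },
];

// The single total leadership asks for, plus the per-capability breakdown that
// makes it possible to ask which spend is actually earning its keep.
const total = lineItems.reduce((sum, li) => sum + li.monthlyUsd, 0);
const byCapability = lineItems.reduce<Record<string, number>>((acc, li) => {
  acc[li.capability] = (acc[li.capability] ?? 0) + li.monthlyUsd;
  return acc;
}, {});

console.log(total);        // 14700
console.log(byCapability); // { "issue-refund": 5100, "support-triage": 9600 }
```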

5. Reuse Metrics Don’t Connect to Business Outcomes

Leadership wants to know whether the AI investment is producing reuse — whether one capability is being adopted across multiple teams, whether one MCP server is replacing fifteen one-off integrations. The Head of AI dutifully reports counts: number of capabilities published, number of teams consuming them, number of MCP calls per month.

Counts are activity, not outcome. The harder question — “is reuse actually helping the business close more deals, ship faster, retain more customers?” — needs a metric that connects capability consumption to business KPIs. Most orgs don’t have that connection wired, so reuse stays a vanity metric and the Head of AI struggles to defend the investment when the next budget cycle arrives.
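A hedged sketch of what wiring that connection could look like: consumption events and KPI observations joined on the capability, so reuse is reported next to the outcome it is supposed to move. The data shapes and numbers below are hypothetical.

```typescript
// Sketch of connecting capability consumption to a business KPI.
// Shapes and figures are invented; the point is the join, not the schema.
interface ConsumptionEvent {
  capability: string;
  team: string; // consuming team
}

interface KpiObservation {
  capability: string;
  kpi: string;             // e.g. "time-to-resolution" or "deals-closed"
  deltaVsBaseline: number; // change since the capability was adopted
}

const consumption: ConsumptionEvent[] = [
  { capability: "issue-refund", team: "support" },
  { capability: "issue-refund", team: "sales-ops" },
  { capability: "issue-refund", team: "finance" },
];

const kpis: KpiObservation[] = [
  { capability: "issue-refund", kpi: "time-to-resolution", deltaVsBaseline: -0.18 },
];

// Reuse (distinct consuming teams) reported alongside the KPI movement,
// instead of as a standalone activity count.
const consumingTeams = new Set(consumption.map((e) => e.team)).size;
const outcome = kpis.find((k) => k.capability === "issue-refund");
console.log({
  capability: "issue-refund",
  consumingTeams,                      // 3
  kpi: outcome?.kpi,                   // "time-to-resolution"
  deltaVsBaseline: outcome?.deltaVsBaseline, // -0.18
});
```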

Where This Is Heading

These five problems share a common thread: the Head of AI is being asked to govern a layer that didn’t exist three years ago, with tooling and conventions that are still being invented. The role is real. The discipline around it isn’t yet.

At Naftiko, we are building the capability layer that sits between enterprise APIs and AI agents — a single specification for what a governed MCP server is, how it gets composed from upstream APIs, how it is discovered and reused across teams, and how its consumption ties back to a metric leadership can defend. If you are working on any of these problems, we would like to hear from you.