MCP vs. Agent Skills: 5 Surprising Truths About the New AI Infrastructure

Modern AI agents are evolving from simple chatbots into autonomous coworkers. This research digest compares the Model Context Protocol (MCP) and Agent Skills — the two primary approaches to extending AI capabilities.

🎙️ The MCP and Agent Skills Showdown (45 min Deep Dive)
📺 5 Surprising Truths About the New AI Infrastructure
📑 Slide Deck: AI Agents: MCP vs. Skills

The Agent Integration Dilemma

The rapid evolution of Large Language Models (LLMs) has reached a critical bottleneck. Modern models are "nailing" question-answering benchmarks like GPQA in physics and chemistry, while performance on real-world coding benchmarks like SWE-bench has surged from 20% to 80% in record time. Yet, despite this brilliance, LLMs remain fundamentally isolated.

Without a bridge to the "real world," an LLM is a scholar trapped in a library with no phone. It can solve complex theoretical problems but cannot see your private company wiki, read your internal support tickets, or take action to fix a bug in your production codebase. To transform from simple chatbots into functional agents, AI requires two things: access to tools and relevant context. Currently, two primary solutions—the Model Context Protocol (MCP) and Agent Skills—are competing for dominance. As an implementation strategist, understanding the architectural divergence between these two is critical for navigating what industry veterans are calling the most rapidly evolving landscape in 30 years of software engineering.

---

1. The Rise of "Ghost Software": Why Developers Are Abandoning the Resource API

When Anthropic introduced the Model Context Protocol (MCP), the specification established a clear distinction between "Tools" (actions the agent can take) and "Resources" (data the agent can read). However, a surprising trend has emerged in the developer community: the Resource API is becoming "ghost software."

In practice, developers are almost exclusively using the "Tools" API to handle both actions and data queries. Instead of navigating the segmented Resource infrastructure, they implement resource queries—such as fetching a specific document—as a standard tool call.

"What people have discovered is you can get this job done with the tools API... the community has just taken the solution that it likes," experts observe. This shift highlights a fundamental truth in AI implementation: community adoption often overrides original technical specifications in favor of a unified, intuitive "method call" approach.

2. The End of Token Waste: How Progressive Disclosure Cures "Context Rot"

One of the primary technical hurdles in AI integration is "context rot." In a standard MCP setup, when an agent initializes, the server often lists every available tool and its full, verbose schema. This creates token-heavy communication that fills the LLM’s limited context window with irrelevant information, increasing costs and degrading the model's reasoning performance.

Skills solve this through a tiered architecture called "Progressive Disclosure." Instead of a massive data dump, context is loaded only when specifically required, using three levels of granularity:

* Level 1: Metadata (Front Matter): At startup, only the "front matter" of a skill file is loaded—roughly 100 tokens. This provides the agent with a high-level description of the skill’s purpose.
* Level 2: The Skill Body: If the agent determines a skill is relevant, it loads the full body (up to 5,000 tokens) containing specialized procedures.
* Level 3: Assets and Scripts: Only when a specific task is initiated does the agent access deep-level resources, such as specific Python scripts or reference files.
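The three levels above can be sketched as a lazy loader: only the front matter is parsed up front, and the body and assets are read on demand. The file layout and field names here are illustrative, not the official Skills format.

```python
# A toy sketch of three-level progressive disclosure, assuming a skill
# file whose front matter is delimited by "---" lines.
from pathlib import Path

class Skill:
    def __init__(self, path: Path):
        self.path = path
        text = path.read_text()
        # Level 1: parse only the front matter at startup (~100 tokens).
        _, front, self._body = text.split("---", 2)
        self.metadata = {
            key.strip(): value.strip()
            for key, value in (
                line.split(":", 1) for line in front.strip().splitlines()
            )
        }

    def body(self) -> str:
        # Level 2: read the full body only once the skill looks relevant.
        return self._body.strip()

    def asset(self, name: str) -> str:
        # Level 3: scripts and reference files load only when a task needs them.
        return (self.path.parent / name).read_text()
```

An agent could scan a directory of such files, keep only the cheap metadata in context, and call `body()` or `asset()` just in time.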

From a strategic perspective, this "load-on-demand" approach is economically superior. It preserves the expensive context window for the actual task, ensuring the agent remains focused and efficient.

3. The Markdown Revolution: Democratizing Agent Expertise

A major divide between these technologies is the barrier to entry. MCP is a formal communication protocol following a client-server architecture. Building a custom MCP server requires writing backend code and managing a complex "client-server handshake" within the AI application.

In contrast, Skills are fundamentally Markdown files. They allow developers—and even non-technical domain experts—to encode complex logic using plain English. A Skill can contain natural language instructions for a specialized procedure, such as "validating SaaS ideas" or "landscaping a house."
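To make this concrete, here is a hypothetical skill file for one of the procedures mentioned above. The field names, file names, and steps are illustrative; the point is that the entire "implementation" is plain Markdown.

```markdown
---
name: validate-saas-idea
description: Checklist-driven validation of a new SaaS idea
---

# Validating SaaS Ideas

When the user proposes a SaaS idea:

1. Identify the target customer and their current workaround.
2. Search the `research/` folder for related user interviews.
3. Score the idea against `scoring-rubric.md`.
4. Summarize go/no-go reasoning in plain language.
```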

"For MCP to actually build a custom MCP server is going to require you to write some code while for skills you just need to write it in plain English." This democratization means that a project manager can encode a complex MVP-scoping workflow into an agent’s capabilities without a software engineering degree. The complexity of the task remains high, but the format for encoding it is as simple as writing a README.

4. Local Mastery vs. Cloud Infrastructure: The CLI Bridge Solution

While there is overlap, MCP and Skills have different architectural orientations. Skills are currently optimized for local file systems and coding environments like Claude Code. They rely on the agent having a local code interpreter to execute scripts (bash, Python, or JS) directly on the machine.

MCP remains the "standard interface" for headless cloud applications and agentic microservices. However, a fascinating middle ground is emerging: the CLI Bridge. A local Skill can call a Command Line Interface (CLI), which then authenticates and interacts with cloud resources. This allows a local Skill to achieve "remote" effects without the infrastructure overhead of a full MCP server.

Strategically, the choice is clear: use MCP when building a centralized, scalable service that multiple agents need to access; use Skills for local, file-based logic that requires high-speed iteration and local script execution.

5. The "USBC" Standard and the Notion Paradox

Anthropic frequently refers to MCP as the "USB-C port of AI," a universal interface that prevents every AI app from needing manual integrations for Slack, Notion, or Gmail. While the protocol is maturing with updates like streamable HTTP, implementation strategists must understand the "Notion Paradox" to choose the right tool for the job.

Consider a Notion integration:

* The MCP Approach (Mechanical): Use the Notion MCP server for "mechanical" tool access. This allows the agent to perform basic tasks like listing pages or creating a new database entry.
* The Skills Approach (Workflow): Use a Skill to define *how* to use those tools for complex workflows, such as "Analyzing User Interviews" or "Scoping an MVP."
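The split above can be seen in a single hypothetical skill file: the MCP server supplies the mechanical tools, while the Skill (everything illustrative here, including names and steps) encodes the workflow that sequences them.

```markdown
---
name: analyze-user-interviews
description: Turn raw Notion interview notes into themed insights
---

# Analyzing User Interviews

1. Use the Notion tools to list pages in the `User Interviews` database.
2. For each page, extract pain points and direct quotes.
3. Cluster the pain points into themes.
4. Create a new Notion page summarizing the top three themes.
```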

Furthermore, while the recent addition of OAuth 2.1 support has simplified "secrets" management for locally running agents (via standard browser pop-ups), it remains a hurdle for headless cloud microservices. For implementation strategists, this means that while MCP is the standard for the future, the "OAuth solution" isn't a silver bullet for cloud-scale, headless agents just yet.

---

Conclusion: Navigating the 30-Year Speed Warp

The choice between MCP and Skills isn't a zero-sum game; it is a strategic decision between infrastructure and orchestration. MCP provides the standardized, robust interface for cloud-scale, mechanical tool access, while Skills offer a lightweight, natural-language way to manage complex workflows and local execution.

As natural language "Skills" begin to handle multi-step workflows with the ease of a Markdown file, we must ask: will the traditional "backend server" eventually disappear for many AI tasks, replaced entirely by local, file-based agent logic? In a landscape moving faster than any in the last 30 years, the winner won't be the technology with the most complex API, but the one that most efficiently bridges the gap between the LLM's "knowing" and the world's "doing."

📄 View Full Briefing Document (Technical Analysis)


About the author

Subin Park

Principal Designer | AI-Driven UX Strategy. Helping product teams deliver real impact through evidence-led design, design systems, and scalable AI workflows.

Ai for Pro

Curated AI workflows, prompts, and playbooks—for product designers who build smarter, faster, and with impact.
