MCP vs REST API vs Markdown — How Agents Should Actually Consume Your Data

Three ways to hand data to an LLM agent: the Model Context Protocol, a boring REST API with an API key, or a curated Markdown file. Each is right some of the time and wrong a lot of the time. Here's the honest decision tree.



Someone on LinkedIn will tell you this week that Model Context Protocol is the future and if you're still shipping REST APIs you're building for a world that ended last year. Someone on Hacker News will tell you the same week that MCP is a protocol in search of a problem and that LLMs already know how to read a man page just fine. Someone on Twitter will quietly post their llms.txt file and say they're done thinking about it.

They can't all be right. But they're also not all wrong. Each of those three surfaces — MCP, a REST endpoint with an API key, a hand-curated Markdown file — is genuinely the best answer some of the time. The useful question is not which one wins. The useful question is: for this data, for this agent, for this freshness requirement, which surface costs the least to own and produces the fewest wrong answers?

This is a post about that tradeoff. Not a sales pitch for any of the three. A decision tree, with the math and the failure modes.


The Three Contenders, Honestly

Before you can pick, you need to know what each one is actually optimized for — not what the marketing says.

MCP — typed tools over a live connection

MCP is a stateful protocol that lets an agent discover what tools a server exposes at runtime, then call them with typed arguments and get typed results back. The session stays open; the model can call tool A, get a result, reason about it, then call tool B on the back of that result without re-authenticating, re-listing capabilities, or re-loading a schema.

What it's built for: agent ↔ system-with-actions. Read live data, write live data, run a workflow, trigger a side effect. The protocol pays for itself when the set of available actions is large, changes often, or needs to be discovered per-session.

What it's not: a good way to hand a model a document. MCP tool descriptions live in the context window. Connecting to a server with 30 tools burns a few thousand tokens per session on schemas the model probably doesn't need on any given turn.
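To make that tax concrete, here's a sketch of what a single tool definition looks like on the wire (the tool name, description, and schema are invented for illustration), with a crude chars-per-token estimate:

```python
import json

# Hypothetical MCP tool definition, roughly the shape a server returns
# from tools/list. Every connected client pays for this in context.
tool_def = {
    "name": "get_invoice",
    "description": (
        "Fetch a single invoice by ID. Returns the invoice header, "
        "line items, totals, and current payment status. Use this "
        "before attempting any refund or adjustment action."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string", "description": "Invoice ID, e.g. INV-1042"},
            "include_line_items": {"type": "boolean", "description": "Include line items"},
        },
        "required": ["invoice_id"],
    },
}

# Crude but common heuristic: ~4 characters per token for English/JSON.
serialized = json.dumps(tool_def)
est_tokens = len(serialized) // 4

print(f"~{est_tokens} tokens for one tool definition")
```

At roughly this size per tool, a 30-tool server costs a few thousand tokens of schema before the first user message arrives.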

REST + API key — the boring one that already works

A REST endpoint with an X-Api-Key header. You document it in the system prompt. The model calls it via a tool-use wrapper. The response comes back as JSON, the model reads the JSON, it continues.

What it's built for: one clearly-scoped action or query the model performs repeatedly. Look up a user, create a job, fetch yesterday's metrics. A predictable shape, a predictable cost, and — importantly — decades of training data. Every major LLM has seen a million curl examples. It knows what to do with an API.

What it's not: a good protocol when the shape of the interaction is unknown at design time. If you don't know which of 50 tools the agent will need, documenting them all inline quickly gets expensive — both in tokens and in cognitive load for the model.

Markdown (llms.txt and friends) — static, curated, cacheable

A hand-written or generated Markdown file. Often at /llms.txt or /docs/llms-full.txt. Loaded once per session into the system prompt. No tools. No protocol. Just text.

What it's built for: static context the agent needs a lot of, that doesn't change between turns. Your product documentation, your policy, your glossary, the shape of your data model. Cache it once, let the model use it for every follow-up question.

What it's not: a place to put anything live. If the data changes on any timescale the user cares about, Markdown is a trap — you're committing to regenerate it, and the agent has no way to know it's stale.


What You're Actually Optimizing

People frame this as a battle of protocols. It isn't. The underlying question is always the same:

How do I get the right information in front of the model at the right time, for the least token cost, with the best chance it stays correct?

That splits into four dimensions, and each of the three surfaces picks a different corner:

| Dimension | MCP | REST + Key | Markdown |
|---|---|---|---|
| Data freshness | Live (per-call) | Live (per-call) | Frozen at build time |
| Shape of interaction | Discoverable, many | Fixed, few | Read-only, static |
| Token cost per session | High (schema load) | Low (one endpoint) | Medium (full doc) |
| Debuggability | JSON-RPC logs | curl | cat |
| Write side effects | Yes | Yes | No |

Every row of that table is a tradeoff someone is paying, whether they realize it or not. The trick is to stop choosing on ideology and start choosing based on which tradeoffs your use case can absorb.


The Token Cost Nobody Prints

This is where the MCP vs REST debate actually gets interesting, and it's the part most comparison posts skip.

Imagine a modest agent with 10 MCP servers connected, each exposing 5 tools. Each tool has a name, a description (~50 tokens), a JSON-schema for arguments (~100 tokens), a return schema (~50 tokens). That's ~200 tokens per tool, ~1,000 per server, ~10,000 tokens of schema loaded into the context window before the model does anything useful.

At Claude Sonnet 4.6 input pricing (~$3 per 1M tokens), that's $0.03 per session just on tool definitions. Multiply by 10,000 sessions a day and MCP has quietly become a $300/day line item with nothing to show for it yet.
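The arithmetic is worth writing down, since every input (tool count, schema size, price) is an assumption you should replace with your own numbers:

```python
# Back-of-envelope context tax for MCP tool schemas.
# All inputs are the assumptions from the text above; substitute your own.
servers = 10
tools_per_server = 5
tokens_per_tool = 200          # name + description + arg schema + return schema
price_per_mtok = 3.00          # USD per 1M input tokens, Sonnet-class pricing
sessions_per_day = 10_000

schema_tokens = servers * tools_per_server * tokens_per_tool
cost_per_session = schema_tokens * price_per_mtok / 1_000_000
cost_per_day = cost_per_session * sessions_per_day

print(f"{schema_tokens} schema tokens/session")   # 10000
print(f"${cost_per_session:.2f}/session")         # $0.03
print(f"${cost_per_day:.0f}/day")                 # $300
```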

The REST equivalent: one paragraph in your system prompt that says "to fetch X, POST to /api/x with header X-Api-Key, body {id}, returns {name, age}". Maybe 100 tokens. The model uses it the same way. You save 99% of the context tax.
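As a sketch of how little machinery that takes (the endpoint, header name, and key below are invented for illustration), the entire client side is a request you can build and audit in a few lines:

```python
import json
import urllib.request

API_BASE = "https://api.example.com"   # hypothetical endpoint
API_KEY = "demo-key-123"               # hypothetical key; load from env in practice

def build_fetch_x(record_id: str) -> urllib.request.Request:
    """Build (but don't send) the request the 100-token prompt paragraph describes."""
    body = json.dumps({"id": record_id}).encode()
    return urllib.request.Request(
        f"{API_BASE}/api/x",
        data=body,
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_fetch_x("42")
print(req.method, req.full_url)   # POST https://api.example.com/api/x
```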

The Markdown equivalent, for documentation-shaped data: you load the file once per session as a cached prefix. Prompt caching (Anthropic, Gemini, and OpenAI all ship it) drops the repeated-reads cost to a tenth. A 20k-token llms.txt cached is effectively free after the first query.
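The same exercise for the cached-Markdown path, with the cache discount as an explicit assumption (vendors differ; a ~10x cheaper cache read is typical of current pricing, and cache writes often carry a small surcharge not modeled here):

```python
# Effective cost of a 20k-token llms.txt prefix over one session, cached vs not.
# The 0.1x cache-read multiplier is an assumption; check your vendor's pricing.
doc_tokens = 20_000
price_per_mtok = 3.00
cache_read_multiplier = 0.1
turns = 20  # follow-up questions in one session

per_read = doc_tokens * price_per_mtok / 1_000_000
uncached = per_read * turns                                   # pay full price every turn
cached = per_read + per_read * cache_read_multiplier * (turns - 1)  # full once, then 10%

print(f"uncached: ${uncached:.2f}  cached: ${cached:.3f}")
```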

The brutally honest take: MCP is the most token-expensive of the three per session. That's not a bug — you're paying for runtime tool discovery and protocol affordances. But you only want to pay for that when you're actually using it. A static integration with one known endpoint does not need MCP, and shipping MCP for it is a tax, not a feature.


The Composability Argument

This is the pro-REST argument people make on Hacker News, and it's stronger than MCP advocates want to admit.

LLMs have been trained on millions of shell scripts, curl examples, and language-specific HTTP clients. When you give Claude or GPT a REST endpoint and an API key, it already knows how to:

  • Chain calls (call A, feed the id from A into B).
  • Handle errors (read the status code, retry with backoff, surface the message).
  • Compose with non-API tooling (pipe the response into jq, filter, re-call).

MCP, being younger, has a thinner training base and a more specific set of patterns. When things go sideways inside an MCP session, the debugging story is "read the JSON-RPC log, check the server process is still up, see if the transport closed." When things go sideways with a REST call, the debugging story is "copy the curl command, run it, read the response." The second one is a lot friendlier to both humans and models.

This is not a fatal argument against MCP — it's an argument for not using MCP when REST would have done the job. The protocol adds value precisely when the interaction is dynamic enough that REST's "document the endpoint in the prompt" doesn't scale. Below that threshold, REST wins on every axis that matters.


Where Markdown Quietly Wins

The most underrated of the three surfaces is also the oldest and least glamorous.

A lot of what an agent needs from your system is not live state. It's static context: your API surface, your naming conventions, your data dictionary, your product tiers, your FAQ. None of that changes between the 9am and 10am call. Cache it once in the prompt, let the model read it on every turn, move on.

llms.txt — a Markdown file at the root of your site, structured for LLM consumption — is the emerging convention for this. It's dumb in the way good standards are dumb: no protocol, no server, no auth, no runtime discovery. Just a document.
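A minimal example in the shape the llms.txt proposal suggests (project name and URLs below are placeholders): an H1, a one-line blockquote summary, then H2 sections of annotated links:

```markdown
# Example Product

> Example Product is a billing API for small teams. This file lists the
> docs an LLM should read before answering questions about it.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): auth, first request
- [Data model](https://example.com/docs/data-model.md): invoices, line items, customers

## Optional

- [Changelog](https://example.com/changelog.md): release history
```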

When is Markdown the right answer?

  • Your content is a document the model needs to reason about, not an action the model needs to take.
  • The document is small enough to fit in context and stable enough that a daily regeneration is fine.
  • Your users are going to ask questions about this content — not with it. (If you need the agent to act on behalf of a specific user, you need an API, not a document.)

When is Markdown the wrong answer?

  • The data is personalized, live, or transactional. An llms.txt file describing how to look up orders doesn't help; the agent still needs to actually look up the order.
  • The corpus is bigger than the context window. Now you're back to RAG or long-context stuffing, and llms.txt was never the right fit.

The unsexy truth is that most agent integrations need some Markdown and some API. The Markdown teaches the agent the shape of the world. The API lets it act in the world. MCP becomes relevant only when the set of actions is wide enough that the API itself needs discoverability.


The Hybrid Most Real Systems Actually Ship

The binary framing — MCP or REST or Markdown — is almost always wrong in production. Real systems end up layered:

[ Markdown / llms.txt ]   ← loads once, cached in prompt
         │
         ▼
 static context: API shape, data dictionary, policy, glossary
         │
         ▼
[ REST endpoints ]        ← called per-query for live data
         │
         ▼
 dynamic data: lookups, writes, actions with known shape
         │
         ▼
[ MCP server ]            ← optional, when action set is large
                            or dynamic enough to need discovery

This layering matters because each surface is doing a different job:

  • Markdown teaches the agent your domain without burning turns.
  • REST handles the 5–10 core actions that are known at design time.
  • MCP is the escape hatch for when the action set outgrows what you can enumerate in a system prompt.
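One way to read that layering is as a prompt-assembly rule. This sketch (function name, field names, and the threshold of 10 are all invented for illustration) ships the static layers unconditionally and reaches for MCP only when enumerating actions inline stops scaling:

```python
def assemble_integration(llms_txt: str, action_docs: list[str],
                         mcp_servers: list[str], inline_limit: int = 10) -> dict:
    """Pick surfaces per the layering above: Markdown always, then REST or MCP."""
    config = {
        "cached_prefix": llms_txt,   # static context, cached once per session
        "inline_tools": [],
        "mcp_servers": [],
    }
    if len(action_docs) <= inline_limit:
        config["inline_tools"] = action_docs   # known at design time: document in prompt
    else:
        config["mcp_servers"] = mcp_servers    # too many to enumerate: discover at runtime
    return config

small = assemble_integration("...docs...", ["lookup_user", "create_job"], ["crm-mcp"])
print(small["mcp_servers"])   # [] -- two actions don't justify a protocol
```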

The anti-pattern is picking one of the three and forcing everything through it: shipping every document as an MCP tool, or shipping every live lookup as a static Markdown export, or shipping every integration as a bespoke REST call when the set is clearly unbounded. Each of those is a bad fit dressed up as simplicity.


Decision Tree

Run this before you pick:

  1. Is the thing you're exposing a document the model should reason about? → Markdown / llms.txt. Load it in the system prompt, let prompt caching do its job.
  2. Is it a small, fixed set of live actions (fewer than ~10) that you know at design time? → REST + API key. Document the endpoints in the system prompt or as inline tools. Don't adopt MCP for five endpoints.
  3. Is the action set large, evolving, or dynamic enough that you'd rather the agent discover tools at runtime than you maintain a list? → MCP. This is where the protocol earns its token tax.
  4. Do multiple agent clients (Claude Desktop, Cursor, your own) need the same surface? → MCP. The portability is the feature.
  5. Is the data live, the set of actions fixed, and the agent your own code? → REST. MCP is overkill; Markdown can't carry it.
  6. Still unsure? → Ship Markdown for the docs and REST for the one action you need now. You can always adopt MCP later; it's strictly additive.
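The same tree, as a function you can argue with (the thresholds and answers are the post's; the encoding is a sketch, and it folds steps 2/5 and 3/4 together since they give the same answer):

```python
def pick_surface(is_document: bool, action_count: int, actions_known: bool) -> str:
    """Encode the decision tree above: which surface to ship first."""
    if is_document:
        return "markdown"   # step 1: something to reason about, not act on
    if actions_known and action_count < 10:
        return "rest"       # steps 2 and 5: small fixed set, known at design time
    return "mcp"            # steps 3 and 4: large/dynamic set needs runtime discovery

print(pick_surface(True, 0, True))     # markdown
print(pick_surface(False, 5, True))    # rest
print(pick_surface(False, 50, False))  # mcp
```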

And the meta-rule: the right surface is the one with the lowest total cost of ownership for this integration. That includes token cost, protocol-maintenance cost, debugging cost, and the ever-present cost of an agent doing the wrong thing because you handed it too much or too little.


Where This Leaves You

Two years from now, the three surfaces will probably coexist the same way HTTP, CLI, and static files coexist today. Nobody argues about whether a website should "use HTTP or the filesystem." They do both, for different jobs. MCP vs REST vs Markdown is the same kind of non-argument — a useful debate that stops being interesting as soon as you pick the right tool for each layer of your stack.

The mistake is treating protocol choice as identity. "We're an MCP shop." "We're REST-only." "Just put it in llms.txt." Each of those sentences describes a team that has prematurely collapsed a three-way tradeoff into a one-way bet. The builders shipping the best agent experiences right now are the ones who picked each surface on the merits, per integration, and moved on.

If you want a simpler heuristic: Markdown for what the agent needs to know, REST for what it needs to do, MCP for when the set of things it can do is too big to write down. That will serve you well until something better comes along — and something always does.
