Local MCP servers run on your machine. Remote MCP servers run on a backend somewhere. The right choice depends on what the server actually does. Here's the honest tradeoff.
Every MCP server has to live somewhere. Either it runs as a subprocess on your machine (stdio transport) or it's reachable over HTTP (Streamable HTTP transport). That one architectural decision — local or remote — shapes everything about how the server works, what it can access, who can share it, and what it costs.
The Model Context Protocol specification is neutral on this. Both transport modes are first-class. So the decision is yours, server by server, and it matters more than most people realize at first.
A local MCP server is a program that runs on your machine when your MCP client (Claude Code, Claude Desktop, Cursor, VS Code, etc.) starts it. It communicates over stdin/stdout — the stdio transport. It has access to whatever your user account has access to: your filesystem, your local processes, your SSH keys, your environment variables. Most MCP servers you install via npx or uvx are local.
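In a client config, a local stdio server is an entry with a command to spawn. A minimal sketch, following the `mcpServers` shape used by Claude Desktop and Claude Code (the path is illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

The client launches this process itself and owns its lifetime; when the client exits, the server does too.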
A remote MCP server is a program running somewhere else — a VPS, a hosted service, a company's internal backend. Your MCP client talks to it over HTTP (Streamable HTTP is the current spec; SSE was the older way). The server has access to whatever it has credentials for: a SaaS API, a shared database, a company's cloud infrastructure. You don't run it; you authenticate to it.
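The remote equivalent is an entry with a URL and credentials rather than a command. A sketch of the shape Claude Code accepts in .mcp.json (exact fields vary by client; the hostname and token are placeholders):

```json
{
  "mcpServers": {
    "company-db": {
      "type": "http",
      "url": "https://mcp.example.com/mcp",
      "headers": {
        "Authorization": "Bearer <your-token>"
      }
    }
  }
}
```

Nothing gets installed or spawned locally; the client just opens HTTP connections to a server someone else keeps running.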
Same protocol. Same tools/list, tools/call, resources, and prompts semantics. Totally different operational model.
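To make "same protocol" concrete: the payload is an identical JSON-RPC 2.0 message either way; only the delivery mechanism differs. A simplified sketch (framing details like HTTP headers and session IDs omitted):

```python
import json

# A tools/list request, as defined by JSON-RPC 2.0 and the MCP spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# stdio transport: the message is written to the subprocess's stdin,
# one newline-delimited JSON object per message.
stdio_frame = json.dumps(request) + "\n"

# Streamable HTTP transport: the same JSON goes out as an HTTP POST body.
http_body = json.dumps(request)

# The payload itself is byte-for-byte identical.
print(stdio_frame.strip() == http_body)
```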
Local MCP servers are the right choice when:
Filesystem access. Local database queries. Git operations on a cloned repo. Running grep against files on your machine. These require access to resources that only exist on your computer. A remote server can't reach into your local filesystem, and you probably don't want it to.
Every standard MCP server in this category — filesystem, git, sqlite, fetch, memory — ships as a local stdio server for good reason. There's nothing to gain by making them remote.
Local servers don't send your tool invocations anywhere you didn't explicitly configure. If you're working on proprietary code, sensitive customer data, or anything under NDA, local servers keep that data on your machine. No server provider sees it. No network traffic. No audit trail beyond your own system logs.
For regulated industries (healthcare, finance, defense), this isn't just a preference; it's often a hard compliance requirement. Local MCP servers are the path of least resistance for anything covered by HIPAA, GDPR, or similar regimes.
You wrote a 200-line MCP server that automates some specific workflow for your own machine. No one else needs to use it. No reason to host it anywhere. Local is fastest to iterate, easiest to debug, requires zero infrastructure.
A local MCP server has zero network round-trip. If the tool call is cheap compute (parse a file, run a regex, hash a string), local is measurably faster than remote even when the remote is on fast infrastructure.
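A rough cost model makes the point (the numbers are assumptions for illustration, not measurements): for a cheap tool call, the network round-trip dominates total latency.

```python
# Toy latency model. Assumed numbers: a trivial tool call costs ~1 ms of
# compute; a remote round-trip adds ~50 ms of network time per call.
def local_call_ms(compute_ms: float) -> float:
    # stdio round-trip overhead on the same machine is negligible
    return compute_ms

def remote_call_ms(compute_ms: float, rtt_ms: float = 50.0) -> float:
    # every remote call pays the network round-trip on top of compute
    return compute_ms + rtt_ms

print(local_call_ms(1.0))   # 1.0
print(remote_call_ms(1.0))  # 51.0
```

For a 1 ms operation, that's a ~50x difference, and no amount of fast server hardware recovers it. For expensive operations the ratio shrinks and the argument flips.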
An MCP server that queries your company's production database needs credentials to that database. You don't want every engineer's laptop holding DB credentials — that's a compliance nightmare and a credential-leak surface. Instead, the MCP server runs on company infrastructure, holds the credentials centrally, and authenticates users who connect to it.
Same story for any SaaS where you have one team API key: Stripe, Slack admin actions, AWS account-level operations. Remote-hosted server with scoped auth beats every-engineer-has-a-key by a wide margin.
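The pattern is simple to sketch: the shared credential lives only on the server, and callers present per-user tokens that map to scopes. Everything here (token names, scope strings) is hypothetical:

```python
import hmac

# The one team API key lives only on the server; it is never sent to callers.
TEAM_API_KEY = "sk_live_..."

# Per-user tokens, each with its own scope set. Revoking one user's access
# means deleting one row, not rotating the shared key for everyone.
USER_TOKENS = {
    "tok_alice": {"scopes": {"charges:read"}},
    "tok_bob": {"scopes": {"charges:read", "refunds:write"}},
}

def authorize(token: str, scope: str) -> bool:
    """Check a caller's token against the server-side scope table."""
    for known, entry in USER_TOKENS.items():
        # constant-time comparison to avoid timing leaks on the token
        if hmac.compare_digest(known, token):
            return scope in entry["scopes"]
    return False
```

The server checks `authorize(token, scope)` before each tool call, then uses `TEAM_API_KEY` on the caller's behalf. No engineer's laptop ever holds the key.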
A local MCP server lives on one developer's machine. Onboarding a new teammate means they install it, configure it, troubleshoot it on their specific OS. A remote server runs once and anyone on the team can connect to it with the shared URL + auth. Onboarding is a link + an API key.
For any MCP server that multiple people on a team will use — internal tools, shared dashboards, proprietary integrations — remote makes team collaboration dramatically easier.
A vector database. A web crawler with its own request pool. A server with heavy dependencies (Python ML models, large language models running locally, custom native extensions). A server that needs to scrape web content through rotating proxies.
Making every user install all of that on their laptop is a support nightmare. Remote-hosted, it's installed once on infrastructure that can actually carry it.
If you're building an MCP server for external customers to use — a SaaS vendor offering tool access to their platform, for example — remote is essentially required. Your customers aren't going to install your server on their machines; they want to authenticate and use it immediately.
Here's where the decision gets serious.
A local MCP server runs with your full user privileges. It can read anything you can read. It can write anywhere you can write. It can execute arbitrary commands as you. The sandbox is exactly the sandbox your user account has, which in most dev environments is nearly no sandbox at all.
That means: every local MCP server you install is effectively trusted code running in your user context. Vet what you install. The prompt-injection + tool-description attack surface is real — a malicious local MCP server can expose your SSH keys, read your environment variables, modify your shell config. This is why spec compliance and code audit matter.
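One concrete consequence is easy to demonstrate: a stdio MCP server is just a child process, and by default a child process inherits your entire environment. A sketch (the secret is fake):

```python
import os
import subprocess
import sys

# Put a fake secret in the parent environment, as API keys often are.
os.environ["FAKE_SECRET"] = "hunter2"

child = "import os; print(os.environ.get('FAKE_SECRET'))"

# Default spawn: the child inherits everything, secret included.
inherited = subprocess.run(
    [sys.executable, "-c", child],
    capture_output=True, text=True,
).stdout.strip()
print(inherited)  # hunter2

# Mitigation: pass a minimal, explicit environment instead.
scrubbed = subprocess.run(
    [sys.executable, "-c", child],
    capture_output=True, text=True,
    env={"PATH": os.environ["PATH"]},
).stdout.strip()
print(scrubbed)  # None
```

Some MCP clients let you set the server's environment explicitly in config; using that, rather than relying on inheritance, narrows what an untrusted server can see.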
A remote MCP server runs with its privileges, not yours. It can only access what the server operator has given it access to. You authenticate to it, and whatever you can do through the exposed tools is bounded by the server's own permission model. That's a stronger isolation boundary in principle — but it also means you have to trust the server operator, because now they can see your tool call patterns, the arguments you send, and the resources you request.
Neither is automatically safer. Local has broader blast radius if compromised; remote has narrower but different blast radius (you're trusting the operator, you're exposing metadata over the network).
| Use case | Recommendation |
|---|---|
| Filesystem, git, local DB, personal automation | Local |
| Company database access (production) | Remote |
| Shared SaaS team credentials (Slack, Stripe, Linear) | Remote |
| Web search / fetch / crawl | Either — local for personal, remote for team |
| Vector DB queries against a team-shared index | Remote |
| Proprietary code analysis on sensitive repos | Local |
| Publishing an MCP server to external users | Remote |
| Experimental / one-off personal tool | Local |
| Regulated data (HIPAA, GDPR-restricted, defense) | Local, almost always |
Most practical setups mix both. You have a dozen local MCP servers handling personal workflows, plus a handful of remote MCP servers for shared team resources. Your MCP client sees them all the same way — the protocol is neutral.
The friction point is usually management. Twelve servers configured across three machines and four MCP clients adds up fast. Hand-editing .claude.json, mcp-config.ts, .cursor/mcp.json, and whatever VS Code wants is where MCP users feel the most pain.
This is the problem we built mcp.hosting to solve. One web dashboard manages both local and remote servers. One local agent (@yawlabs/mcph) runs on your machine, fetches the current config from the cloud, executes local stdio servers locally, and proxies remote HTTP servers transparently. Your MCP client sees a consistent interface either way. Changes sync to every client you use.
But the point of this post isn't the tool — it's the underlying decision. Every MCP server you add is a local-or-remote choice. Making it deliberately, per server, based on what the server actually needs to do, is the difference between a clean setup and a security audit.
A sensible default policy for most developer workflows: start local, and move a server to remote the moment it needs a shared credential or a second user.
And always — regardless of local or remote — check the compliance grade of any MCP server before you install it. We published why compliance grading matters last week. A Grade F MCP server in local-stdio mode has as much access as anything else running as your user; in remote mode it's a vendor you're trusting with your tool-call metadata. Either way, verify the server behaves.
MCP is growing fast. The local-vs-remote decision is one of the more consequential ones developers make when adopting it. Worth doing deliberately.
Published by Yaw Labs.
Interested in AI tools and developer workflows? Token Limit News is our weekly newsletter.