Local MCP: Giving Claude Eyes Into My Machine
I was tired of copying and pasting file content into Claude just to get help with my own code. So I built an MCP server that gives Claude direct access to my local environment.
If you are not using premium AI dev tools like Claude Code or Cursor, you have probably felt this friction.
You ask Claude for help. It needs context. So you start copying. Files, logs, queries, folder structures. Back and forth until it finally has enough to give a useful answer.
At that point, the problem is no longer the hard part. Getting the context into the model is.
That did not make sense to me.
So instead of copying my environment into Claude, I built a way for Claude to access it directly.
What Local MCP is
Local MCP is an HTTP server that implements Anthropic's Model Context Protocol. You run it locally inside Docker, and it exposes your machine — your files, your terminal, your git repos, your databases — as tools that any MCP-compatible AI agent can call directly.
You point Claude.ai at it through the connectors settings, and from that point Claude can read your files, run commands, check your git history, query your local Postgres database, inspect your running processes, and more. Without you copying anything.
That is the core idea. Instead of you bridging the gap between Claude and your environment manually, the server does it.
This is not just a Claude Code alternative
Claude Code is great if you are willing to pay and willing to stay within the Claude ecosystem. But that is also the limitation. You are locked in.
This MCP server is not tied to any specific agent. It is just an HTTP server that speaks the Model Context Protocol. That means it works with Claude.ai, Cursor, any API client you build yourself, or any other MCP-compatible agent that comes out tomorrow.
You set it up once. Any agent that supports MCP can connect to it. The server does not care who is asking.
That distinction matters. The value of this project is not just "Claude can now see my files." It is that you own the interface, and any AI that can speak MCP gets to use it.
How it works
Every request to /mcp spins up a fresh server instance, handles the request, and closes. No session state, no persistent connection. This stateless design is specifically what makes it work with Claude.ai, which does not support long-lived MCP sessions.
```
LLM Client (Claude / Cursor / API)
                │  HTTPS POST /mcp
                ▼
          [ngrok tunnel]
                │
                ▼
     Express HTTP Server (:3000)
                │
                ├── POST /mcp   → stateless McpServer instance per request
                ├── GET /health → server status
                └── ...
                │
                ▼
     McpServer (fresh per request)
                │
     ┌──────────┼──────────┬──────────┬──────────┬──────────┐
     ▼          ▼          ▼          ▼          ▼          ▼
filesystem     git      network     shell     system    postgres
```
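To make the stateless design concrete, here is a minimal sketch of the per-request lifecycle. The names `createMcpServer`, `handleMcpRequest`, and the `fs_read` tool are illustrative stand-ins, not the real SDK or project API; only the JSON-RPC 2.0 framing follows the MCP spec.

```typescript
// Sketch of the stateless design: every request builds a fresh server,
// handles exactly one JSON-RPC message, and throws the instance away.

type ToolHandler = (args: Record<string, unknown>) => unknown;

// A "server" here is just a tool table; the real McpServer does much more.
function createMcpServer(): Map<string, ToolHandler> {
  const tools = new Map<string, ToolHandler>();
  tools.set("fs_read", ({ path }) => `contents of ${path}`); // placeholder tool
  return tools;
}

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: { name: string; arguments: Record<string, unknown> };
}

// One request in, one response out; no session state survives the call.
function handleMcpRequest(req: JsonRpcRequest) {
  const server = createMcpServer(); // fresh instance per request
  if (req.method === "tools/call" && req.params) {
    const tool = server.get(req.params.name);
    if (!tool) {
      return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "unknown tool" } };
    }
    return { jsonrpc: "2.0", id: req.id, result: tool(req.params.arguments) };
  }
  return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "unsupported method" } };
}
```

Because nothing outlives `handleMcpRequest`, there is no session to negotiate, which is exactly what Claude.ai's connector model requires.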
The codebase is TypeScript, structured so each tool category lives in its own folder under src/tools/. Each one exports a registration function. The server calls them all at startup. Adding a new tool is just adding a file — nothing else in the server needs to change.
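The registration pattern might look roughly like the sketch below. The function names and the registry shape are assumptions for illustration, not the project's actual code, but the idea is the same: each category file exports one function, and the server just walks the list at startup.

```typescript
// Illustrative sketch of the per-category registration pattern;
// names here are hypothetical, not the project's real identifiers.

type ToolHandler = (args: Record<string, unknown>) => unknown;

interface ToolRegistry {
  register(name: string, handler: ToolHandler): void;
}

// What something like src/tools/git/index.ts might export:
function registerGitTools(registry: ToolRegistry): void {
  registry.register("git_status", () => "clean working tree"); // placeholder
  registry.register("git_log", () => []);                      // placeholder
}

// What something like src/tools/filesystem/index.ts might export:
function registerFilesystemTools(registry: ToolRegistry): void {
  registry.register("fs_list", () => ["src", "package.json"]); // placeholder
}

// Adding a tool category means adding one entry here and nothing else.
const registrations = [registerGitTools, registerFilesystemTools];

const tools = new Map<string, ToolHandler>();
const registry: ToolRegistry = {
  register: (name, handler) => tools.set(name, handler),
};
for (const register of registrations) register(registry);
```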
What it can do
Filesystem — read, write, list, search, copy, move, delete. Claude can open any file in your project without you sending it. Binary files come back as base64.
Git — status, diff, log, branch, stage, commit, push, pull, clone, stash. A full git workflow callable from a conversation.
Shell — run commands, spawn background processes, tail their output, kill them, inspect environment variables. This is the one that feels like magic the first time you use it.
Network — HTTP requests, ping, DNS, port scanning, WHOIS, traceroute, file downloads. Useful when you are debugging connectivity issues and want Claude to actually check things rather than guess.
Postgres — list tables across schemas, describe table structure, run raw SQL. You pass the connection string per request. No credentials stored on the server, no global config.
System — disk usage, clipboard, notifications, screenshots, installed packages across npm, pip, brew, apt, and cargo.
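All of these are exposed as MCP tools, so under the hood each one is a JSON-RPC 2.0 `tools/call` request. As a rough illustration, a Postgres query might arrive looking something like this (the tool name and argument schema are hypothetical, not the project's actual definitions — only the `tools/call` framing comes from the MCP spec):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "postgres_query",
    "arguments": {
      "connectionString": "postgres://localhost:5432/mydb",
      "sql": "SELECT count(*) FROM users"
    }
  }
}
```

Note the connection string travels inside the request, which is how the server avoids storing any database credentials.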
The security side — and I will be honest about this
I want to be upfront: this project has real security limitations, and you should not run it on a production server or any machine you cannot afford to have compromised.
Here is what I have done to make it reasonable for personal use:
Every filesystem call goes through a path validation function that keeps operations inside FS_ROOT. Attempts to escape with ../ get blocked. The get_env tool automatically redacts anything that looks like a secret — keys, tokens, passwords. The Docker container runs as a non-root user. Your home directory is bind-mounted, so the server only sees what you explicitly expose.
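To make those two guards concrete, here is a minimal sketch of what they could look like. The function names, the `FS_ROOT` value, and the redaction regex are assumptions for illustration; the project's actual validation and redaction rules may differ.

```typescript
import * as path from "node:path";

// Hypothetical root; in practice this is whatever you bind-mount into Docker.
const FS_ROOT = "/home/user/exposed";

// Resolve the requested path and refuse anything that escapes FS_ROOT,
// including "../" sequences that survive normalization.
function validatePath(requested: string): string {
  const resolved = path.resolve(FS_ROOT, requested);
  if (resolved !== FS_ROOT && !resolved.startsWith(FS_ROOT + path.sep)) {
    throw new Error(`path escapes FS_ROOT: ${requested}`);
  }
  return resolved;
}

// Redact environment variables whose names look secret-bearing.
const SECRET_NAME = /(key|token|secret|password|credential)/i;

function redactEnv(env: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(env)) {
    out[name] = SECRET_NAME.test(name) ? "[REDACTED]" : value;
  }
  return out;
}
```

The key design point is that `validatePath` works on the resolved absolute path rather than the raw input string, so `../` tricks are caught after normalization rather than by pattern-matching the request.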
Auth is off by default for local testing, but the middleware is there. Set MCP_AUTH_TOKEN in your environment, and every request must include it in the headers.
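The check itself is simple. Here is a sketch of the logic, assuming a standard `Authorization: Bearer` header; the header name and middleware shape in the actual project may differ.

```typescript
// Sketch of shared-secret auth; header name is an assumption.
interface AuthResult {
  ok: boolean;
  status: number;
  message: string;
}

function checkAuth(
  headers: Record<string, string | undefined>,
  expectedToken: string | undefined, // process.env.MCP_AUTH_TOKEN in practice
): AuthResult {
  // MCP_AUTH_TOKEN unset: auth disabled, every request passes.
  if (!expectedToken) return { ok: true, status: 200, message: "auth disabled" };
  const header = headers["authorization"] ?? "";
  if (header === `Bearer ${expectedToken}`) {
    return { ok: true, status: 200, message: "ok" };
  }
  return { ok: false, status: 401, message: "missing or invalid token" };
}
```

A single shared secret like this is exactly the limitation described below: there is no expiry, rotation, or per-tool scoping in the check.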
But here is what I will not pretend:
The shell tool executes real commands on your machine. The SQL tool runs raw queries against your database. The path traversal protection is based on string validation — it is not bulletproof. The auth token, if enabled, is a single shared secret with no expiry, no rotation, no per-tool scope. If your ngrok URL leaks and your token is weak, someone could do serious damage.
I built this for my own development machine, with the understanding that I am the only one using it and that it stays off when I am not actively working. That is the context in which it is safe.
If you have ideas on how to make this more secure — better auth, tool-level permissions, request signing, anything — I am genuinely open to it. Open an issue or reach out directly. Security is the one area of this project I know needs more thinking, and I would rather admit that than pretend it is solved.
Getting started
```bash
git clone https://github.com/basit-devBE/local-mcp
cd local-mcp
npm install
docker compose up -d --build
ngrok http 3000
```
Copy the ngrok URL, go to Claude.ai → Settings → Connectors → Add MCP Server, paste it in with /mcp at the end.
That is it. The whole setup is about 10 minutes.
Why I think this matters
Premium AI developer tools are good. If you can afford them and they fit your workflow, use them.
But a lot of developers cannot or do not want to be locked into one ecosystem. And the copy-paste problem is real. Every time you manually ferry context between your environment and an AI assistant, you are doing work the AI should be doing.
This project closes that gap without requiring a subscription to anything. You run your own server, you control what gets exposed, and any MCP-compatible agent you use gets to take advantage of it.
The gap between "AI that responds to text" and "AI that actually knows your environment" is a lot smaller than it looks. It just takes a bit of setup.