GEON
SEO & GEO · 3 days ago · 6 min

Stop Opening the GEO Dashboard: Wire GEON's MCP Server Into Your AI Client

Three steps to wire GEON into Claude Desktop or Cursor — visibility scores, scans, and action triage become chat-native instead of a dashboard tab you forget to open.

Adding GEON's MCP server to Claude Desktop or Cursor takes three steps: open your client's MCP config file, paste the GEON server entry, and authenticate once via OAuth. After that, visibility scores, scans, and prompt audits are one chat sentence away — your AI client calls GEON's tools directly, so GEO checks happen where you already work instead of in a separate dashboard tab.

Connect GEON MCP in 3 Steps

Step 1. Open your client's MCP config. In Claude Desktop, that's claude_desktop_config.json, reachable via Settings → Developer → Edit Config. In Cursor, it's mcp.json, scoped per-project under .cursor/ or globally under ~/.cursor/.

Step 2. Add a GEON server entry pointing at the OAuth-protected resource endpoint. The full snippet sits a few sections below — it's about six lines of JSON.

Step 3. Restart the client. On first call to a GEON tool, your browser opens for OAuth consent. Grant access once and the token persists. From there on, every visibility check, scan, and action update is a chat-native call.

The payoff: GEO measurement collapses into the surface where you're already prompting, drafting, and reviewing. No second tab, no copy-paste, no dashboard ritual.

What MCP Actually Is, in 60 Seconds

The Model Context Protocol is an open standard Anthropic introduced in November 2024 for connecting AI assistants to external tools and data sources. Think of it as USB for AI clients: any compatible client (Claude Desktop, Cursor, Zed, custom agents) can talk to any compatible server, and the wiring is the same regardless of vendor.

Authentication runs on OAuth 2.0. MCP servers advertise authorization metadata via the Protected Resource Metadata endpoint defined in RFC 9728, which is what clients fetch from /.well-known/oauth-protected-resource. You don't manage tokens by hand — your client handles the dance.
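That discovery step is mechanical enough to sketch. Per RFC 9728, the metadata URL is built by inserting the well-known suffix between the host and the resource's path component, so a client pointed at GEON's endpoint derives it like this (a minimal Python sketch of what your client does for you, not something you need to run):

```python
from urllib.parse import urlparse

def protected_resource_metadata_url(resource: str) -> str:
    """Derive the RFC 9728 metadata URL for an OAuth-protected resource.

    The well-known suffix is inserted between the host and the
    resource's path component, per RFC 9728 section 3.1.
    """
    parts = urlparse(resource)
    path = parts.path.rstrip("/")
    return f"{parts.scheme}://{parts.netloc}/.well-known/oauth-protected-resource{path}"

print(protected_resource_metadata_url("https://usegeon.com/api/mcp"))
# → https://usegeon.com/.well-known/oauth-protected-resource/api/mcp
```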

For GEO teams, the practical shift is this: your visibility data stops being a screen you have to remember to open and becomes a tool the model can call. The model picks the right tool from your natural-language intent.

The Config Snippet That Wires It Up

For Claude Desktop, edit claude_desktop_config.json and add a geon entry under mcpServers:

```json
{
  "mcpServers": {
    "geon": {
      "url": "https://usegeon.com/api/mcp"
    }
  }
}
```

Claude Desktop's user quickstart documents the file's location on each OS — macOS keeps it in ~/Library/Application Support/Claude/, Windows under %APPDATA%\Claude\.

For Cursor, the equivalent file is mcp.json. Drop it at ~/.cursor/mcp.json for global access or .cursor/mcp.json inside a project for repo-scoped tools. The JSON shape mirrors Claude Desktop's. Cursor's MCP docs cover the per-project vs global scoping if you want different tool sets per workspace.
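Spelled out, the Cursor file carries the same mcpServers key and geon entry as the Claude Desktop snippet above, just saved at one of those two paths:

```json
{
  "mcpServers": {
    "geon": {
      "url": "https://usegeon.com/api/mcp"
    }
  }
}
```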

Restart the client. On first GEON tool call, OAuth consent fires in your browser. After approval, the token is cached and tools are callable from any chat in that client. To verify which tools your client actually picked up, GEON publishes the live tool reference.

The GEON Toolset Through the Eyes of Your AI Client

Twelve tools, grouped by intent:

Status & sanity. check_health confirms the server is reachable. get_credits tells you how many scans you can still run this period. get_workspace_details and list_workspaces cover multi-brand setups — you can ask "which workspace am I in?" without leaving chat.

Measurement. get_visibility_score returns the current week's number. get_scan_history returns trend data for delta math. list_prompts shows which queries GEON is tracking on your behalf so you can spot gaps.

Action. trigger_scan runs a fresh full scan on demand. scan_single_prompt runs a one-off scan against a specific query — useful when you want to test a new long-tail prompt without committing it to your tracked set. check_website_readiness audits a target URL for citation-worthiness. list_actions and update_action_status close the loop on remediation work.

The point isn't memorization. The model picks the tool from your sentence. You ask "are we still cited for 'best CRM'?" and it picks scan_single_prompt. You ask "what's our score this week?" and it picks get_visibility_score. The chat is the interface.
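Under the hood, each of those picks becomes an MCP tools/call request over JSON-RPC 2.0. Here's a sketch of the wire shape; the tool name comes from GEON's list above, but the "prompt" argument key is an illustrative assumption, not GEON's documented schema:

```python
import json

# Wire shape of an MCP tool invocation (JSON-RPC 2.0). The client
# builds and sends this for you when you ask in plain language.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "scan_single_prompt",
        # Argument key is hypothetical; see GEON's live tool reference.
        "arguments": {"prompt": "best CRM for solo founders"},
    },
}
print(json.dumps(request, indent=2))
```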

Three Conversations You Can Run Today

The morning check-in.

You: What's our visibility score this week vs last?

Assistant: Calling get_visibility_score and get_scan_history

This week: 0.62 · Last week: 0.55 · Delta: +0.07. Citation count rose on Perplexity but dipped slightly on Google AI Overviews. Want the prompt-level breakdown?
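The delta the assistant reports is plain week-over-week arithmetic on get_scan_history output. A sketch, assuming the history comes back as a list of weekly entries with the newest last; the field names here are hypothetical:

```python
# Hypothetical shape for get_scan_history output, newest entry last.
history = [
    {"week": "2025-W06", "visibility_score": 0.55},
    {"week": "2025-W07", "visibility_score": 0.62},
]

current, previous = history[-1], history[-2]
delta = round(current["visibility_score"] - previous["visibility_score"], 2)
print(f"This week: {current['visibility_score']} · "
      f"Last week: {previous['visibility_score']} · Delta: {delta:+}")
# → This week: 0.62 · Last week: 0.55 · Delta: +0.07
```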

The prompt audit.

You: Run a scan on the prompt: "best CRM for solo founders".

Assistant: Calling scan_single_prompt

ChatGPT mentioned the brand once (position 3). Perplexity cited the docs page directly. Gemini didn't surface a brand mention this run. Should I queue a check_website_readiness on the CRM landing page?

The action loop.

You: List open actions and close the ones I shipped this week.

Assistant: Calling list_actions

7 open actions. 3 are tagged "shipped" in the linked PRs. Updating those to closed via update_action_status — done. 4 remaining: 2 schema fixes, 1 hreflang issue, 1 cover-image refresh.

Three turns. No dashboard. No tab-switching. The conversation is the workflow.

From Dashboard Discipline to Daily Reflex

GEO loses momentum the moment checking it requires a context switch. You finish a brief, mean to verify visibility, and the tab is two clicks away — so you skip it. A week passes. Patterns drift.

MCP collapses the switch. A few practical patterns worth stealing:

  • Pin a "GEO standup" system prompt that asks the model to call get_visibility_score and flag regressions before your morning standup. The model becomes a watchdog you don't have to remember.
  • Trigger a scan after every publish. Your CMS workflow ends with one chat turn: "publish this and run trigger_scan on the relevant prompts."
  • Use list_actions as triage. Open chat, ask for the list, close what shipped. No Kanban round-trip.

Limits worth naming. Tool calls carry latency — expect 2-5 seconds per round-trip. OAuth tokens expire, so expect to re-authenticate periodically. Scans consume credits, so trigger_scan and scan_single_prompt are rate-limited by your plan tier; check get_credits before queuing a batch. Plan sizing lives on GEON's pricing page.
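That get_credits-before-batch habit is easy to encode. A sketch, assuming one credit per scan_single_prompt call — your plan tier may price scans differently:

```python
def scans_to_queue(credits_remaining: int, prompts: list[str]) -> list[str]:
    """Queue only as many single-prompt scans as remaining credits allow.

    Assumes one credit per scan_single_prompt call (an assumption, not
    GEON's documented pricing); anything past the budget waits for the
    next period.
    """
    return prompts[:max(credits_remaining, 0)]

batch = ["best CRM for solo founders", "CRM with free tier", "CRM for agencies"]
print(scans_to_queue(2, batch))
# → ['best CRM for solo founders', 'CRM with free tier']
```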

The next horizon is composition: combine GEON's MCP server with your CMS or publishing MCP servers (Notion, Webflow, Linear, Sanity). One chat turn becomes "publish this post, scan the new prompts, log the action." That's the shape of GEO once it's wired into the surface where the work already happens.

If you'd rather skip MCP and call the underlying primitives directly, GEON's REST API is the non-MCP path — same data, more code.

Deniz

Content & GEO Strategy