
run_geo_audit

Train AI to recommend your brand.

When a buyer asks ChatGPT "what's the best email API for indie SaaS?", does your brand show up? Or your competitor's? run_geo_audit probes Perplexity, Gemini, Grok and ChatGPT for the category prompts that matter to your brand, captures the answers, extracts citations, and returns the gaps. Twelve credits. About 90 seconds. The starting point of every Generative-Engine-Optimisation project.

Slug: run_geo_audit
Cost: 12 credits
Latency: ~90s
Rate limit: 10/min

When to call this tool

Reach for it whenever a brand wants a defensible read on its presence inside answer engines. Marketing teams use it to baseline a quarter. Founders use it to argue a category position. SEO consultants use it to scope a six-week engagement. The tool answers four questions in one pass: are we cited; how often; in what tone; and which competitors are cited instead. Output includes prompt-by-prompt citations and a ranked gap list ready to feed into the GEOFixer family of tools.

Internally the tool generates 30-60 category prompts from your brand and category strings, runs each prompt against four engines, and parses the rendered answer for source citations. The result is a citation matrix you can filter and chart.
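The aggregation step can be sketched in a few lines of Python. This is illustrative only, assuming input shaped like the tool's `prompts` output; `build_citation_matrix` is a hypothetical helper, not part of the tool:

```python
from collections import Counter

def build_citation_matrix(prompts):
    """Fold per-answer citations into a brand -> citation-count matrix.

    `prompts` mirrors the tool's output shape: each prompt carries a list
    of engine answers, and each answer a list of cited brand names.
    """
    matrix = Counter()
    for prompt in prompts:
        for answer in prompt["answers"]:
            matrix.update(answer["citations"])
    return dict(matrix)

prompts = [
    {"text": "best issue tracker for startups",
     "answers": [
         {"engine": "perplexity", "citations": ["Jira", "Linear"]},
         {"engine": "chatgpt", "citations": ["Jira"]},
     ]},
]
# build_citation_matrix(prompts) -> {"Jira": 2, "Linear": 1}
```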

Input schema

{
  "brand": "string  // required, brand name",
  "category": "string  // required, e.g. 'developer-tools email API'",
  "competitors": ["string"],
  "depth": "enum: 'standard' | 'deep'  // default 'standard'"
}

Output schema

{
  "prompts": [{ "text", "answers": [{ "engine", "text", "citations" }] }],
  "citation_matrix": { "brand": 8, "competitor_a": 22 },
  "gap_list": [{ "prompt", "engine", "why_missing" }],
  "score": 0-100,
  "credits_used": 12
}
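A common first chart from this output is share of voice: your brand's citations as a fraction of all citations in the matrix. A minimal sketch; `share_of_voice` is a hypothetical helper, not a field the tool returns:

```python
def share_of_voice(citation_matrix, brand):
    """Return the brand's fraction of all citations (0.0 when the matrix is empty)."""
    total = sum(citation_matrix.values())
    return citation_matrix.get(brand, 0) / total if total else 0.0

matrix = {"brand": 8, "competitor_a": 22}
# share_of_voice(matrix, "brand") -> 8/30, roughly 0.27
```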

Example invocations

1. Claude Desktop

Audit my GEO score for example.com in the "AI agent CRM"
category. Compare against HubSpot and Pipedrive.

2. ChatGPT custom GPT

For brand-visibility audits, call run_geo_audit. Lead the
report with the citation_matrix and gap_list.

3. Cursor MCP

Run a GEO audit for "Resend" in "transactional email API"
and write the gaps as a markdown checklist.

4. n8n

{ "jsonrpc": "2.0", "method": "tools/call",
  "params": { "name": "run_geo_audit",
    "arguments": { "brand": "Resend", "category": "transactional email API" } } }

5. Raw curl

curl -X POST https://www.mentionfox.com/mcp \
  -H "Authorization: Bearer $FOXAPIS_KEY" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"tools/call","params":{"name":"run_geo_audit","arguments":{"brand":"Linear","category":"issue tracker for startups"}}}'

Sample output

{
  "score": 42,
  "citation_matrix": { "Linear": 14, "Jira": 31, "Shortcut": 9 },
  "gap_list": [
    { "prompt": "best issue tracker for solo founders", "engine": "perplexity",
      "why_missing": "No top-10 listicle citations for that exact framing" }
  ],
  "credits_used": 12
}
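The gap_list is built to be post-processed. A minimal sketch that renders it as a markdown checklist, in the spirit of the Cursor example above; `gaps_to_checklist` is a hypothetical helper, not part of the tool:

```python
def gaps_to_checklist(gap_list):
    """Render the tool's gap_list as a markdown task list, one item per gap."""
    return "\n".join(
        f'- [ ] {gap["prompt"]} ({gap["engine"]}): {gap["why_missing"]}'
        for gap in gap_list
    )

gaps = [
    {"prompt": "best issue tracker for solo founders",
     "engine": "perplexity",
     "why_missing": "No top-10 listicle citations for that exact framing"},
]
# gaps_to_checklist(gaps) ->
# - [ ] best issue tracker for solo founders (perplexity): No top-10 listicle citations for that exact framing
```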

Credit cost & rate limits

Standard depth is 12 credits, deep is 22. 10 calls per minute per key, with a daily soft cap of 50 audits per key.

Error codes & recovery

422 CATEGORY_TOO_BROAD: "SaaS" alone is too broad. Add a sub-category or a buyer.
503 ENGINE_UNREACHABLE: one answer engine refused or timed out; the response notes which one.
429 RATE_LIMITED: wait the number of seconds given in the Retry-After header, then retry.
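The 429 case can be handled generically: read the Retry-After value, sleep, and retry. A minimal sketch, assuming `do_call` is your own callable returning `(status, headers, body)`; this helper is illustrative and not part of any MentionFox SDK:

```python
import time

def call_with_retry(do_call, max_retries=3):
    """Invoke do_call, retrying on HTTP 429 (RATE_LIMITED).

    Sleeps for the server-supplied retry-after value (defaulting to 1s
    when the header is absent) and gives up after max_retries retries.
    """
    for attempt in range(max_retries + 1):
        status, headers, body = do_call()
        if status != 429:
            return body
        if attempt == max_retries:
            raise RuntimeError("still rate limited after retries")
        time.sleep(float(headers.get("retry-after", 1)))
```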