compare_subjects
Rank up to 50 subjects on any criteria.
When an agent has to choose — between candidates, between vendors, between potential acquisition targets, between three Series-A startups in the same category — what it actually needs is a side-by-side matrix. compare_subjects takes a list of up to 50 subjects and a free-text criteria description, runs the same scoring pass over each one, and returns a deduplicated ranked list with the underlying matrix attached. One call. Ten credits. The agent gets a defensible answer instead of three half-formed ones.
When to call this tool
The tool is purpose-built for one of the most common agent shapes: "given these N candidates, which is best, and why?" That shape shows up in recruiting (rank 12 finalists), in vendor selection (rank 8 SaaS tools on the same criteria), in journalism (rank 20 climate-tech founders by recent activity), in venture (rank 15 startups in a category by signal density). Without this tool, an agent does it the slow way: one read per subject, then a synthesis pass, then a justification pass. With it, the agent gets all three in one shot, with consistent scoring across subjects.
The criteria string is interpreted by the tool: write the criteria the way you'd brief a junior analyst, and the tool maps it to the right scoring axes. "Senior engineering leaders with public-speaking presence and shipped infra at scale" becomes three measurable axes; "growth-stage SaaS in the developer tooling space with founder-led marketing" becomes three different ones.
Input schema
{
  "subjects": ["string // 2-50 names, handles, or domains"],
  "subject_kind": "enum: 'person' | 'company' | 'product' | 'auto'",
  "criteria": "string // free-text, mapped to scoring axes",
  "weight_hints": "object // optional, per-axis weights 0-1",
  "include_matrix": "boolean // default true"
}
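Assembled into a JSON-RPC tools/call envelope (the transport shown in the n8n and curl examples below), a request might be built like this. This is a sketch: the helper name is hypothetical, and only the argument names come from the schema above.

```python
import json

def build_compare_request(subjects, criteria, subject_kind="auto",
                          weight_hints=None, include_matrix=True, request_id=1):
    """Build a JSON-RPC 2.0 tools/call payload for compare_subjects.

    Hypothetical client helper; argument names follow the input schema.
    """
    if not 2 <= len(subjects) <= 50:
        # The tool rejects out-of-range lists with 422 LIST_TOO_LARGE.
        raise ValueError("subjects must contain 2-50 entries")
    args = {
        "subjects": subjects,
        "subject_kind": subject_kind,
        "criteria": criteria,
        "include_matrix": include_matrix,
    }
    if weight_hints:
        args["weight_hints"] = weight_hints
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "compare_subjects", "arguments": args},
    }

payload = build_compare_request(
    ["Datadog", "Grafana", "New Relic"],
    "open-source friendliness, total cost of ownership at 50 hosts",
)
body = json.dumps(payload)  # ready to POST to the MCP endpoint
```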
Output schema
{
  "axes": [{ "name", "weight", "description" }],
  "matrix": [{ "subject", "axis_scores": { "axis_name": 0-100 } }],
  "ranked_list": [{ "rank", "subject", "composite_score", "why" }],
  "sources": ["url"],
  "credits_used": 10
}
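The relationship between matrix and ranked_list can be checked client-side. A sketch, assuming composite_score is the weight-weighted average of the axis scores — the docs don't state the exact aggregation formula, so treat this as an approximation:

```python
def composite_scores(axes, matrix):
    """Recompute composite scores from the per-axis matrix.

    Assumption: composite = sum(weight * score) / sum(weights), rounded.
    The tool's actual aggregation may differ.
    """
    weights = {a["name"]: a["weight"] for a in axes}
    total = sum(weights.values()) or 1.0
    out = {}
    for row in matrix:
        s = sum(weights[name] * score
                for name, score in row["axis_scores"].items())
        out[row["subject"]] = round(s / total)
    return out

axes = [{"name": "developer experience", "weight": 0.5},
        {"name": "asia coverage", "weight": 0.5}]
matrix = [{"subject": "Stripe",
           "axis_scores": {"developer experience": 92, "asia coverage": 82}}]
print(composite_scores(axes, matrix))  # {'Stripe': 87}
```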
Example invocations
1. Claude Desktop
// User prompt
Compare Anthropic, OpenAI, and Google DeepMind as employers
for a senior systems engineer who values shipping over PR.
2. ChatGPT custom GPT
// GPT instructions
When the user gives you a list of options and asks "which is
best", call compare_subjects with their list and their stated
criteria. Always show the ranked_list with reasons before any
recommendation.
3. Cursor MCP
// In Cursor
Use compare_subjects to rank these 12 OSS metrics tools for
a small SaaS team. Output a markdown table.
4. n8n (HTTP Request fallback)
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compare_subjects",
    "arguments": {
      "subjects": ["Datadog", "Grafana", "New Relic"],
      "criteria": "open-source friendliness, total cost of ownership at 50 hosts"
    }
  }
}
5. Raw curl
curl -X POST https://www.mentionfox.com/mcp \
  -H "Authorization: Bearer $FOXAPIS_KEY" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"compare_subjects","arguments":{"subjects":["Stripe","Adyen","Checkout.com"],"criteria":"developer experience and Asia coverage"}}}'
Sample output (real, redacted)
{
  "axes": [
    { "name": "developer experience", "weight": 0.5 },
    { "name": "asia coverage", "weight": 0.5 }
  ],
  "ranked_list": [
    { "rank": 1, "subject": "Stripe", "composite_score": 87, "why": "Strongest documentation and SDK breadth. Solid Asia presence in JP/SG/IN." },
    { "rank": 2, "subject": "Adyen", "composite_score": 79 },
    { "rank": 3, "subject": "Checkout.com", "composite_score": 71 }
  ],
  "credits_used": 10
}
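The Cursor example above asks for a markdown table; a small renderer over ranked_list gets you there. A sketch (hypothetical helper, field names from the output schema; "why" can be absent for lower-ranked entries, as in the sample output):

```python
def ranked_list_to_markdown(ranked_list):
    """Render a compare_subjects ranked_list as a markdown table."""
    lines = ["| Rank | Subject | Score | Why |",
             "| --- | --- | --- | --- |"]
    for item in ranked_list:
        lines.append("| {} | {} | {} | {} |".format(
            item["rank"], item["subject"],
            item["composite_score"], item.get("why", "")))
    return "\n".join(lines)

ranked = [{"rank": 1, "subject": "Stripe", "composite_score": 87,
           "why": "Strongest documentation and SDK breadth."},
          {"rank": 2, "subject": "Adyen", "composite_score": 79}]
print(ranked_list_to_markdown(ranked))
```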
Credit cost & rate limits
Flat 10 credits for any list size from 2 to 50 subjects. Rate limit is 20 calls per minute per key. The list size is hard-capped at 50; pass more and the call returns 422 LIST_TOO_LARGE. For deeper per-subject reads, chain compare_subjects with vet_person on the top 3 ranked items.
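For lists above the 50-subject cap, the recovery path is to batch and merge client-side. A sketch, with one caveat worth keeping in a comment: scores from separate calls aren't guaranteed to be on the same scale, since each call maps the criteria to axes independently.

```python
def chunk_subjects(subjects, max_size=50):
    """Split an oversized subject list into batches the tool will accept."""
    return [subjects[i:i + max_size]
            for i in range(0, len(subjects), max_size)]

def merge_matrices(results):
    """Merge per-batch matrices into one list, first occurrence wins.

    Caveat: each call derives its axes independently, so cross-batch
    scores are not guaranteed to be directly comparable.
    """
    merged = {}
    for result in results:
        for row in result["matrix"]:
            merged.setdefault(row["subject"], row)
    return list(merged.values())

batches = chunk_subjects(["company-%d" % i for i in range(120)])
print([len(b) for b in batches])  # [50, 50, 20]
```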
Error codes & recovery
422 LIST_TOO_LARGE: More than 50 subjects. Split into batches and merge the matrices client-side.
422 CRITERIA_VAGUE: The criteria string couldn't be mapped to scoring axes. Add a concrete attribute or two.
409 KIND_MISMATCH: Subject list mixes kinds (e.g. people and companies). Set subject_kind explicitly or split.
429 RATE_LIMITED: Back off using x-foxapis-retry-after.
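A minimal recovery loop for 429s might look like this. Note the assumption: this sketch treats x-foxapis-retry-after as a delay in seconds, which the docs don't confirm — check the header on a real response before relying on it.

```python
import time

def retry_delay(headers, default=3.0):
    """Read a backoff delay from x-foxapis-retry-after.

    Assumes the header carries a number of seconds; falls back to a
    default when it is missing or unparseable.
    """
    value = headers.get("x-foxapis-retry-after")
    try:
        return float(value)
    except (TypeError, ValueError):
        return default

def call_with_backoff(do_call, max_attempts=3):
    """Retry a callable returning (status, headers, body) on HTTP 429."""
    for _ in range(max_attempts):
        status, headers, body = do_call()
        if status != 429:
            return body
        time.sleep(retry_delay(headers))
    raise RuntimeError("rate limited after %d attempts" % max_attempts)
```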