MCP joined the Linux Foundation: 5 enterprise readiness upgrades
In December 2025, Anthropic donated the Model Context Protocol to the new Agentic AI Foundation
under the Linux Foundation. The foundation launched with founding contributions from Anthropic
(MCP), Block (goose), and OpenAI (AGENTS.md). In April 2026, the first
full MCP Dev Summit ran in New York City with the 2026 roadmap published the same week.
The governance change removes one of the top reasons enterprise procurement teams stalled on approving MCP servers in 2025: vendor lock-in concerns. Day-to-day technical direction still lives with the maintainers through the SEP process. What changed is the approval path inside companies. Legal can now treat MCP the way they treat Kubernetes: a Linux Foundation project with a governing board, a public trademark policy, and a neutral home.
What did not change: enterprise review of your individual MCP server. Those teams still want the same things they want from any internal API. Five upgrades take a hobby-grade MCP server to one that passes a procurement review. Each one ships in under a day.
Upgrade 1: scoped credentials, not one admin key
Most reference MCP servers ship with a single API key that gates the whole server. One leaked key grants every tool. Procurement teams see that and stop the review there.
The replacement: scope every tool, tie scopes to a principal, and enforce scope at the tool handler. The scope check is one line; the principal comes from your identity provider:
// packages/mcp-server/auth.ts

// Before: one admin key gates every tool
const ADMIN_KEY = process.env.MCP_ADMIN_KEY;

export function authorize(req: Request): boolean {
  return req.headers.get("authorization") === `Bearer ${ADMIN_KEY}`;
}

// After: scopes gate individual tools and resources
export interface Principal {
  subject: string;      // "user_42", "agent_acme_review"
  scopes: Set<string>;  // "dns.read", "ssl.read", "email.write"
  tenantId: string;
}

export async function authorize(req: Request): Promise<Principal> {
  const token = req.headers.get("authorization")?.replace("Bearer ", "");
  if (!token) throw new UnauthorizedError("missing token");
  // Verify against Unkey, Auth0, or your identity provider
  const result = await unkey.keys.verifyKey({ key: token });
  if (!result.valid) throw new UnauthorizedError(result.code);
  return {
    subject: result.ownerId,
    scopes: new Set(result.permissions ?? []),
    tenantId: result.meta?.tenantId ?? "default",
  };
}

export function requireScope(p: Principal, scope: string) {
  if (!p.scopes.has(scope)) {
    throw new ForbiddenError(`missing scope: ${scope}`);
  }
}

Every tool declares the scope it needs, and the handler enforces it before doing anything else:
// When a tool is registered, declare the scope it needs
server.registerTool({
  name: "dns_a_record",
  description: "Look up A records for a domain",
  inputSchema: z.object({ domain: z.string() }),
  annotations: {
    readOnlyHint: true,
    destructiveHint: false,
    idempotentHint: true,
  },
  handler: async (input, ctx) => {
    requireScope(ctx.principal, "dns.read");
    // Upstream call uses the server's own credential, never the caller's token
    const res = await fetch("https://api.botoi.com/v1/dns/lookup", {
      method: "POST",
      headers: { "X-API-Key": process.env.BOTOI_API_KEY! },
      body: JSON.stringify({ domain: input.domain, type: "A" }),
    });
    return await res.json();
  },
});
Scopes map to a familiar pattern: dns.read, ssl.read,
shortener.write. Read scopes gate lookup tools; write scopes gate anything that
mutates state. Destructive tools get their own scope. A key issued to a low-trust agent gets
dns.read and nothing else; a key issued to an internal automation gets the full
*.write set.
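A detail the prose glosses over: if keys can carry wildcard grants like *.write, the scope check has to expand them. A minimal sketch of such a matcher (the wildcard convention here is an assumption, not part of any MCP or OAuth spec; adapt it to whatever your identity provider issues):

```typescript
// Hypothetical helper: match a required scope against a principal's grants,
// allowing wildcard grants like "*.write" (any service) or "dns.*" (any action).
export function hasScope(granted: Set<string>, required: string): boolean {
  if (granted.has(required)) return true; // exact match, e.g. "dns.read"
  const [service, action] = required.split(".");
  if (granted.has(`*.${action}`)) return true;   // "*.write" grants write everywhere
  if (granted.has(`${service}.*`)) return true;  // "dns.*" grants everything on dns
  return false;
}
```

Plug it into requireScope in place of the plain Set lookup if you issue wildcard keys; keep the exact-match version if you never do, since wildcards are easy to over-grant.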
Upgrade 2: structured audit logs for every tool call
SOC 2 CC7.2 asks who did what and when. ISO 27001 A.12.4 asks the same. HIPAA technical safeguards require an audit trail for access to protected data. An MCP server that accepts agent traffic is an API; procurement applies the same bar.
Structured audit logging with a durable sink and a real-time tap covers it:
import { randomUUID } from "node:crypto";

interface AuditEntry {
  id: string;
  timestamp: string;
  actor: string;        // principal.subject
  tenant: string;
  tool: string;
  input: unknown;
  outcome: "success" | "error";
  latencyMs: number;
  errorCode?: string;
  traceId?: string;     // links to your OTEL trace
  consentId?: string;   // ties back to approved user consent
}

export async function logToolCall(entry: Omit<AuditEntry, "id" | "timestamp">) {
  const full: AuditEntry = {
    id: randomUUID(),
    timestamp: new Date().toISOString(),
    ...entry,
  };
  // 1. Durable append-only log (object storage, Kafka, or D1)
  await appendAuditRow(full);
  // 2. Real-time tap for SIEM and incident response
  await publishAuditEvent(full);
}
// Wrap every tool handler
server.use(async (ctx, next) => {
  const start = Date.now();
  let outcome: "success" | "error" = "success";
  let errorCode: string | undefined;
  try {
    await next();
  } catch (err) {
    outcome = "error";
    errorCode = (err as { code?: string }).code ?? "unknown";
    throw err;
  } finally {
    await logToolCall({
      actor: ctx.principal.subject,
      tenant: ctx.principal.tenantId,
      tool: ctx.toolName,
      input: redactSecrets(ctx.toolInput),
      outcome,
      latencyMs: Date.now() - start,
      errorCode,
      traceId: ctx.traceId,
      consentId: ctx.consentId,
    });
  }
});

Two sinks, two purposes. The durable log (object storage, an append-only Kafka topic, or a D1 table) holds the evidence trail for audits and post-incident review. The real-time event bus drives SIEM detections and alerting. Redact secrets before you write; the audit is a compliance artifact, not a place to leak API keys.
Include the consentId when a tool call ran under explicit user consent (for
destructive operations, that is table stakes). It links the audit row back to the approval,
closing the loop on "was this action authorized?"
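The middleware above calls a redactSecrets helper before writing the input field. A minimal sketch follows; the list of secret-bearing field names is an assumption, so extend it to match your own tool schemas:

```typescript
// Hypothetical redactor: blank out common secret-bearing fields before an
// audit row is written. The key list is illustrative, not exhaustive.
const SECRET_KEYS = new Set(["apikey", "api_key", "token", "password", "secret"]);

export function redactSecrets(input: unknown): unknown {
  if (Array.isArray(input)) return input.map(redactSecrets);
  if (input !== null && typeof input === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(input as Record<string, unknown>)) {
      out[k] = SECRET_KEYS.has(k.toLowerCase()) ? "[REDACTED]" : redactSecrets(v);
    }
    return out;
  }
  return input; // primitives pass through unchanged
}
```

Match on key names rather than value patterns: it is deterministic, cheap, and never mangles legitimate payloads that happen to look like tokens.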
Upgrade 3: per-caller rate limits, not per-server
A server-wide rate limit does not protect you from one misbehaving agent. An agent in a retry loop consumes the whole quota in seconds and starves every other caller. Per-principal limits contain the blast radius.
import type { Principal } from "./auth";

interface TokenBucket {
  tokens: number;
  lastRefill: number;
}

const buckets = new Map<string, TokenBucket>();

function key(p: Principal, tool: string): string {
  return `${p.tenantId}:${p.subject}:${tool}`;
}

export function enforceRateLimit(p: Principal, tool: string) {
  // Burst 20 calls, refill 2/sec per tool per principal
  const capacity = 20;
  const refillRate = 2;
  const id = key(p, tool);
  let bucket = buckets.get(id);
  const now = Date.now();
  if (!bucket) {
    bucket = { tokens: capacity, lastRefill: now };
    buckets.set(id, bucket);
  }
  const elapsed = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(capacity, bucket.tokens + elapsed * refillRate);
  bucket.lastRefill = now;
  if (bucket.tokens < 1) {
    throw new RateLimitError({
      retryAfterSeconds: Math.ceil((1 - bucket.tokens) / refillRate),
    });
  }
  bucket.tokens -= 1;
}
A token bucket per principal per tool gives you two knobs: burst capacity and sustained rate.
Agents are bursty; 20 calls in two seconds, then idle. Humans are steady. Both fit a token bucket
with different parameters. When the bucket drains, throw a RateLimitError with a
retryAfterSeconds hint so the agent can back off instead of retrying in a tight loop.
For production traffic, back the buckets with Redis, Unkey, or a Cloudflare KV counter. The in-memory map above is fine for single-process servers; anything clustered needs a shared store so one pod's usage counts against the same quota as another pod's.
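To make that migration easy, the refill arithmetic can be factored into a pure function, so the same logic runs against the in-memory map today and a shared store later. A sketch under that assumption (not a concrete Redis integration; a real one would apply this step atomically, e.g. in a Lua script):

```typescript
// Pure token-bucket step: given stored state and the current time, return the
// new state and whether this call is allowed. A shared store (Redis, KV) would
// persist { tokens, lastRefill } per key and apply this function atomically.
export interface BucketState {
  tokens: number;
  lastRefill: number;
}

export function takeToken(
  state: BucketState | undefined,
  nowMs: number,
  capacity = 20,
  refillPerSec = 2,
): { state: BucketState; allowed: boolean } {
  const prev = state ?? { tokens: capacity, lastRefill: nowMs };
  const elapsed = (nowMs - prev.lastRefill) / 1000;
  const tokens = Math.min(capacity, prev.tokens + elapsed * refillPerSec);
  if (tokens < 1) {
    return { state: { tokens, lastRefill: nowMs }, allowed: false };
  }
  return { state: { tokens: tokens - 1, lastRefill: nowMs }, allowed: true };
}
```

Because the function takes time as a parameter, it is also trivially unit-testable: feed it fixed timestamps and assert on the burst and refill behavior.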
Upgrade 4: expose long-running jobs with a two-tool pattern
The 2026 MCP roadmap calls out transport scalability including asynchronous task support. Until your transport ships that natively, do not make the agent wait 45 seconds on a single tool call. Ship two tools: one starts the job and returns an ID, the other polls status.
// Tool 1: start the job, return an ID fast
server.registerTool({
  name: "start_security_audit",
  description: "Run a full security audit across SSL, DNS, SPF, DMARC, and headers",
  inputSchema: z.object({ domain: z.string() }),
  annotations: { readOnlyHint: true, destructiveHint: false },
  handler: async (input, ctx) => {
    const jobId = await jobs.create({
      tenantId: ctx.principal.tenantId,
      kind: "security_audit",
      input,
      createdBy: ctx.principal.subject,
    });
    return {
      jobId,
      status: "running",
      pollTool: "get_audit_status",
      estimatedSeconds: 45,
    };
  },
});

// Tool 2: poll status by ID
server.registerTool({
  name: "get_audit_status",
  description: "Check the status of a security audit job",
  inputSchema: z.object({ jobId: z.string().uuid() }),
  annotations: { readOnlyHint: true, idempotentHint: true },
  handler: async ({ jobId }, ctx) => {
    const job = await jobs.get(jobId, { tenantId: ctx.principal.tenantId });
    return {
      status: job.status,
      result: job.status === "complete" ? job.result : null,
      error: job.status === "failed" ? job.error : null,
    };
  },
});
The agent kicks off the audit, gets an ID, and continues other work. Every 5 seconds (or on the
next model turn) it calls get_audit_status with the ID. When the status flips to
complete, the result payload is attached. This pattern survives network blips, fits
a reconnecting transport, and works today across every MCP client.
For jobs that can produce partial results (streaming audits, incremental scans), expose a third tool that returns whatever is done so far. The agent then decides whether to wait or act on the partial data.
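The partial-result variant needs only a small extension to the job store: an append-only list of chunks alongside the status. A sketch of such a store (in-memory and hypothetical; a production version would be a durable table keyed by tenant and job ID):

```typescript
// Hypothetical in-memory job store that accumulates partial results,
// e.g. per-check findings from a streaming security audit.
interface Job {
  status: "running" | "complete" | "failed";
  partials: unknown[]; // whatever the job has produced so far
}

const store = new Map<string, Job>();

export function createJob(id: string): void {
  store.set(id, { status: "running", partials: [] });
}

export function appendPartial(id: string, chunk: unknown): void {
  store.get(id)!.partials.push(chunk);
}

// Backs the third tool: return whatever is done so far, plus status
export function getPartials(id: string): { status: string; partials: unknown[] } {
  const job = store.get(id)!;
  return { status: job.status, partials: job.partials };
}
```

The third tool simply wraps getPartials; the agent reads the status field to decide whether to keep polling or act on the chunks already in hand.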
Upgrade 5: a gateway or proxy in front of the server
Some responsibilities do not belong in your MCP server. Tenant-aware TLS, OIDC integration with SSO, response redaction, and fleet-wide policy enforcement are gateway concerns. Running them in the tool handler makes the server slower to change and harder to certify.
A config-driven gateway keeps the MCP server focused on tool execution:
# gateway.yaml: declarative policy for your MCP server
listeners:
  - address: 0.0.0.0:8443
    tls:
      cert_file: /etc/certs/mcp.crt
      key_file: /etc/certs/mcp.key
auth:
  providers:
    - name: oidc
      issuer: https://auth.acme-corp.com
      audience: mcp.internal
routes:
  - path: /mcp
    backend:
      host: mcp-server.internal
      port: 3000
    policies:
      - type: require_scope
        per_tool:
          dns_a_record: dns.read
          ssl_certificate: ssl.read
          start_security_audit: audit.write
      - type: rate_limit
        per_tenant:
          burst: 200
          refill_per_second: 20
      - type: audit_log
        sink: kafka://audit-bus.internal/mcp-tool-calls
      - type: redact_response
        patterns:
          - $.data.private_ip
          - $.data.internal_hostname

The gateway terminates TLS, verifies OIDC tokens against your SSO, enforces per-tool scope requirements, applies rate limits per tenant, writes audit events to Kafka, and redacts response fields before they reach the agent. The MCP server behind it stays simple; it sees a verified principal and a clean request. Projects like Solo.io's Agent Gateway and the MCP reference gateway ship most of this off the shelf.
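To make the redact_response policy concrete, here is a sketch of what it does to a response body. This handles only plain dotted paths like $.data.private_ip; real gateways support fuller JSONPath, and the function name is an assumption for illustration:

```typescript
// Sketch of the redact_response policy: delete fields named by simple
// "$.a.b" dotted paths before the body reaches the agent.
export function redactPaths(body: Record<string, unknown>, patterns: string[]): void {
  for (const pattern of patterns) {
    const parts = pattern.replace(/^\$\./, "").split(".");
    // Walk to the parent object of the target field
    let node: any = body;
    for (const part of parts.slice(0, -1)) {
      node = node?.[part];
    }
    // Delete the leaf field if the path resolved; missing paths are a no-op
    if (node && typeof node === "object") {
      delete node[parts[parts.length - 1]];
    }
  }
}
```

Redacting at the gateway rather than in the tool handler means one policy file covers every server behind the route, and the redaction rules can change without redeploying the MCP server.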
Bonus: honest tool annotations
MCP tool annotations (readOnlyHint, destructiveHint,
idempotentHint, openWorldHint) are advisory, but enterprise clients
read them. An agent configured to require human confirmation on destructive tools will prompt the
operator before calling anything marked destructiveHint: true. Get the annotations
right:
// Good annotations make destructive operations opt-in
server.registerTool({
  name: "delete_short_url",
  description: "Delete a short URL mapping by slug",
  inputSchema: z.object({ slug: z.string() }),
  annotations: {
    readOnlyHint: false,
    destructiveHint: true,  // agents should get explicit consent first
    idempotentHint: true,
    openWorldHint: false,   // only touches this server's state
  },
  handler: async ({ slug }, ctx) => {
    requireScope(ctx.principal, "shortener.write");
    await kv.delete(`short:${slug}`);
    return { deleted: true, slug };
  },
});
Rules of thumb: anything that mutates remote state is not readOnlyHint: true;
anything that deletes, cancels, or charges is destructiveHint: true; a lookup that
always returns the same answer for the same inputs is idempotentHint: true. Getting
these wrong is how agents turn into a liability; getting them right is cheap.
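Those rules of thumb are mechanical enough to check at registration time. A hypothetical lint (not part of any MCP SDK) that flags the obvious contradictions:

```typescript
// Hypothetical registration-time lint: flag annotation combinations that
// contradict the rules of thumb above.
interface Annotations {
  readOnlyHint?: boolean;
  destructiveHint?: boolean;
  idempotentHint?: boolean;
}

export function lintAnnotations(name: string, a: Annotations): string[] {
  const warnings: string[] = [];
  // A tool cannot be both read-only and destructive
  if (a.readOnlyHint && a.destructiveHint) {
    warnings.push(`${name}: readOnlyHint and destructiveHint are contradictory`);
  }
  // Leaving destructiveHint unset forces clients to assume the worst
  if (a.destructiveHint === undefined) {
    warnings.push(`${name}: destructiveHint unset; set it explicitly`);
  }
  return warnings;
}
```

Run it in CI over every registered tool and fail the build on warnings; it turns annotation hygiene from a review comment into an enforced invariant.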
The full enterprise-readiness checklist
| Upgrade | What procurement asks | Where it lives |
|---|---|---|
| Scoped credentials | "Can a leaked key access every tool?" | Auth middleware + per-tool scope check |
| Structured audit log | "Can you prove who called what and when?" | Wrapper middleware + durable sink |
| Per-caller rate limit | "Does one bad agent starve everyone else?" | Token bucket per principal per tool |
| Async job pattern | "What happens when a tool takes 45 seconds?" | Start tool + status tool + job store |
| Gateway in front | "How do you enforce fleet-wide policy?" | OIDC + scope + redact gateway |
| Honest annotations | "Which tools are destructive?" | Tool registration metadata |
Key takeaways
- Governance removed a procurement blocker. Neutral Linux Foundation stewardship means legal can treat MCP like any other Foundation project: fine to adopt, with standard terms.
- Scopes replace admin keys. Tie every tool to a scope, every principal to a scope set, and enforce at the handler. One-line check, big posture upgrade.
- Audit logs are non-negotiable. Durable sink plus real-time tap, with secrets redacted, consent IDs included.
- Per-principal rate limits protect tenants from each other. Token bucket per principal per tool; back it with a shared store in production.
- Expose async work as a two-tool pattern. Start + status beats blocking the agent on a 45-second call.
- A gateway pulls policy out of the server. OIDC, redaction, and fleet policies live in declarative config, not tool handlers.
Botoi's MCP server exposes 49 curated tools over Streamable HTTP at
api.botoi.com/mcp. It forwards API keys in headers, applies per-key rate limits, and
carries the tool annotations downstream clients read. See the
MCP setup page for
Claude Desktop, Claude Code, Cursor, VS Code, and Windsurf configs, or read the
API docs to
see how the five patterns above map onto a shipped server.
Frequently asked questions
- What is the Agentic AI Foundation and why does it matter for MCP?
- The Linux Foundation announced the Agentic AI Foundation (AAIF) with Anthropic's Model Context Protocol, Block's goose, and OpenAI's AGENTS.md as founding project contributions. AAIF puts MCP under vendor-neutral governance, which removes a common enterprise procurement blocker: the dependence on a single vendor for the protocol. Day-to-day technical direction still lives with the maintainers through the SEP process; governance matures without changing who ships the code.
- Does my MCP server need to be rewritten to pass enterprise review?
- No. Most servers need five additions, not a rewrite: scoped credentials instead of a single admin key, structured audit logging, per-caller rate limits, a gateway or proxy pattern for policy enforcement, and explicit tool annotations that make destructive operations opt-in. All five slot into an existing server that already speaks Streamable HTTP transport.
- Why do enterprises ask for audit logs on MCP tool calls?
- An AI agent running with an MCP server has the same effective permissions as any API caller. Audit trails prove which user, on which device, with which consent, invoked a destructive action and when. Without them, incident response cannot answer 'what changed' after a prompt injection or misbehaving agent. Most compliance frameworks (SOC 2 CC7, ISO 27001 A.12.4, HIPAA technical safeguards) already require this level of logging for human API access; enterprise review extends the same bar to agent access.
- How do I handle long-running jobs over MCP without blocking the agent?
- The MCP roadmap for 2026 lists transport scalability including asynchronous task support. Until your transport ships that natively, return a job ID immediately from the tool call, expose a second tool that polls status by ID, and keep the initial response under 30 seconds. The agent kicks off the job, continues other work, and checks back. This is the same pattern enterprise REST APIs have used for a decade; it maps cleanly to MCP tools.
- Should I put a gateway in front of my MCP server?
- Yes, once you add a second team or a second customer. A gateway handles three things your server should not: tenant-aware authentication, per-scope rate limiting, and request/response inspection for policy enforcement. You keep the MCP server focused on tool execution, and the gateway carries the compliance weight. Projects like Solo.io's Agent Gateway and the MCP reference gateway ship this pattern out of the box.