How to give AI agents real-world tools with a single API
You're building an AI agent that helps users with technical tasks. The agent needs to look up DNS records, validate emails, generate QR codes, and check SSL certificates. LLMs can reason about these tasks, but they can't make network calls or generate images. Your agent needs tools.
The typical path is painful. You wire up one library for DNS, another for email validation, a third for QR codes. Each has its own auth, its own response format, its own error handling. Your agent's tool-execution layer becomes a patchwork of API clients.
A better approach: give the agent access to a single API that covers all of these capabilities. One auth token. One response format. One rate limit to track. This post shows how to wire the botoi API (150+ developer tool endpoints) into three agent architectures: Claude tool use, OpenAI function calling, and MCP.
The tool-use pattern in 30 seconds
Every major LLM provider now supports tool use (also called function calling). The pattern is the same across all of them:
- You define a set of tools with names, descriptions, and input schemas.
- You send a user message to the LLM along with the tool definitions.
- The LLM decides which tool to call and with what arguments.
- Your code executes the tool call (an HTTP request, a database query, a file read).
- You send the tool result back to the LLM.
- The LLM uses the result to formulate its final answer.
The loop looks like this in pseudocode:
// The tool-use loop: LLM reasons, picks a tool, you execute it
while (true) {
  const response = await llm.chat(messages);
  if (response.stop_reason === "tool_use") {
    const toolCall = response.tool_calls[0];
    const result = await executeToolCall(toolCall);
    // Record both the assistant's tool request and the result
    messages.push({ role: "assistant", content: response.content });
    messages.push({ role: "tool", content: result });
  } else {
    return response.content; // Final answer
  }
}

The LLM never executes the tool itself. It produces structured output (tool name + arguments), and your code does the execution. This means the tools can be anything: a shell command, a database query, or an API call.
Why botoi's API maps well to the tool-use pattern
Each botoi endpoint is already shaped like a tool definition. Every endpoint takes a JSON input and returns a JSON output with a consistent structure. Here's what a DNS lookup tool definition looks like:
// Each botoi endpoint maps to a tool definition
// The OpenAPI spec at /openapi.json provides this automatically
{
  "name": "dns_lookup",
  "description": "Look up DNS records for a domain",
  "parameters": {
    "type": "object",
    "properties": {
      "domain": { "type": "string", "description": "Domain to query" },
      "type": { "type": "string", "enum": ["A", "AAAA", "MX", "TXT", "CNAME", "NS"] }
    },
    "required": ["domain"]
  }
}

Three things make this work well for agents:
- Clear input schemas. Every endpoint accepts a small, well-defined JSON body. LLMs are good at producing structured JSON when the schema is tight.
- Consistent output format. All endpoints return { success: true, data: { ... } } or { success: false, message: '...' }. Your agent's tool-result parser handles every endpoint the same way.
- OpenAPI spec for auto-discovery. The spec at api.botoi.com/openapi.json contains full schemas for all 150+ endpoints. You can programmatically generate tool definitions from it instead of writing them by hand.
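Because every endpoint shares that envelope, a single small helper can normalize any botoi response into a string the LLM can read. A sketch (the function name is mine, not part of the API):

```javascript
// Normalize a botoi response envelope into a tool-result string for the LLM.
// Works for every endpoint because they all share the same success/error shape.
function formatToolResult(response) {
  if (response && response.success) {
    return JSON.stringify(response.data);
  }
  // Surface the error message so the LLM can explain the failure or retry
  const message = response && response.message ? response.message : "unknown error";
  return `Error: ${message}`;
}
```

Feeding errors back as plain text (rather than throwing) lets the model recover, for example by correcting a malformed domain and calling the tool again.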
Architecture 1: Claude tool use with the Anthropic SDK
Claude's tool-use API lets you pass tool definitions alongside your messages. When Claude decides to call a tool, it returns a tool_use content block with the tool name and input. You execute the call and send the result back as a tool_result.
Here's a working agent that can look up DNS records, check SSL certificates, and validate emails using botoi:
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();
const BOTOI_KEY = process.env.BOTOI_API_KEY;

// Define botoi endpoints as Claude tools
const tools = [
  {
    name: "dns_lookup",
    description: "Look up DNS records (A, MX, TXT, etc.) for a domain",
    input_schema: {
      type: "object",
      properties: {
        domain: { type: "string", description: "Domain to query" },
        type: { type: "string", enum: ["A", "AAAA", "MX", "TXT", "CNAME", "NS"] },
      },
      required: ["domain"],
    },
  },
  {
    name: "ssl_check",
    description: "Check SSL certificate and security headers for a domain",
    input_schema: {
      type: "object",
      properties: {
        url: { type: "string", description: "Domain or URL to check" },
      },
      required: ["url"],
    },
  },
  {
    name: "email_validate",
    description: "Validate an email address (syntax, MX, disposable check)",
    input_schema: {
      type: "object",
      properties: {
        email: { type: "string", description: "Email address to validate" },
      },
      required: ["email"],
    },
  },
];

// Map tool names to botoi API endpoints
const toolEndpoints = {
  dns_lookup: "/v1/dns/lookup",
  ssl_check: "/v1/ssl",
  email_validate: "/v1/email/validate",
};

async function callBotoiTool(name, input) {
  const endpoint = toolEndpoints[name];
  const res = await fetch(`https://api.botoi.com${endpoint}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${BOTOI_KEY}`,
    },
    body: JSON.stringify(input),
  });
  return await res.json();
}
async function runAgent(userMessage) {
  const messages = [{ role: "user", content: userMessage }];
  while (true) {
    const response = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      tools,
      messages,
    });
    // If Claude wants to use tools, execute each one and feed the results back.
    // Claude can emit several tool_use blocks in one turn, and every one
    // needs a matching tool_result.
    if (response.stop_reason === "tool_use") {
      messages.push({ role: "assistant", content: response.content });
      const toolResults = [];
      for (const block of response.content) {
        if (block.type !== "tool_use") continue;
        const result = await callBotoiTool(block.name, block.input);
        toolResults.push({
          type: "tool_result",
          tool_use_id: block.id,
          content: JSON.stringify(result),
        });
      }
      messages.push({ role: "user", content: toolResults });
    } else {
      // Claude is done; return the final text
      return response.content
        .filter((b) => b.type === "text")
        .map((b) => b.text)
        .join("");
    }
  }
}

// Usage
const answer = await runAgent(
  "Check the DNS records and SSL certificate for stripe.com"
);
console.log(answer);

Ask this agent "Check the DNS records and SSL certificate for stripe.com" and Claude will make two tool calls in sequence, then synthesize the results into a readable summary. The agent handles multi-step reasoning automatically: Claude picks which tools to call, and in what order, based on the user's question.
Architecture 2: OpenAI function calling
OpenAI's function calling follows the same pattern with different field names. Tools are defined under a tools array with type: "function". The model returns tool_calls when it wants to execute a function.
One difference: GPT can request multiple tool calls in a single response. The code below executes every requested call and returns all results before the next model turn:
import OpenAI from "openai";

const openai = new OpenAI();
const BOTOI_KEY = process.env.BOTOI_API_KEY;

const tools = [
  {
    type: "function",
    function: {
      name: "dns_lookup",
      description: "Look up DNS records for a domain",
      parameters: {
        type: "object",
        properties: {
          domain: { type: "string" },
          type: { type: "string", enum: ["A", "AAAA", "MX", "TXT", "CNAME", "NS"] },
        },
        required: ["domain"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "qr_generate",
      description: "Generate a QR code SVG from text or a URL",
      parameters: {
        type: "object",
        properties: {
          text: { type: "string", description: "Content to encode" },
          size: { type: "number", description: "Size in pixels (100-1000)" },
        },
        required: ["text"],
      },
    },
  },
];

const toolEndpoints = {
  dns_lookup: "/v1/dns/lookup",
  qr_generate: "/v1/qr/generate",
};

async function callBotoiTool(name, args) {
  const endpoint = toolEndpoints[name];
  const res = await fetch(`https://api.botoi.com${endpoint}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${BOTOI_KEY}`,
    },
    body: JSON.stringify(args),
  });
  return await res.json();
}

async function runAgent(userMessage) {
  const messages = [{ role: "user", content: userMessage }];
  while (true) {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      tools,
      messages,
    });
    const choice = response.choices[0];
    if (choice.finish_reason === "tool_calls") {
      messages.push(choice.message);
      // Execute every requested call and attach each result by its call id
      for (const call of choice.message.tool_calls) {
        const args = JSON.parse(call.function.arguments);
        const result = await callBotoiTool(call.function.name, args);
        messages.push({
          role: "tool",
          tool_call_id: call.id,
          content: JSON.stringify(result),
        });
      }
    } else {
      return choice.message.content;
    }
  }
}

const answer = await runAgent(
  "Generate a QR code for https://botoi.com and look up the MX records"
);
console.log(answer);
GPT-4o can call both dns_lookup and qr_generate in parallel when the tasks are independent. The loop processes all tool calls before sending results back to the model.
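The loop above awaits each call in turn; when the requested calls are independent, you can also execute them concurrently with Promise.all. A sketch of that variation (the helper name is mine, and the executor is injected so it works with any tool backend, such as the callBotoiTool above):

```javascript
// Execute a batch of OpenAI tool calls concurrently and build the
// corresponding role:"tool" messages, preserving call order.
// `execute` is any async function (name, args) => result.
async function runToolCallsConcurrently(toolCalls, execute) {
  const results = await Promise.all(
    toolCalls.map((call) =>
      execute(call.function.name, JSON.parse(call.function.arguments))
    )
  );
  return toolCalls.map((call, i) => ({
    role: "tool",
    tool_call_id: call.id,
    content: JSON.stringify(results[i]),
  }));
}
```

In the agent loop you would replace the for-of block with `messages.push(...await runToolCallsConcurrently(choice.message.tool_calls, callBotoiTool))`. Concurrency helps most when the calls are network-bound and hit different endpoints.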
Architecture 3: MCP-based agents
The Model Context Protocol (MCP) is a different approach. Instead of defining tools in your code, the agent discovers tools from an MCP server at runtime. Botoi runs an MCP server at api.botoi.com/mcp with 49 curated tools.
This is the zero-code option. No tool definitions to write. No execution layer to build. The MCP client (Claude Desktop, Cursor, Claude Code, VS Code) connects to the server, discovers the tools, and handles execution.
// Claude Desktop, Cursor, or VS Code: add to your MCP config
{
  "mcpServers": {
    "botoi": {
      "type": "streamable-http",
      "url": "https://api.botoi.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

// Claude Code: one command
// claude mcp add botoi --transport streamable-http https://api.botoi.com/mcp
After adding this config, your AI assistant can call any of the 49 tools by name. Ask "look up the MX records for github.com" and the assistant calls the lookup_dns tool, passes the domain and record type, and returns structured JSON.
MCP is the right choice when you're using an AI assistant interactively (in an IDE or chat client). Function calling is the right choice when you're building a programmatic agent that runs autonomously.
Why a single API matters for agents
When you wire up tools for an agent, the tool-execution layer is the part that breaks in production. Each external API you add introduces its own failure mode. Consider what happens when your agent uses five different APIs:
- Five API keys to rotate and store securely.
- Five rate limits to track independently.
- Five response formats to normalize before feeding results back to the LLM.
- Five error-handling paths with different status codes and error shapes.
- Five billing dashboards to monitor.
With a single API, your tool-execution function reduces to one pattern:
// Shared helper: route any tool call to the right botoi endpoint
async function executeBotoiTool(name, input) {
  const ENDPOINTS = {
    dns_lookup: "/v1/dns/lookup",
    ssl_check: "/v1/ssl",
    email_validate: "/v1/email/validate",
    qr_generate: "/v1/qr/generate",
    ip_lookup: "/v1/ip/lookup",
    hash_generate: "/v1/hash",
    jwt_decode: "/v1/jwt/decode",
    pii_detect: "/v1/pii/detect",
    whois_lookup: "/v1/whois",
    token_count: "/v1/token/count",
  };
  const path = ENDPOINTS[name];
  if (!path) throw new Error("Unknown tool: " + name);
  const res = await fetch(`https://api.botoi.com${path}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.BOTOI_API_KEY}`,
    },
    body: JSON.stringify(input),
  });
  if (!res.ok) {
    const err = await res.json();
    return { error: err.message || "API call failed" };
  }
  return await res.json();
}
Every tool call goes through the same auth header, the same error shape, the same rate limit. Adding a new tool means adding one line to the ENDPOINTS map. No new dependencies, no new credentials.
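One rate limit also means one retry policy for the whole tool layer. Here is a hedged sketch of a wrapper that retries on HTTP 429 with exponential backoff; the helper name is mine, and the request function is injected so the policy is independent of fetch:

```javascript
// Retry a request on HTTP 429 with exponential backoff.
// `request` is any async function returning an object with a `status` field,
// e.g. () => fetch(url, options).
async function withRetry(request, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await request();
    if (res.status !== 429 || attempt >= retries) return res;
    // Back off: baseDelayMs, 2x, 4x, ... before the next attempt
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
}
```

Wrapping the single executeBotoiTool call site covers every tool at once, which is exactly the leverage a single API gives you.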
Picking the right tools for your agent
Don't register all 150+ endpoints as tools. LLMs perform worse when the tool list is long because they have to reason over more options. Pick the 5-15 tools your agent needs for its specific use case.
Some agent archetypes and the tools that fit them:
- Infrastructure monitoring agent: DNS lookup, SSL check, HTTP headers, site performance check, uptime check, IP lookup
- Email security auditor: SPF check, DMARC check, DKIM check, MX record check, email validation, disposable email check
- Data processing agent: JSON format, CSV to JSON, XML to JSON, Base64 encode/decode, HTML to Markdown, PII detection
- Developer assistant: JWT decode, hash generation, UUID generation, cron parse, regex test, token count
Start narrow. Add tools when your agent's users request capabilities it can't handle. Monitor which tools get called and remove the ones that never fire.
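To see which tools actually fire, you can wrap the executor with a counter. A minimal sketch (names are mine):

```javascript
// Wrap a tool executor to count how often each tool is called.
// Use the counts to prune tools that never fire.
function withUsageTracking(execute) {
  const counts = {};
  const tracked = async (name, input) => {
    counts[name] = (counts[name] || 0) + 1;
    return execute(name, input);
  };
  return { execute: tracked, counts };
}
```

Log or export the counts periodically; a tool that stays at zero for weeks is a candidate for removal from the agent's tool list.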
Key points
- LLMs reason; tools act. The tool-use pattern separates the LLM's planning from the execution of real-world actions. Your agent needs reliable tools to bridge that gap.
- One API, one execution path. A single API with consistent auth, response format, and error handling simplifies the tool-execution layer that every agent needs.
- Three architectures, same API. Claude tool use, OpenAI function calling, and MCP all work with botoi endpoints. Pick the one that matches your deployment model.
- Keep the tool list small. Register 5-15 tools per agent. Too many options degrade the LLM's tool-selection accuracy.
- MCP for interactive use, function calling for autonomous agents. MCP handles tool discovery and execution for you. Function calling gives you full control over the loop.
The API docs list every endpoint with request/response schemas. The OpenAPI spec lets you generate tool definitions programmatically. The MCP tool manifest shows the 49 curated tools available via MCP.
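Generating tool definitions from the spec can look like this sketch. It assumes each endpoint is a POST operation with an operationId and a JSON request-body schema; the exact layout of botoi's spec may differ, so treat this as a starting point:

```javascript
// Build Claude-style tool definitions from an OpenAPI spec object.
// Assumes each POST operation has an operationId, a summary, and a
// JSON request-body schema (common OpenAPI 3.x conventions).
function specToTools(spec, wantedOperationIds) {
  const tools = [];
  for (const operations of Object.values(spec.paths || {})) {
    const op = operations.post;
    if (!op || !wantedOperationIds.includes(op.operationId)) continue;
    const schema =
      op.requestBody?.content?.["application/json"]?.schema || { type: "object" };
    tools.push({
      name: op.operationId,
      description: op.summary || "",
      input_schema: schema,
    });
  }
  return tools;
}
```

Filtering by a wanted list keeps the earlier advice intact: generate definitions for the 5-15 operations your agent needs, not all 150+.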
Frequently asked questions
- Can I use the botoi API with any LLM, not only Claude and GPT?
- Yes. The API is a standard REST API that returns JSON. Any LLM framework that supports function calling or tool use (LangChain, LlamaIndex, Vercel AI SDK, CrewAI) can call botoi endpoints as tools. The OpenAPI spec at /openapi.json provides the schema definitions.
- How many tools can an agent access through botoi?
- The REST API has 150+ endpoints. The MCP server exposes 49 curated tools. For function calling with Claude or GPT, you pick which endpoints to register as tools based on your agent's use case.
- Does the API require authentication for agent use?
- Anonymous access works at 5 requests per minute and 100 requests per day, rate-limited by IP. For production agents, get an API key at botoi.com/api. The free tier requires no credit card.
- What is MCP and how does it differ from function calling?
- MCP (Model Context Protocol) is a standard for connecting AI assistants to external tools. The assistant discovers available tools from the MCP server and calls them by name. Function calling requires you to define tool schemas in your code. MCP handles discovery and invocation automatically.
- Can I self-host the botoi API for lower latency?
- The API runs on Cloudflare Workers at the edge, so requests route to the nearest data center globally. Response times are under 50ms for computation-only tools. Self-hosting is not available, but the edge deployment means latency is comparable to self-hosted solutions.