REST vs GraphQL vs gRPC: a decision framework for 2026
Your team starts a new service. Someone opens a Slack thread: "Should we use GraphQL?" Someone else replies with a link to a gRPC benchmark. The thread splits into three camps. Two hours later, no decision.
The problem is not lack of information. The problem is lack of criteria. REST, GraphQL, and gRPC each solve a different shape of problem. Pick the wrong one and you pay the tax on every request for years. Pick the right one and the architecture fades into the background.
This guide gives you a concrete decision framework, not "it depends." Each protocol gets a specific use case where it wins, a code example you can run, and a comparison table you can drop into a design doc.
The 30-second framework
Start with three questions:
- Who calls this API? External developers, your own frontend clients, or internal services?
- How many data shapes does the client need? One fixed shape, or many variations?
- What matters more: cacheability, query flexibility, or raw throughput?
The answers map directly to a protocol:
- REST wins when the audience is external, the data shape is fixed, and caching matters.
- GraphQL wins when multiple clients need different slices of the same data graph.
- gRPC wins when internal services talk to each other and throughput matters more than human readability.
Here is the same logic as code:
```javascript
function pickProtocol(context) {
  // Public API consumed by third-party developers?
  if (context.audience === "external") return "REST";
  // Clients need flexible, nested queries?
  if (context.queryComplexity === "high") return "GraphQL";
  // Internal service-to-service with streaming needs?
  if (context.environment === "internal" && context.needsStreaming) return "gRPC";
  // Internal but simple request/response?
  if (context.environment === "internal") return "gRPC";
  // Default: REST
  return "REST";
}
```

REST: the universal default
REST maps operations to HTTP verbs and resources to URLs. Every programming language has an HTTP client.
Every CDN understands cache headers. Every developer has used curl.
REST is the right choice for public APIs where you control the data shape and clients expect stable, documented endpoints. botoi's 150+ API endpoints all use REST. Here is a DNS lookup:
```shell
curl -X POST https://api.botoi.com/v1/dns/lookup \
  -H "Content-Type: application/json" \
  -d '{"domain": "github.com", "type": "A"}'
```

Response:

```json
{
  "success": true,
  "data": {
    "domain": "github.com",
    "type": "A",
    "records": [
      { "value": "140.82.121.3", "ttl": 60 },
      { "value": "140.82.121.4", "ttl": 60 }
    ]
  }
}
```
The request is a single POST. The response is a flat JSON object. You can test it with `curl`, pipe it through `jq`, or call it from any language with `fetch`. No schema file to compile, no query language to learn, no code generation step.
```javascript
const res = await fetch("https://api.botoi.com/v1/dns/lookup", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ domain: "github.com", type: "A" }),
});
const data = await res.json();
console.log(data.data.records);
```

Where REST shines
- Public developer APIs. Third-party developers expect REST. The onboarding cost is zero.
- Cacheable resources. HTTP caching works at every layer: browser, CDN, reverse proxy. A `GET /users/123` response with proper cache headers costs nothing on repeat requests.
- Webhook integrations. Webhooks are HTTP POST requests. REST fits the mental model.
- Simple CRUD operations. When each endpoint does one thing with one input shape and one output shape, REST adds no overhead.
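The caching bullet is worth making concrete. Below is a minimal freshness check in the spirit of HTTP's max-age model; the header names are standard, but the helper itself is an illustrative sketch, not a full RFC 9111 implementation:

```javascript
// Decide whether a cached response can be reused without revalidation.
// Assumes lowercase header names, as Node and fetch's Headers expose them.
function isFresh(headers, now = Date.now()) {
  const cacheControl = headers["cache-control"] || "";
  const match = cacheControl.match(/max-age=(\d+)/);
  if (!match) return false; // no freshness lifetime: revalidate
  const maxAgeMs = Number(match[1]) * 1000;
  const servedAt = new Date(headers["date"]).getTime();
  return now - servedAt < maxAgeMs;
}

// A response served 30 seconds ago with max-age=60 is still fresh:
// the CDN answers the repeat GET without touching the origin.
isFresh({
  "cache-control": "public, max-age=60",
  "date": new Date(Date.now() - 30_000).toUTCString(),
}); // true
```

Every intermediary between the client and your server can run this check. A POST to a GraphQL endpoint or a gRPC call gives up this layer of the stack by default.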
Where REST falls short
- Over-fetching. A mobile app that needs 3 fields from a user profile still downloads the full 40-field object.
- Under-fetching. A dashboard that shows a user, their team, and their recent activity makes 3 sequential HTTP calls. Latency adds up.
- No built-in schema evolution. You version URLs (`/v1/`, `/v2/`) or add fields and hope clients ignore unknown keys.
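The under-fetching tax is easiest to see in code. Here is a sketch with hypothetical endpoint paths; the HTTP client is injected as a parameter so the round trips can be counted:

```javascript
// Three resources, three round trips. The /users, /teams, and /activity
// endpoints are hypothetical stand-ins for any REST API shaped this way.
async function loadDashboard(fetchJson, userId) {
  const user = await fetchJson(`/users/${userId}`);
  // The team id is only known after the first response arrives,
  // so this request cannot start any earlier.
  const team = await fetchJson(`/teams/${user.teamId}`);
  const activity = await fetchJson(`/users/${userId}/activity`);
  return { user, team, activity };
}
```

The activity call could be fired in parallel with the team call, but the user-then-team dependency is structural: no client cleverness removes that round trip. A GraphQL query expresses all three in one request.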
GraphQL: client-driven queries
GraphQL lets the client specify exactly which fields it needs in a single request. The server exposes a typed schema. The client writes a query against that schema and gets back a response shaped to match.
GitHub's public API demonstrates this well. One query fetches your username and top 3 repositories with star counts and primary language. In REST, this would require at least 2 calls.
```shell
# GitHub GraphQL API
curl -X POST https://api.github.com/graphql \
  -H "Authorization: bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "{ viewer { login repositories(first: 3, orderBy: { field: STARGAZERS, direction: DESC }) { nodes { name stargazerCount primaryLanguage { name } } } } }"
  }'
```

Response:

```json
{
  "data": {
    "viewer": {
      "login": "octocat",
      "repositories": {
        "nodes": [
          { "name": "hello-world", "stargazerCount": 2841, "primaryLanguage": { "name": "Ruby" } },
          { "name": "git-consortium", "stargazerCount": 1204, "primaryLanguage": { "name": "Go" } },
          { "name": "linguist", "stargazerCount": 987, "primaryLanguage": { "name": "Python" } }
        ]
      }
    }
  }
}
```
The client asked for `name`, `stargazerCount`, and `primaryLanguage`. The server returned exactly those fields. No extra data transferred. No second request.
Where GraphQL shines
- Mobile apps. Bandwidth is limited. Payload size matters. GraphQL eliminates over-fetching on every screen.
- Dashboards and aggregation views. A single query can pull data from users, orders, and inventory in one round trip.
- Rapid frontend iteration. Frontend teams change their queries without waiting for backend teams to build new endpoints.
- Strong typing. The schema is the contract. Code generation tools like GraphQL Code Generator produce TypeScript types from it.
Where GraphQL falls short
- Caching. Every query is a POST to `/graphql`. HTTP caching at the CDN or browser level does not work without a persisted-query layer or GET-based queries.
- Security surface. Clients can write expensive queries that join deeply nested data. You need query cost analysis and depth limiting to prevent abuse.
- Learning curve. Developers need to learn the query language, schema design, resolvers, and DataLoader patterns. The ramp-up time is higher than REST.
- N+1 queries. Naive resolver patterns trigger one database query per item in a list. DataLoader batching fixes this, but you must build it yourself.
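The DataLoader pattern is small enough to sketch. This is not the dataloader package's API, just the core idea it implements: queue every key requested in the same tick, then resolve the whole batch with a single backend call.

```javascript
// Minimal DataLoader-style batcher. load() returns a promise per key;
// all keys requested synchronously in one tick share one batchFn call.
function createBatchLoader(batchFn) {
  let queue = [];
  return function load(key) {
    return new Promise((resolve, reject) => {
      queue.push({ key, resolve, reject });
      if (queue.length > 1) return; // a flush is already scheduled
      queueMicrotask(async () => {
        const batch = queue;
        queue = [];
        try {
          // One round trip for N keys instead of N round trips.
          const results = await batchFn(batch.map((item) => item.key));
          batch.forEach((item, i) => item.resolve(results[i]));
        } catch (err) {
          batch.forEach((item) => item.reject(err));
        }
      });
    });
  };
}

// Hypothetical resolver scenario: a list of 3 posts, each needing its author.
let dbCalls = 0;
const loadUser = createBatchLoader(async (ids) => {
  dbCalls += 1; // without batching this would run once per author
  return ids.map((id) => ({ id, name: `user-${id}` }));
});
```

In production you would reach for the dataloader package itself, which layers per-request caching and richer error handling on top of this idea.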
gRPC: internal speed
gRPC uses Protocol Buffers for serialization and HTTP/2 for transport. You define your service contract in a `.proto` file, generate client and server code, and get type-safe RPC calls with binary payloads.
```protobuf
syntax = "proto3";

package payment;

service PaymentService {
  rpc ChargeCard (ChargeRequest) returns (ChargeResponse);
  rpc RefundCharge (RefundRequest) returns (RefundResponse);
  rpc StreamTransactions (TransactionFilter) returns (stream Transaction);
}

message ChargeRequest {
  string customer_id = 1;
  int64 amount_cents = 2;
  string currency = 3;
  string idempotency_key = 4;
}

message ChargeResponse {
  string charge_id = 1;
  string status = 2;
  int64 created_at = 3;
}
```
From this definition, `protoc` generates client stubs and server interfaces in Go, Java, Python, Rust, C++, or a dozen other languages. The generated code handles serialization, deserialization, and HTTP/2 framing.
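Part of the payload difference is visible without any gRPC tooling. Protobuf encodes integers as base-128 varints; the function below implements that wire encoding and compares it against the JSON spelling of the `amount_cents` field from the message above:

```javascript
// Encode a non-negative integer as a protobuf base-128 varint:
// 7 bits of payload per byte, high bit set on every byte but the last.
function encodeVarint(n) {
  const bytes = [];
  while (n > 127) {
    bytes.push((n & 0x7f) | 0x80);
    n >>>= 7;
  }
  bytes.push(n);
  return bytes;
}

// A $49.99 charge: 4999 fits in two bytes on the wire (plus one tag byte),
// while the JSON pair "amount_cents":4999 costs 19 bytes of text.
encodeVarint(4999); // [135, 39]
```

Multiply that saving across every field of every message in a mesh handling millions of RPCs per second, and the serialization format stops being a detail.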
Where gRPC shines
- Service-to-service communication. Internal microservices that exchange high-frequency messages benefit from binary serialization and multiplexed streams.
- Strict contracts. The `.proto` file is the single source of truth. Breaking changes are caught at compile time, not at runtime.
- Bidirectional streaming. gRPC supports server streaming, client streaming, and bidirectional streaming. Real-time features like live transaction feeds fit naturally.
- Polyglot environments. A Go service can call a Python service through generated stubs with zero manual serialization code.
Where gRPC falls short
- Browser support. Browsers cannot make native gRPC calls. The grpc-web proxy adds a layer of complexity and latency.
- Human readability. Binary payloads are not inspectable with `curl` or browser dev tools. Debugging requires specialized tools like `grpcurl` or BloomRPC.
- Ecosystem maturity. REST has decades of tooling: Postman, Swagger, API gateways, rate limiters. gRPC tooling is growing but not at the same level.
- Learning curve. Teams must learn Protocol Buffers, proto3 syntax, code generation pipelines, and gRPC-specific error handling patterns.
Comparison table
| Criteria | REST | GraphQL | gRPC |
|---|---|---|---|
| Transport | HTTP/1.1 or HTTP/2 | HTTP/1.1 or HTTP/2 | HTTP/2 (required) |
| Serialization | JSON (text) | JSON (text) | Protocol Buffers (binary) |
| Latency (typical) | 50-200ms | 50-300ms | 10-50ms |
| HTTP caching | Native (GET + cache headers) | Requires persisted queries | Not applicable |
| Browser support | Full | Full | Via grpc-web proxy only |
| Streaming | SSE, WebSockets (separate) | Subscriptions (separate) | Built-in (4 modes) |
| Schema / contract | OpenAPI (optional) | GraphQL SDL (required) | .proto files (required) |
| Code generation | Optional (openapi-generator) | Common (graphql-codegen) | Required (protoc) |
| Learning curve | Low | Medium | High |
| Debugging | curl, browser, Postman | GraphiQL, Altair, Postman | grpcurl, BloomRPC |
| Primary use case | Public APIs, CRUD | Client-driven queries | Internal microservices |
Real-world decision examples
Stripe: REST for payments
Stripe processes billions of dollars in payments through a REST API. Their endpoints follow predictable patterns: `POST /v1/payment_intents`, `GET /v1/charges/:id`. Every developer who integrates Stripe knows HTTP. The onboarding friction is close to zero. Stripe chose REST because their audience is external developers who need stable, documented, cacheable endpoints.
GitHub: GraphQL for developer tools
GitHub added a GraphQL API (v4) alongside its REST API (v3) because their clients (desktop apps, mobile apps, third-party integrations) all needed different data from the same objects. A CI tool needs commit status and check runs. A project management app needs issues, labels, and assignees. A mobile app needs a minimal profile view. One REST endpoint could not serve all three without massive over-fetching.
Google: gRPC for internal services
Google built gRPC (the "g" stands for a different word each release) to handle internal service-to-service communication at scale. When your service mesh processes millions of RPCs per second, the difference between JSON text parsing and Protocol Buffer binary deserialization matters. Google chose gRPC because the audience is internal, the contracts are strict, and throughput is the primary constraint.
Why botoi chose REST for 150+ endpoints
botoi's API serves independent utility endpoints: DNS lookups, email validation, JSON formatting, QR code generation, hash computation. Each endpoint takes a specific input and returns a specific output. There is no relational data graph connecting a DNS record to a QR code.
Three factors made REST the clear choice:
- Universal client support. Developers call botoi from Node.js, Python, Go, Ruby, PHP, shell scripts, and AI agents. REST works in all of them with zero setup.
- Cacheability. GET endpoints for static resources (like country lookups or currency lists) benefit from HTTP caching at the CDN layer. This keeps response times under 20ms for repeat requests.
- Discoverability. Each endpoint has a stable URL, an OpenAPI spec entry, and interactive docs via Scalar. New developers find and test endpoints in under a minute.
GraphQL would add complexity without benefit. There is no query graph to traverse. gRPC would exclude browser clients and shell scripts. REST is the right tool for this shape of problem.
Mixing protocols in one system
The framework applies per boundary, not per organization. Many production systems combine protocols:
- External API layer: REST. Third-party developers and webhooks expect HTTP + JSON.
- Client-facing gateway: GraphQL. Mobile and web clients query a gateway that aggregates data from multiple services.
- Internal service mesh: gRPC. Backend services communicate with binary payloads and strict contracts.
This is not complexity for complexity's sake. Each boundary has a different audience with different constraints. The protocol should match the constraint, not the other way around.
Decision checklist
Copy this into your design doc. Answer each question, and the protocol choice becomes obvious.
- Who are the API consumers? External developers (REST), your own frontend team (GraphQL), internal services (gRPC).
- How many data shapes do clients request? One shape (REST), many shapes (GraphQL), fixed contracts (gRPC).
- Does HTTP caching matter? Yes (REST), sometimes (GraphQL with effort), no (gRPC).
- Do you need streaming? No (REST is fine), subscriptions (GraphQL), bidirectional (gRPC).
- What languages do clients use? Everything (REST), JS/TS-heavy (GraphQL tooling is strongest here), polyglot with code generation (gRPC).
- What is the team's current expertise? If nobody knows Protocol Buffers, gRPC has a steep ramp-up cost. If nobody knows GraphQL resolvers, expect a month of learning before production readiness.
If you answered "external developers" to question 1, stop here. Use REST. The other questions become relevant only when the audience is internal or when you control both client and server.
Common mistakes to avoid
- Choosing GraphQL because it feels new. GraphQL adds resolver complexity, query cost analysis, and N+1 mitigation. If your API has 10 CRUD endpoints with fixed shapes, REST does the same job with less code.
- Choosing gRPC for a public API. Your users cannot call gRPC from a browser, from curl, or from a low-code tool. You will end up building a REST gateway in front of it anyway.
- Choosing REST for a complex data graph. If your frontend team asks for 5 new "summary" endpoints per sprint because the existing ones return too much or too little data, that is a sign GraphQL would reduce coordination overhead.
- Ignoring team expertise. The fastest protocol to ship is the one your team already knows. A team fluent in REST that switches to gRPC will spend weeks on tooling before writing business logic.
Frequently asked questions
- When should I pick GraphQL over REST?
- Pick GraphQL when your clients need to request different shapes of data from the same backend. Mobile apps that must minimize payload size and dashboards that aggregate data from multiple domain objects both benefit from client-driven queries. If every client sends the same request, REST is simpler.
- Is gRPC faster than REST?
- gRPC uses HTTP/2 multiplexing and Protocol Buffers binary serialization, so it transfers smaller payloads with lower latency than JSON over HTTP/1.1. In benchmarks, gRPC typically processes 2-10x more requests per second than equivalent REST endpoints. The gap narrows when REST also runs on HTTP/2 with a compact format like MessagePack.
- Can I use gRPC in a browser?
- Not directly. Browsers do not expose the HTTP/2 framing that gRPC requires. grpc-web is a proxy layer that translates between the browser and a gRPC backend, but it adds latency and operational overhead. For browser clients, REST or GraphQL remain the practical choices.
- Why does botoi use REST instead of GraphQL?
- botoi serves 150+ independent utility endpoints, each with a single request shape and a single response shape. There is no relational data graph to traverse. REST gives every endpoint a stable, cacheable URL. Developers can test any endpoint with a single curl command and no query language to learn.
- Can I combine REST, GraphQL, and gRPC in one system?
- Yes. Many teams run gRPC between internal microservices for speed, expose a GraphQL gateway for mobile and web clients, and keep REST for public integrations and webhooks. The decision framework applies per boundary, not per organization.