
Backends-for-Frontends, Ten Years Later

Ravinder · 12 min read
Architecture · BFF · GraphQL · API Design · Microservices

In 2015, Sam Newman published a short blog post about a pattern he and his colleagues had been using at SoundCloud: a dedicated backend per frontend client. He called it Backend for Frontend — BFF. The pattern addressed a specific pain point: the mobile team kept fighting the desktop team over the shape of shared API responses, and both teams kept fighting with the services team over how fast they could get changes shipped.

Ten years later, I still reach for BFF in roughly a third of the architectures I design. Sometimes because GraphQL looks like the right answer and I need to explain why it is not for this particular situation. Sometimes because the team already made the GraphQL choice and the problems are now visible in production. And sometimes because the situation is a textbook BFF case that nobody has called by name yet.

This post is the full picture: what BFF actually solves, where GraphQL genuinely replaces it, the anti-patterns that turn BFF into a distributed monolith, and the mobile/web split decision.

The Original Problem BFF Solves

The pain point is familiar. You have a set of microservices that own your data. You have two or more frontends — a web app, an iOS app, an Android app, maybe a voice assistant or a watch extension. Each frontend has different data requirements for the same conceptual operations.

The web dashboard for an e-commerce platform needs a product listing with full descriptions, reviews, and inventory counts. The mobile app needs the same products but with image URLs optimized for mobile dimensions, shorter descriptions, and no reviews (they are below the fold). The iOS app also needs to know if Apple Pay is available for each product. Android needs a different payment flag.

With a shared API, you have three options, all bad:

  1. Return everything and let clients filter. Wastes bandwidth, especially on mobile. The response shape becomes a superset of everyone's needs, which is owned by nobody.
  2. Add frontend-specific query parameters to shared services. The services grow frontend-specific logic. Now every service is partially a frontend concern.
  3. Create a shared API layer that aggregates. This API layer slowly accumulates knowledge of every frontend's needs and becomes, in effect, a fourth frontend: one more surface whose shape every team fights over.

BFF is option 4: a thin service, owned by the frontend team, that aggregates and transforms upstream service responses into exactly what that frontend needs.

```mermaid
flowchart TD
    WebApp["Web App"] --> WebBFF["Web BFF"]
    iOSApp["iOS App"] --> MobileBFF["Mobile BFF"]
    AndroidApp["Android App"] --> MobileBFF
    WebBFF --> ProductSvc["Product Service"]
    WebBFF --> ReviewSvc["Review Service"]
    WebBFF --> InventorySvc["Inventory Service"]
    MobileBFF --> ProductSvc
    MobileBFF --> InventorySvc
    MobileBFF --> PaymentSvc["Payment Service"]
    style WebBFF fill:#4f46e5,color:#fff
    style MobileBFF fill:#059669,color:#fff
```

What BFF Is Not

BFF is not a general-purpose API gateway. It does not handle authentication at the perimeter, rate limiting across all traffic, or load balancing to upstream services. Those are infrastructure concerns that belong in the actual API gateway or service mesh.

BFF is not a microservice with its own database. The moment your BFF starts persisting data, it has become a service. That is fine — but name it honestly and govern it as a service, not as a thin aggregation layer.

BFF is not a replacement for service-level APIs. Upstream services should still expose well-designed, stable APIs. The BFF consumes them. If the BFF is calling internal service methods directly (RPC calls into private methods, not published APIs), you have a coupling problem, not a BFF.

GraphQL as a BFF Replacement

GraphQL solves the same data-fetching problem that BFF solves, using a different mechanism. Instead of separate backend services shaped for each client, GraphQL provides a single endpoint where clients declare their own data requirements.

# Mobile client asks for exactly what it needs
query MobileProductListing($ids: [ID!]!) {
  products(ids: $ids) {
    id
    name
    mobileImageUrl
    shortDescription
    applePay
  }
}
 
# Web client asks for its own shape
query WebProductListing($ids: [ID!]!) {
  products(ids: $ids) {
    id
    name
    fullDescription
    reviews(limit: 5) { author rating text }
    inventoryCount
  }
}

GraphQL genuinely wins in several scenarios:

Rapidly evolving product with a single, controlled client team. If the same team owns the server and all clients, GraphQL's schema evolution story is excellent. Deprecated fields stay in the schema, new fields appear without versioning ceremonies, and the introspection tooling (GraphiQL, Rover) makes the API self-documenting.

Many diverse clients with unpredictable data needs. A platform that serves a partner ecosystem — where you cannot anticipate every client's data requirements — benefits from GraphQL's flexibility. You expose a schema; clients compose their own queries.

Read-heavy APIs where field selection reduces backend load. GraphQL resolvers can be lazy. If a client does not request reviews, the review resolver never runs. At scale this is a meaningful efficiency.

When BFF Still Beats GraphQL

GraphQL has a set of problems that are genuinely hard and that BFF avoids.

N+1 query problem. Without DataLoader-style batching, a GraphQL resolver that fetches a list of products and then fetches reviews for each product makes O(n) review requests for n products. Implementing DataLoader correctly is non-trivial and requires every resolver author to think about batching. BFF developers write explicit aggregation code; the N+1 problem is harder to accidentally create.
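To make the contrast concrete, here is a minimal sketch of DataLoader-style batching, hand-rolled rather than using the real `dataloader` package, so the mechanism is visible: `load()` calls made in the same tick are collected and resolved with a single batch request. All names are illustrative.

```typescript
// Minimal DataLoader-style batcher (a sketch, not the `dataloader` package).
// load() calls within one tick are queued; a microtask flushes them as ONE
// batch request to the upstream service.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void; reject: (e: unknown) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise<V>((resolve, reject) => {
      this.queue.push({ key, resolve, reject });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush after the current tick, once every resolver has enqueued its key
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush() {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    try {
      const values = await this.batchFn(batch.map((item) => item.key));
      batch.forEach((item, i) => item.resolve(values[i]));
    } catch (err) {
      batch.forEach((item) => item.reject(err));
    }
  }
}

// Three loads in the same tick produce one upstream call, not three
let batchCalls = 0;
const reviewLoader = new TinyLoader<string, string[]>(async (productIds) => {
  batchCalls++;
  return productIds.map((id) => [`review for ${id}`]);
});

const results = await Promise.all([
  reviewLoader.load("p1"),
  reviewLoader.load("p2"),
  reviewLoader.load("p3"),
]);
```

The point is not the thirty lines of code; it is that every GraphQL resolver author has to remember to route reads through a loader like this, while a BFF author writes the batch call directly and cannot forget.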

Authorization at field level is complex. GraphQL field-level authorization (this user can see inventoryCount but not wholesalePrice) requires either schema directives or resolver-level checks on every sensitive field. A BFF handles this naturally: the BFF knows who the caller is and simply does not include fields the caller is not entitled to.
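A sketch of what that looks like in a BFF, with hypothetical field and role names: the sensitive field simply never enters the response for callers who are not entitled to it.

```typescript
// Sketch: field-level authorization as plain response shaping in the BFF.
// Field and role names are illustrative, not from a real service.
interface UpstreamProduct {
  id: string;
  name: string;
  inventoryCount: number;
  wholesalePrice: number;
}

type Role = "retail" | "wholesale";

function shapeProduct(product: UpstreamProduct, role: Role) {
  const base = {
    id: product.id,
    name: product.name,
    inventoryCount: product.inventoryCount,
  };
  // Wholesale pricing never appears in a retail caller's response
  return role === "wholesale"
    ? { ...base, wholesalePrice: product.wholesalePrice }
    : base;
}

const product: UpstreamProduct = { id: "p1", name: "Desk", inventoryCount: 4, wholesalePrice: 120 };
const retailView = shapeProduct(product, "retail");
const wholesaleView = shapeProduct(product, "wholesale");
```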

Mutations are messy with complex side effects. GraphQL mutations that trigger multi-step workflows, need orchestration across multiple services, or need to maintain transactional behavior do not map cleanly to the mutation resolver model. BFF can implement explicit workflow logic as a regular function call.

You need response caching at the HTTP layer. GET requests are cacheable. GraphQL mutations are POST. GraphQL queries are also typically POST (for request body support), which means you cannot cache them at the CDN or reverse proxy layer without custom extensions. BFF exposes regular REST or RPC endpoints that are cacheable with standard HTTP.
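As a sketch of what that buys you: because BFF endpoints are plain GETs, standard caching headers just work. Here is a small helper that derives an ETag from the response body and pairs it with a CDN-friendly Cache-Control; the TTL values are illustrative.

```typescript
import { createHash } from "node:crypto";

// Sketch: standard HTTP caching for a BFF GET response. A CDN or reverse
// proxy can cache on Cache-Control and revalidate on ETag, with no custom
// GraphQL-aware machinery in between. TTLs here are illustrative.
function cacheHeaders(body: unknown, maxAgeSeconds: number) {
  const hash = createHash("sha256").update(JSON.stringify(body)).digest("hex");
  return {
    ETag: `"${hash.slice(0, 16)}"`,
    "Cache-Control": `public, max-age=${maxAgeSeconds}, stale-while-revalidate=30`,
  };
}

const headers = cacheHeaders({ id: "p1", inStock: true }, 60);
```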

The team does not have GraphQL expertise. GraphQL's apparent simplicity is deceptive. Running a performant, secure GraphQL API in production requires deep expertise in schema design, resolver optimization, persisted queries, depth limiting, and complexity analysis. If the team does not have that expertise, shipping a GraphQL API is adding a new class of operational risk.

Mobile vs Web Split

The most common BFF topology is one BFF per client platform: one for web, one for mobile. Sometimes one for iOS and one for Android if the platforms have meaningfully different needs (which Apple Pay vs Google Pay often makes true).

The rule I use: split BFFs when the data shapes or the release cadence diverge.

Mobile apps have an additional constraint that web apps do not: deployed versions are long-lived. An iOS app version submitted today may still be in use twelve months from now. The mobile BFF must maintain backward compatibility for deployed app versions. The web BFF can break freely because web deployments are atomic.
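A minimal sketch of what "additive only" means in practice, with hypothetical field names: when a field is reshaped, the mobile BFF keeps emitting the old field alongside the new one until the old app versions have drained off the adoption curve.

```typescript
// Sketch: an additive change in the mobile BFF. Suppose the stock flag is
// being replaced by a richer availability field. Old app versions still read
// `inStock`, so both fields ship until v1.x adoption allows a sunset.
// Field names are illustrative.
interface InventoryUpstream {
  count: number;
}

function mobileProductResponse(inventory: InventoryUpstream) {
  const available = inventory.count > 0;
  return {
    inStock: available,                                     // legacy: read by v1.x apps
    availability: available ? "in_stock" : "out_of_stock",  // new: read by v3.x apps
  };
}

const resp = mobileProductResponse({ count: 2 });
```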

This creates two genuinely different contracts:

```mermaid
flowchart LR
    subgraph Mobile["Mobile BFF Constraints"]
        direction TB
        M1["Must support v1.x through v3.x app"]
        M2["Additive changes only between versions"]
        M3["Sunset tied to App Store adoption curves"]
    end
    subgraph Web["Web BFF Constraints"]
        direction TB
        W1["Single current version always"]
        W2["Breaking changes on deploy"]
        W3["No backward compat required"]
    end
```

If you share a BFF between mobile and web, you inherit mobile's versioning constraints everywhere. That is almost always the wrong tradeoff.

BFF Anti-Patterns

The BFF that knows too much. The BFF starts hosting business logic: pricing calculations, discount eligibility, recommendation ranking. These are service concerns. Once a BFF owns business logic, you have created a service that happens to be called a BFF. Every other frontend now needs to duplicate or import that logic. The BFF should be thin: aggregate, transform, and return. Business logic belongs upstream.

One BFF for everything. Some teams create a single BFF and route all clients through it. This defeats the purpose. You now have a single service with the union of every client's concerns. The benefit of BFF — that the frontend team owns the full stack for their surface — evaporates when every team is touching the same file.

BFF with a database. The BFF should be stateless and thin. When a BFF starts materializing views in a local database, it has become a read model service. That may be correct, but it brings ownership, migration, and reliability concerns that should be handled explicitly, not smuggled in through the BFF back door.

BFF as an API gateway. Auth, rate limiting, routing, TLS termination — these do not belong in the BFF. They belong in the actual gateway layer. A BFF that does gateway work is doubling the surface area that needs to be secured and scaled.

Ownership Model

The BFF pattern's real value is not technical. It is organizational. The pattern works because it aligns ownership:

Frontend team owns:
  - The client application
  - The BFF that serves it
  - The contract between client and BFF
 
Services team owns:
  - Upstream service APIs
  - Data models and business logic
 
The BFF boundary is the negotiation point.

When a frontend team needs a new field, they add it to their BFF first. If the upstream service already provides it, the BFF surfaces it. If not, the frontend team writes a ticket against the services team with a clear, well-scoped requirement: "we need field X from service Y." The services team does not need to understand every frontend's rendering requirements. The frontend team does not need to wait for a shared API team to prioritize their needs.

This organizational clarity is why BFF survives even in environments that have adopted GraphQL. A GraphQL federation layer is still, effectively, a BFF. It just speaks GraphQL instead of REST.

Implementing a BFF: What the Code Actually Looks Like

A BFF is typically a small Node.js, Go, or Python service. It receives requests from the client, calls upstream services in parallel where possible, assembles a response, and returns it. The distinguishing characteristic is that it is shaped entirely by the client's needs — not by what upstream services happen to expose.

Here is a product listing endpoint for the web BFF. It calls three services concurrently and merges their responses:

import { FastifyInstance } from "fastify";
import { ProductService } from "./services/product";
import { ReviewService } from "./services/review";
import { InventoryService } from "./services/inventory";
 
export async function registerProductRoutes(app: FastifyInstance) {
  app.get<{ Params: { id: string } }>("/products/:id", async (request, reply) => {
    const { id } = request.params;
 
    // Fan out to upstream services in parallel
    const [product, reviews, inventory] = await Promise.all([
      ProductService.getById(id),
      ReviewService.getForProduct(id, { limit: 5 }),
      InventoryService.getCount(id),
    ]);
 
    if (!product) {
      return reply.status(404).send({ error: "Product not found" });
    }
 
    // Shape the response to exactly what the web UI needs
    return reply.send({
      id: product.id,
      name: product.name,
      fullDescription: product.description,
      imageUrl: product.images.desktop,
      inventoryCount: inventory.count,
      inStock: inventory.count > 0,
      reviews: reviews.map((r) => ({
        author: r.authorName,
        rating: r.starRating,
        text: r.body,
        createdAt: r.createdAt,
      })),
    });
  });
}

The mobile BFF endpoint for the same product looks different:

app.get<{ Params: { id: string } }>("/products/:id", async (request, reply) => {
  const { id } = request.params;
  const platform = request.headers["x-platform"] as string; // "ios" | "android"
 
  const [product, inventory, payment] = await Promise.all([
    ProductService.getById(id),
    InventoryService.getCount(id),
    PaymentService.getAvailableMethods(platform),
  ]);
 
  // Mirror the web BFF: 404 before touching product fields
  if (!product) {
    return reply.status(404).send({ error: "Product not found" });
  }
 
  return reply.send({
    id: product.id,
    name: product.name,
    shortDescription: product.shortDescription,      // shorter field
    imageUrl: product.images.mobile,                 // mobile-sized image
    inStock: inventory.count > 0,                    // boolean, not count
    applePay: payment.methods.includes("apple_pay"), // iOS-specific
    googlePay: payment.methods.includes("google_pay"),
  });
});

Note that the mobile BFF does not call ReviewService at all — saving that network round trip and the associated latency for a field the mobile UI does not render.
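One detail the handlers above gloss over: the fan-out should degrade gracefully when an optional upstream fails. A sketch using Promise.allSettled, with stubbed-out service calls, treating the product as required and reviews as optional:

```typescript
// Sketch: graceful degradation in the BFF fan-out. The product call is
// required; reviews are optional, so a ReviewService outage degrades the
// page instead of failing it. The service stubs are illustrative.
type Review = { author: string; rating: number };

async function getProductPage(
  fetchProduct: () => Promise<{ id: string; name: string }>,
  fetchReviews: () => Promise<Review[]>,
) {
  const [productResult, reviewsResult] = await Promise.allSettled([
    fetchProduct(),
    fetchReviews(),
  ]);

  // Required upstream: propagate the failure
  if (productResult.status === "rejected") throw productResult.reason;

  return {
    ...productResult.value,
    // Optional upstream: fall back to an empty list; the UI hides the section
    reviews: reviewsResult.status === "fulfilled" ? reviewsResult.value : [],
    reviewsDegraded: reviewsResult.status === "rejected",
  };
}

// Simulate a review service timeout: the page still renders
const page = await getProductPage(
  async () => ({ id: "p1", name: "Desk" }),
  async () => { throw new Error("review service timeout"); },
);
```

Which upstreams are required versus optional is exactly the kind of client-specific policy that belongs in the BFF rather than in a shared layer.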

Testing a BFF

Because a BFF is a thin aggregation layer, its tests have a different shape than service tests.

Unit tests cover transformation logic: the mapping from upstream service response shapes to the client response shape. These are pure functions and should be fast.
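For example, pulling the review mapping out of the web BFF handler into a pure function makes it trivially testable without any network; the upstream shape follows the earlier handler, and the test harness itself is omitted.

```typescript
// Sketch: the transformation step as a pure function. Shapes mirror the
// web BFF handler shown earlier; no upstream client is needed to test it.
interface UpstreamReview {
  authorName: string;
  starRating: number;
  body: string;
}

function toClientReview(r: UpstreamReview) {
  return { author: r.authorName, rating: r.starRating, text: r.body };
}

const mapped = toClientReview({ authorName: "ana", starRating: 4, body: "solid desk" });
```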

Integration tests mock the upstream service clients and verify that the BFF correctly fans out, handles partial failures (one service down), and applies the correct transformation. Use contract testing (Pact or similar) to ensure the upstream mock responses stay in sync with what the real services actually return.

End-to-end tests run against a full stack in a staging environment and verify actual response shapes from the perspective of the client. These are your regression safety net after upstream service changes.

The BFF's observability story is important. Instrument every upstream service call with latency histograms, error rates, and timeout counts. The BFF is your window into upstream service health from the perspective of the frontend team. A spike in review_service_duration_ms in the web BFF metrics is often the first place that latency degradation is visible.
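A sketch of that instrumentation as a timing wrapper, with an in-memory recorder standing in for a real Prometheus client histogram so the example stays self-contained:

```typescript
// Sketch: time every upstream call under its own metric name. In a real BFF
// the recorded duration would feed a Prometheus histogram; here we keep the
// samples in memory. Metric names match those used elsewhere in this post.
const durations = new Map<string, number[]>();

async function timed<T>(metric: string, call: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await call();
  } finally {
    const ms = performance.now() - start;
    const samples = durations.get(metric) ?? [];
    samples.push(ms);
    durations.set(metric, samples);
  }
}

// Every fan-out call goes through the wrapper (upstream calls stubbed here)
const product = await timed("product_service_duration_ms", async () => ({ id: "p1" }));
const reviews = await timed("review_service_duration_ms", async () => [] as string[]);
```

The `finally` block matters: failed upstream calls are timed too, which is exactly when the frontend team needs the data.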

```mermaid
flowchart LR
    BFF["Web BFF"] -->|"product_service_duration_ms"| Metrics["Prometheus"]
    BFF -->|"review_service_duration_ms"| Metrics
    BFF -->|"inventory_service_duration_ms"| Metrics
    Metrics --> Dashboard["Grafana Dashboard\n(owned by frontend team)"]
    style BFF fill:#4f46e5,color:#fff
    style Dashboard fill:#059669,color:#fff
```

This is the operational benefit that is often undersold: when the BFF team owns the observability of upstream service calls, frontend engineers can debug latency issues and service degradation without filing tickets against the services team.

Key Takeaways

  • BFF remains relevant a decade after its naming because it solves an organizational problem — frontend team autonomy — that GraphQL federation and API gateways partially but not fully address.
  • GraphQL wins when clients have unpredictable, diverse, or rapidly evolving data needs and the team has the expertise to run it correctly.
  • BFF wins when authorization is field-level, when HTTP caching is required, when mutation flows have complex side effects, or when the team lacks GraphQL expertise.
  • Split BFFs along client platform lines, not feature lines — mobile's versioning constraints differ fundamentally from web's.
  • The BFF anti-patterns (logic accumulation, shared single BFF, stateful BFF, BFF-as-gateway) all stem from the same cause: scope creep away from the core job of aggregation and transformation.
  • The real value of BFF is alignment of ownership: the frontend team controls their full stack end-to-end, making them fast without making the services team absorb frontend concerns.