The Cost of a Monorepo, Honestly
The monorepo discourse is exhausting because both sides argue from their own context and assume it generalizes. The Google and Meta engineers say monorepo is obviously correct — and for their scale, with their custom tooling, it genuinely is. The startup engineer who just struggled through a Bazel migration for 18 months says monorepo is obviously wrong — and for their team of 12, it probably was. The honest answer is that monorepos solve a specific category of coordination problem, create a specific category of tooling problem, and whether the tradeoff is worth it depends on factors most blog posts don't address directly.
I've worked in both models at meaningful scale. This post is a practitioner's accounting of the real costs — not the theoretical ones — with opinions on when the math actually works out.
What Problem Monorepos Actually Solve
Let's be precise. Monorepos don't make your code better. They don't make your tests faster. They solve two specific coordination problems:
Atomic cross-package changes. When library auth-utils changes its interface, every service that depends on it needs to update in the same commit. In a polyrepo, this requires coordinating PRs across multiple repositories — either a breaking change that breaks downstream repos for hours, or a phased rollout that requires maintaining backward compatibility. In a monorepo, you change the interface and update every call site in one commit. The change is always consistent.
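As a sketch of what "atomic" means in practice, here is an interface change and its call site updated together — in a monorepo both live in the same commit, so no intermediate state exists where they disagree. The function and field names (verifyToken, VerifyResult, ownerOf) are hypothetical, and the two files are collapsed into one block for brevity:

```typescript
// packages/auth-utils/index.ts — the interface change:
// verifyToken used to return a bare userId string; it now returns
// a structured result.
export interface VerifyResult {
  userId: string;
  expiresAt: number; // epoch millis
}

export function verifyToken(token: string): VerifyResult {
  // A real implementation would validate a signature; stubbed here.
  return { userId: token.split(":")[0], expiresAt: Date.now() + 3600_000 };
}

// apps/billing/src/charge.ts — the call site updates in the SAME commit:
export function ownerOf(token: string): string {
  const result = verifyToken(token); // was: const userId = verifyToken(token)
  return result.userId;
}
```

In a polyrepo, the same change is two PRs in two repositories, and every deploy between them runs against a mismatched interface.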
Unified dependency versioning. In a polyrepo, Service A may depend on lodash@4.17.15 and Service B on lodash@4.17.21. As packages multiply, the version matrix becomes unmanageable and security patching becomes a manual coordination exercise across dozens of repos. A monorepo with a single root package.json (or a single pom.xml BOM) has one version per package, enforced universally.
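One common pattern for enforcing this is hoisting shared dependencies to the root manifest so every workspace resolves the same copy — a sketch assuming npm or yarn workspaces, with hypothetical package names (teams often add a tool like syncpack to catch drift in per-package manifests):

```jsonc
// package.json at the repo root
{
  "name": "acme-monorepo",
  "private": true,
  "workspaces": ["apps/*", "packages/*"],
  "devDependencies": {
    "lodash": "4.17.21",
    "typescript": "5.4.5"
  }
}
```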
If your polyrepo doesn't have these problems — because your services are genuinely independent and rarely share code — the monorepo's benefits are largely theoretical for you.
The Real Tooling Cost
Here is what the monorepo advocates under-sell: the tooling investment is substantial, ongoing, and requires dedicated ownership. This is not a one-time migration cost. It's a recurring engineering tax.
Build Orchestration
In a polyrepo, you run npm test and it runs your tests. In a monorepo with 40 packages, you need an orchestrator that understands the dependency graph among packages, runs only the affected packages when something changes, and caches build outputs so you're not rebuilding the world on every PR.
Turborepo is the current standard for JavaScript/TypeScript monorepos:
```jsonc
// turbo.json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": ["coverage/**"],
      "cache": true
    },
    "lint": {
      "outputs": []
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
```

```shell
# Only build and test packages affected by changes since the previous commit
npx turbo run test --filter=[HEAD^1]
```

Nx takes a similar approach with a richer plugin ecosystem:
```jsonc
// nx.json
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"],
      "cache": true
    },
    "test": {
      "cache": true
    }
  },
  "affected": {
    "defaultBase": "main"
  }
}
```

These tools work well when they work. When they don't — incorrect dependency graphs, cache invalidation bugs, executor misconfigurations — debugging them requires deep knowledge of the tool internals. That knowledge lives in 1–2 engineers on the team. When those engineers leave, the monorepo becomes opaque infrastructure that everyone is afraid to touch.
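What these tools compute is simple in principle: "affected" is a reverse-dependency traversal over the package graph. A minimal sketch of that core idea — real orchestrators also hash file contents and task inputs to drive caching, which is where most of the debugging pain lives:

```typescript
// Given each package's declared dependencies, find every package that must
// be rebuilt when a set of packages changes: the changed packages plus
// everything that depends on them, transitively.
type Graph = Record<string, string[]>; // package -> its dependencies

function affected(graph: Graph, changed: string[]): Set<string> {
  // Invert the graph: package -> packages that depend on it.
  const dependents: Record<string, string[]> = {};
  for (const [pkg, deps] of Object.entries(graph)) {
    for (const dep of deps) (dependents[dep] ??= []).push(pkg);
  }
  // Breadth-first search outward from the changed packages.
  const result = new Set(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const pkg = queue.shift()!;
    for (const dependent of dependents[pkg] ?? []) {
      if (!result.has(dependent)) {
        result.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return result;
}

// Hypothetical graph: changing auth-utils affects the two services that
// import it, but leaves service-c untouched.
const graph: Graph = {
  "auth-utils": [],
  "shared-types": [],
  "service-a": ["auth-utils", "shared-types"],
  "service-b": ["auth-utils"],
  "service-c": ["shared-types"],
};
```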
The Bazel Tax
At very large scale, Turborepo and Nx hit limits. Google, Meta, and Shopify use Bazel (or Buck, or Pants) because they need hermetic, reproducible builds with remote execution. Bazel is genuinely powerful and genuinely painful:
```python
# BUILD file — Bazel
java_library(
    name = "payments",
    srcs = glob(["src/main/java/com/example/payments/**/*.java"]),
    deps = [
        "//packages/money:money_lib",
        "//packages/validation:validation_lib",
        "@maven//:com_google_guava_guava",
    ],
    visibility = ["//visibility:public"],
)

java_test(
    name = "payments_test",
    srcs = glob(["src/test/java/com/example/payments/**/*.java"]),
    deps = [
        ":payments",
        "@maven//:junit_junit",
        "@maven//:org_mockito_mockito_core",
    ],
)
```

Bazel's learning curve is steep, its error messages are cryptic, and migrating an existing Maven or Gradle project to Bazel typically takes 6–18 months for a mid-size codebase. Unless you're at a scale where CI costs and build times are genuinely limiting factors — typically 200+ engineers — the Bazel investment doesn't pay off.
Code Search and IDE Performance
A monorepo with 500,000 lines of code renders most IDE features slow. Go-to-definition takes seconds. File search times out. TypeScript's language server runs out of memory and silently stops providing completions.
This is not a theoretical concern. In practice, teams solve it by configuring their IDE to load only a subset of the monorepo:
```jsonc
// .vscode/settings.json — workspace-level settings
{
  "files.watcherExclude": {
    "**/node_modules/**": true,
    "**/dist/**": true,
    "apps/service-b/**": true, // exclude services I'm not working on
    "apps/service-c/**": true
  },
  "typescript.preferences.includePackageJsonAutoImports": "off",
  "typescript.tsserver.maxTsServerMemory": 8192
}
```

For TypeScript specifically, the references feature in tsconfig lets the language server understand the package graph without loading every file:
```jsonc
// apps/service-a/tsconfig.json
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "outDir": "./dist"
  },
  "references": [
    { "path": "../../packages/auth-utils" },
    { "path": "../../packages/shared-types" }
  ]
}
```

Even with these optimizations, large monorepo IDE performance is noticeably worse than a focused polyrepo. Engineers who work in one service all day feel this as friction. Engineers who routinely make cross-cutting changes feel it less — they get the benefit of the atomic commit while paying the IDE cost.
Code Ownership at Scale
In a polyrepo, ownership is implicit: the team that owns the repository owns the code. Simple. In a monorepo, ownership is a tool problem. You need CODEOWNERS (or its equivalent) that maps paths to teams, and you need discipline about maintaining it as the org evolves.
```shell
# CODEOWNERS
/apps/payments/       @payments-team
/apps/billing/        @billing-team
/packages/auth-utils/ @platform-team
/packages/shared-types/ @platform-team
/packages/ui/         @design-systems-team
/infra/               @devops-team
```

The failure mode: CODEOWNERS falls out of sync with reality as teams reorganize. A team that disbanded 6 months ago is still listed as the owner of 3 packages. PRs route to the wrong reviewers. Nobody owns the seams between packages. A monorepo without maintained ownership is more chaotic than a polyrepo, because the scope of "everything" is enormous.
Audit your CODEOWNERS quarterly. Treat unmaintained ownership as a P2 issue.
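Part of that audit can be automated. A sketch that flags CODEOWNERS lines whose owner is no longer an active team — the team list is a hypothetical input (in practice you'd pull it from your org directory or the GitHub API), and the parsing here is simplified relative to full CODEOWNERS syntax:

```typescript
// Cross-check CODEOWNERS entries against a set of active team handles.
// Returns the lines whose owner no longer exists — candidates for reassignment.
function staleOwners(codeowners: string, activeTeams: Set<string>): string[] {
  return codeowners
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line !== "" && !line.startsWith("#")) // skip blanks/comments
    .filter((line) => {
      const owners = line.split(/\s+/).slice(1); // path, then one or more @owners
      return owners.some((owner) => !activeTeams.has(owner));
    });
}

// Hypothetical example: growth-team disbanded but still owns a path.
const file = `
# CODEOWNERS
/apps/payments/ @payments-team
/packages/auth-utils/ @platform-team
/apps/legacy/ @growth-team
`;
const teams = new Set(["@payments-team", "@platform-team"]);
```

Running staleOwners(file, teams) on this input flags only the /apps/legacy/ line.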
When a Polyrepo Is Still Right
Given all of the above, there are scenarios where polyrepo is the correct answer and switching to monorepo would be a net negative:
Genuinely independent services. If Service A and Service B share no code, have no shared dependencies, and are deployed by different teams with no coordination — there is nothing the monorepo gives you. You'd pay all the tooling costs and get none of the atomic commit benefits.
Small teams. Under 20 engineers, the coordination problems a monorepo solves are manageable with discipline. The tooling cost of setting up and maintaining Turborepo or Nx (let alone Bazel) is proportionally enormous for a small team.
Different technology stacks. A monorepo with a Go backend and a TypeScript frontend works, but you lose the economies of scale on build tooling — each ecosystem needs its own orchestration, and your CI config becomes a layered mess.
Compliance and access control requirements. Some organizations need strict access controls at the repository level — not every engineer should be able to read source code for every service. Monorepos make per-file access control complex and fragile. If this is a hard requirement, polyrepo wins.
When Monorepo Is Worth It
Monorepo pays off when:
- Multiple teams routinely need to make atomic changes across package boundaries
- You have shared libraries that all services depend on and that change frequently
- You have dedicated platform engineering capacity to maintain the build tooling (at minimum one engineer who owns it as a primary responsibility)
- Your services share a primary technology stack, so the build tooling has meaningful economies of scale
If all four conditions are true, monorepo is almost certainly the right call. If fewer than three are true, evaluate carefully.
The Migration Decision
If you're considering migrating from polyrepo to monorepo, the decision framework is the four conditions in the previous section: count how many actually hold for your organization today, not how many you hope will hold after the migration.
The migration itself deserves its own post. The short version: do it package-by-package, not all-at-once. Start with your shared library packages, get the build tooling right there, then add services one at a time. An all-at-once migration creates a 6-month window where everything is broken and nobody can ship.
Key Takeaways
- Monorepos solve atomic cross-package changes and unified dependency versioning — if your polyrepo doesn't have those problems, the monorepo's benefits are mostly theoretical for your context.
- The tooling cost is real and ongoing: build orchestration, incremental execution, cache management, and IDE performance all require dedicated engineering attention that doesn't go away after the initial migration.
- Turborepo and Nx are practical for JavaScript/TypeScript monorepos at startup-to-mid-scale; Bazel is justified only when CI costs and build times are actively limiting at 200+ engineer scale.
- Code ownership in a monorepo requires explicit tooling and quarterly audits — the implicit ownership model of polyrepo does not transfer, and an unmaintained CODEOWNERS file creates more chaos than no ownership file at all.
- Polyrepo is still the right answer for genuinely independent services, small teams, mixed technology stacks, and organizations with hard repository-level access control requirements.
- If you decide to migrate, go package-by-package: establish the build tooling on shared libraries first and add services incrementally over months, not weeks.