I’ve been working on a handful of personal projects over the past couple of years, and as the number of projects grew, I found myself increasingly frustrated with managing them as separate repositories. Switching between repos, duplicating configuration, and maintaining consistency across projects was getting tedious. Coming from my time at Google, where everything lives in a single monorepo and the tooling is built around that assumption, I had a strong intuition that consolidating my projects into a single workspace would pay dividends - even at the scale of a single developer.

This post describes how I’ve set up my personal monorepo, and the rationale behind each decision.

The Problem with Separate Repos

When each project lives in its own repository, a few things start to go wrong once you have more than a couple of them:

  • Configuration drift. You set up your editor settings, your CI pipelines, your linter config, and your AI agent instructions in one project. Then you do it again in the next project. And again. Before long, each project has a subtly different setup, and you’re spending time keeping them in sync rather than building things.
  • Context switching overhead. Each repo has its own shell, its own git state, its own working tree. Jumping between them means re-orienting yourself every time.
  • Shared tooling is hard. If you have a common set of build rules, agent skills, or dev container configurations, there’s no clean way to share them across separate repos without publishing packages or maintaining submodules in each one.

The monorepo solves all of these problems by putting everything under one roof, with shared configuration at the top level.

Google’s repo Tool: The Glue

Rather than using a single massive Git repository (which has its own set of problems at smaller scales - GitHub doesn’t love enormous repos), I opted for Google’s repo tool - the same tool used to manage Android’s source tree. The repo tool sits on top of Git and manages a collection of Git repositories as a single workspace. Each project is still its own Git repo with its own history, branches, and remotes; but repo coordinates checking them all out into a single directory tree, and keeps them in sync.

The setup is driven by a manifest file that lives in its own Git repository. Running three commands gets you the entire workspace:

repo init -u https://github.com/bsubrama/repo-manifest.git
repo sync -j4 --fail-fast --fetch-submodules
repo forall -c "git checkout main"

That last command is important: repo sync leaves each project in a detached HEAD state, and git checkout main reattaches to the main branch so that you can actually commit and push.
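The detached-HEAD behavior is easy to see with plain git, no repo tool required. Here's a throwaway demonstration (the `demo` repository is created just for illustration):

```shell
# Simulate the state `repo sync` leaves behind, then reattach.
git init -q -b main demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git checkout -q --detach HEAD     # detached, like after `repo sync`
git rev-parse --abbrev-ref HEAD   # prints "HEAD": no branch attached
git checkout -q main              # reattach, like the forall command
git rev-parse --abbrev-ref HEAD   # prints "main": commits land on a branch
```

Committing while detached isn't lost work, exactly, but the commits aren't on any branch, which is why reattaching to main right after a sync is worth the extra command.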

The manifest tracks all of the projects, including the top-level configuration directories (.claude/, .agents/, .vscode/, etc.), and sets up symlinks wherever needed, so the entire workspace is reproducible from a single repo sync.
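For a sense of what that manifest looks like, here is a condensed sketch in the repo manifest XML format. The remote, project names, and paths are illustrative, not my actual manifest; the `linkfile` elements are what set up the symlinks on sync:

```xml
<!-- default.xml: sketch of a repo manifest (names and paths illustrative) -->
<manifest>
  <remote name="github" fetch="https://github.com/bsubrama/" />
  <default remote="github" revision="main" sync-j="4" />

  <!-- shared configuration, with symlinks surfaced at the workspace root -->
  <project name="workspace-config" path="config">
    <linkfile src="vscode" dest=".vscode" />
    <linkfile src="agents" dest=".agents" />
  </project>

  <!-- individual projects -->
  <project name="blog" path="blog" />
</manifest>
```

Each `<project>` maps a remote Git repository to a path in the workspace, and `<linkfile>` creates a symlink from a file or directory inside that project to a destination relative to the workspace root.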

What Lives in the Workspace

The workspace currently houses five projects, each at a different stage of maturity. One of them is this blog: a Hugo-powered static site deployed to Firebase. It's the simplest project in the workspace, and deliberately so - it's just content, a theme, and a deployment pipeline. I wrote about the setup in a previous post.


Shared Infrastructure at the Top Level

The top-level directories contain all the configuration that’s shared across projects:

.agents/ - AI Agent Skills

This is one of the more interesting pieces of the setup. I’ve collected agent skills from three sources - Anthropic’s skill library, Obra AI’s “superpowers” collection, and a set of GitHub Copilot-oriented skills - and symlinked them into a single directory. The repo tool’s manifest handles the symlink setup, so every project in the workspace has access to the same set of skills.

The skills cover a wide range: from systematic debugging and brainstorming workflows, to git commit conventions, GitHub CLI references, and frontend design patterns. Having these available across all projects means that regardless of which project I’m working in, the AI agent has the same set of capabilities and follows the same conventions.
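Mechanically, the merged directory is just symlinks. Here's a hand-rolled sketch of the structure (the `third_party` paths and skill filenames are hypothetical; in practice the manifest creates these links on `repo sync` rather than by hand):

```shell
# Merge several skill collections into one .agents/skills directory.
mkdir -p .agents/skills third_party/anthropic-skills third_party/superpowers
touch third_party/anthropic-skills/systematic-debugging.md
touch third_party/superpowers/brainstorming.md

# Link every skill file into the shared directory; links are relative
# to .agents/skills/, so ../../ points back at the workspace root.
for src in third_party/anthropic-skills third_party/superpowers; do
  for f in "$src"/*.md; do
    ln -sfn "../../$f" ".agents/skills/$(basename "$f")"
  done
done

ls .agents/skills   # skills from all collections appear in one place
```

Because the links are relative, the whole workspace can be moved or re-synced on another machine without breaking them.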

.claude/ and .gemini/ - AI Configuration

Both Claude Code and Gemini have their own configuration directories, each pointing to the shared agent skills. This means I can use either tool interchangeably across the workspace without losing access to any skills or conventions.

.devcontainer/ and .vscode/

Standard dev container and editor configuration, shared across the workspace. Nothing groundbreaking here, but it’s nice to have a single source of truth for editor settings rather than duplicating them in each project.

Consistency Through Convention

One of the things I’m most pleased with is the consistency across projects. Every project follows the same documentation structure:

Document                     Purpose
CLAUDE.md                    AI agent guidance - what to know, how to build, what conventions to follow
CONSTITUTION.md              Non-negotiable engineering principles and tech stack decisions
ARCHITECTURE.md              System design and technical decisions
DEVELOPMENT.md               Developer setup and build commands
SPEC.md / REQUIREMENTS.md    Product requirements and user stories

The CLAUDE.md files encode everything an AI agent needs to know to work effectively in the project: the build system, the testing conventions, the code organization patterns, the deployment process. It’s a significant upfront investment, but it pays off every time I start a new session and the agent immediately knows how things work.
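As a flavor of what goes in one, here's a heavily condensed CLAUDE.md skeleton. This is a sketch, not my actual file, and the specific commands are illustrative:

```markdown
# Project: blog

## Build and test
- `bazel build //...` builds everything; `bazel test //...` before pushing.
- `hugo server` for a local preview of the site.

## Conventions
- Commit messages follow the workspace-wide git commit skill.
- New pages go under `content/`; never edit the theme submodule directly.

## Deployment
- Pushes to main deploy to Firebase via the CI pipeline.
```

The point isn't the specific contents; it's that every project answers the same questions in the same place, so the agent (and future me) never has to guess.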

Similarly, every project uses the same core technology stack: Python for backends, Bazel for builds, Protocol Buffers for communication protocols (whether gRPC or Connect-RPC), and Supabase where a managed database is needed. This consistency means that patterns learned in one project transfer directly to another, and shared build rules or configurations don’t need to be adapted per-project.
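To make the "shared build rules" point concrete, here's a sketch of the Bazel pattern a project might repeat for its Protocol Buffer APIs. Target names are illustrative, and the exact `load` paths vary with ruleset versions, so treat this as the shape of the pattern rather than copy-paste config:

```python
# BUILD.bazel: one proto_library plus a Python binding, the same
# pattern in every project (illustrative; load paths depend on your
# rules_proto / rules_python versions).
load("@rules_proto//proto:defs.bzl", "proto_library")
load("@rules_python//python:proto.bzl", "py_proto_library")

proto_library(
    name = "api_proto",
    srcs = ["api.proto"],
)

py_proto_library(
    name = "api_py_proto",
    deps = [":api_proto"],
)
```

Because every project uses the identical pattern, a fix or upgrade to the rule versions happens once at the workspace level instead of five times.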

Was It Worth It?

Unequivocally, yes. The upfront cost of setting up the manifest, configuring the repo tool, and organizing the shared infrastructure was a one-time investment. The ongoing benefits - consistent tooling, shared agent skills, a single workspace to navigate, reproducible setup from a single command - compound over time. Every new project I add to the workspace immediately inherits all of the shared configuration, and I spend my time building things rather than setting things up.

If you’re managing more than a couple of personal projects and find yourself duplicating configuration across them, I’d strongly recommend considering a similar approach. Google’s repo tool is lightweight, well-documented, and does exactly one thing well: coordinating multiple Git repositories into a single workspace. The rest - the shared agent skills, the documentation conventions, the consistent tech stack - is just good engineering discipline applied at the workspace level.