ai / agents

An agent is a software program that runs tools in a loop to achieve a goal.

Tools are things like web search, reading files, and writing files. You state the goal; the agent decides which tools to use, observes the results, and decides what to do next, until the goal is achieved.

I run agents in Warp.

Lower friction for maintenance

Modern agents handle a lot of the mechanical work: adding features, refactoring, applying security patches, fixing bugs. They are fast, but they are not a substitute for design or judgment.

I pass new error backtraces to an agent, which diagnoses and fixes them. We used to deprioritize the long tail of issues; now we knock them out. Our Sentry backlog has been at zero for most of 2026.
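The backtrace-to-agent handoff can be sketched in a few lines. This is a hypothetical illustration, not the actual pipeline: `run_agent` stands in for however you invoke your agent (CLI, API, or otherwise), and the prompt wording is an assumption.

```python
def build_fix_prompt(backtrace: str) -> str:
    """Wrap a raw error backtrace in instructions for the agent."""
    return (
        "The following backtrace came from a production error.\n"
        "Diagnose the root cause, fix it, and add a regression test.\n\n"
        f"{backtrace}"
    )

def triage(backtraces: list[str], run_agent) -> None:
    # One agent run per distinct error; the agent does the diagnosis.
    for bt in backtraces:
        run_agent(build_fix_prompt(bt))
```

The point of the structure: triage is mechanical dispatch, so the long tail of issues costs almost nothing to attempt.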

Bugs and technical debt can be addressed faster than before. Maintenance is cheaper, so there's less excuse to let things rot, and refactors are less daunting. Architecture is still bounded by what the team understands, not by code-generation speed.

When the accidental work of building is cheaper, the build-vs-buy line moves.

Humans in the loop

Agents now generate most of my team's code. This shifts work toward review and supervision, with real risks (skill atrophy, loss of understanding) that I cover in code review.

We are often humans in the loop. We write the instructions, review every change, give feedback, and otherwise follow our traditional software development lifecycle: git branches, GitHub pull requests, CI checks, peer review from humans (and now agents), merge, deploy.

Humans on the loop

We are also humans on the loop: we design the loop itself. Tests, CI, server logs, error tracking: each is a feedback loop we tighten around the agent. Run the tests, run security scans, apply coding guidelines, fix what fails, run them again.

Our job is increasingly to design the environment itself. Tight loops catch a class of mechanical errors, but not design or specification errors; those still need human attention.
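The run-fix-rerun cycle above can be sketched as a simple bounded loop. `run_checks` and `ask_agent_to_fix` are hypothetical stand-ins for your actual test runner and agent invocation:

```python
def fix_loop(run_checks, ask_agent_to_fix, max_rounds: int = 5) -> bool:
    """Run checks, feed failures back to the agent, repeat.

    `run_checks` returns (ok, output); `ask_agent_to_fix` receives the
    failing output and edits the code. Both are hypothetical hooks.
    """
    for _ in range(max_rounds):
        ok, output = run_checks()
        if ok:
            return True           # all checks green
        ask_agent_to_fix(output)  # paste the failure back to the agent
    return False
```

The cap on rounds reflects the caveat: mechanical errors tend to converge under this loop, while design and specification errors don't, and a loop that never terminates is a signal a human should look.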

Specifically, with more code being generated, checks run more often, both locally and in CI, so the checks themselves must be fast. cibot helps us start CI runs quickly (a second or two after push), and its output is plain text that is easy to paste back to the agent when a check fails.

Similarly, we check AGENTS.md files into the repo to instruct agents. They are version-controlled and scoped to the directory they live in.

A root AGENTS.md covers architecture, conventions, and quick reference. Subdirectory files (db/AGENTS.md, test/AGENTS.md, ui/AGENTS.md) have domain-specific rules so the agent gets focused context for the area it's working in.
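The resulting layout might look like this (the file names come from above; the one-line summaries are illustrative):

```
repo/
├── AGENTS.md          # architecture, conventions, quick reference
├── db/
│   └── AGENTS.md      # schema and migration rules
├── test/
│   └── AGENTS.md      # how to run and write tests
└── ui/
    └── AGENTS.md      # component and styling conventions
```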

What goes in AGENTS.md

Patterns I've found useful:

Quick reference. Runnable commands for common tasks: build, test, lint, migrate.

Safety rails. Explicit rules like "never commit secrets."

Conditional checks. "Run X if Y files touched." This keeps feedback fast. No need to run the SQL linter if you only changed Go files.

Style guides. Commit message conventions, naming patterns. Link to examples: "look at git log for commit style."

Environment context. Available tools, port numbers, file paths. Context the agent needs to run commands correctly.
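The conditional-check pattern can be made concrete as a small routing function. A sketch, with illustrative check names and suffix mappings (not our actual configuration):

```python
from pathlib import PurePath

# Map file suffixes to the checks they should trigger (illustrative names).
CHECKS_BY_SUFFIX = {
    ".go": {"go test", "go vet"},
    ".sql": {"sql lint"},
    ".ts": {"tsc", "eslint"},
}

def checks_for(changed_files: list[str]) -> set[str]:
    """Return only the checks relevant to the files that were touched."""
    checks: set[str] = set()
    for f in changed_files:
        checks |= CHECKS_BY_SUFFIX.get(PurePath(f).suffix, set())
    return checks
```

A Go-only change triggers the Go checks and skips the SQL linter, which keeps the agent's feedback loop fast.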

What comes next?

Today, I often have multiple agents running simultaneously in separate Warp tabs, inside git worktrees.
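Git worktrees give each agent its own checkout of the same repository, so parallel agents don't trample each other's working trees. A throwaway demo (paths and branch names are examples):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A stand-in repository for the demo.
git init -q main-repo && cd main-repo
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "init"

# One worktree (and branch) per agent task; each gets its own directory.
git worktree add -q -b agent/task-a ../task-a
git worktree add -q -b agent/task-b ../task-b
git worktree list
```

Each worktree shares the same object store, so branches created by one agent are immediately visible to the others after a commit.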

I wonder whether the next step is orchestrating parallel agents in the cloud.

Project boards like Linear, Notion, or Trello already track what we need to build. Agent platforms like Warp Oz already run agents in the cloud.

Connect the two and the board becomes the control plane?
