go / sqlc
I use sqlc to generate type-safe Go from SQL queries.
Write SQL, run sqlc generate, get functions with proper types.
Setup
Create sqlc.yaml at the project root:
version: "2"
sql:
  - schema: "db/migrate/"
    queries: "db/queries/"
    engine: postgresql
    gen:
      go:
        package: pgdb
        out: pgdb/
        sql_package: pgx/v5
        emit_json_tags: true
        emit_empty_slices: true
        overrides:
          - db_type: "pg_catalog.timestamptz"
            nullable: false
            go_type: "time.Time"
          - db_type: "pg_catalog.timestamp"
            nullable: false
            go_type: "time.Time"
          - db_type: "pg_catalog.date"
            nullable: false
            go_type: "time.Time"
This tells sqlc to:
- Read schema from migration files in db/migrate/
- Read queries from db/queries/
- Generate Go code in the pgdb/ package
- Use pgx/v5 as the database driver
- Override Postgres timestamp types to plain time.Time (the default pgtype.Timestamptz is awkward at call sites)
Schema source: migrations or live DB
The schema: key has two practical shapes.
Migration directory (shown above): sqlc parses
db/migrate/*.sql in lexical order to build the schema. Simple,
no external dependency. Works when migrations are pure SQL the
parser can follow.
Single dumped schema + live DB (richer type inference):
schema: "db/schema_go.sql"
database:
  uri: "postgres://postgres:postgres@localhost:5432/myapp_development"
Pair with a regenerator script that runs pg_dump --schema-only
into db/schema_go.sql after each migration. The database.uri
gives sqlc a live connection to type-check queries against the
real database, which catches cases the parser alone misses
(materialized views, generated columns, extension-defined types).
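A minimal regenerator sketch; the script path, output path, and connection string are assumptions carried over from the config above:

```shell
#!/usr/bin/env bash
# bin/regen-schema (hypothetical path): re-dump the schema sqlc parses,
# so db/schema_go.sql tracks the live database after each migration.
DB_URI="${DATABASE_URL:-postgres://postgres:postgres@localhost:5432/myapp_development}"
OUT="db/schema_go.sql"

mkdir -p "$(dirname "$OUT")"
if command -v pg_dump >/dev/null 2>&1; then
  # --schema-only: DDL without data; --no-owner keeps the dump portable.
  pg_dump --schema-only --no-owner --no-privileges "$DB_URI" > "$OUT"
  echo "regenerated $OUT"
else
  echo "pg_dump not found; skipping"
fi
```

Run it from the same place that runs migrations so the dump never drifts from the applied schema.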
The migration-directory shape is simpler. The schema_go + live DB shape gives more inference at the cost of a regenerator script and a running dev DB.
Writing queries
Put queries in db/queries/*.sql with sqlc annotations:
-- name: GetUser :one
SELECT id, email, created_at
FROM users
WHERE id = $1;
-- name: ListUsers :many
SELECT id, email, created_at
FROM users
ORDER BY created_at DESC
LIMIT $1;
-- name: CreateUser :exec
INSERT INTO users (email)
VALUES ($1);
-- name: UpdateEmail :exec
UPDATE users
SET email = $2
WHERE id = $1;
The annotation format is -- name: FunctionName :return_type.
Return types:
- :one: returns a single row (error if not found)
- :many: returns a slice of rows
- :exec: returns only error (INSERT, UPDATE, DELETE)
- :execrows: returns the affected row count
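The :execrows variant follows the same annotation shape; a hypothetical example against the users table above:

```sql
-- name: DeleteOldUsers :execrows
DELETE FROM users
WHERE created_at < $1;
```

The generated function returns (int64, error), where the int64 is the number of rows the DELETE affected.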
Generated code
Run sqlc generate to create Go code:
type User struct {
	ID        int64     `json:"id"`
	Email     string    `json:"email"`
	CreatedAt time.Time `json:"created_at"`
}
func (q *Queries) GetUser(ctx context.Context, id int64) (User, error)
func (q *Queries) ListUsers(ctx context.Context, limit int32) ([]User, error)
func (q *Queries) CreateUser(ctx context.Context, email string) error
func (q *Queries) UpdateEmail(ctx context.Context, arg UpdateEmailParams) error
The struct fields and function signatures match your SQL exactly, and the compiler catches type mismatches between Go code and database schema.
This matters because database errors at runtime are expensive. A renamed column, changed type, or missing field becomes a compile error instead of a 3 AM page. The feedback loop tightens from "deploy and hope" to "won't build."
Wrapper functions
I wrap generated functions in pgdb/queries.go to give
application code a stable API and to clean up sqlc's generated
parameter shapes. Three rules:
1. Pass through Row return types. sqlc's generated row structs
(UserRow, OrderRow) carry meaningful field names; nothing to
clean up. Wrappers return them as-is.
2. Pass through Params with meaningful field names. When the
SQL has clear named columns and :exec/:one produces a Params
struct like InsertInteractionParams{UserID, Channel, ...}, pass
the struct through. Callers get self-documenting named fields:
func (db *DB) InsertInteraction(ctx context.Context, arg InsertInteractionParams) error {
	return db.Queries().InsertInteraction(ctx, arg)
}
3. Build Params internally when sqlc generates Column1/Column2/...
Postgres expressions like CASE, jsonb_build_object, and
split_part produce opaque column names in sqlc's Params. Don't
inflict those on business logic. Accept clean Go types and
build the struct inside the wrapper:
func (db *DB) UpsertContactByEmail(ctx context.Context, email, name string) error {
	return db.Queries().UpsertContactByEmail(ctx, UpsertContactByEmailParams{
		Column1: email,
		Column2: name,
	})
}
Application code calls db.UpsertContactByEmail(ctx, "a@b", "Ada"),
not db.Queries().UpsertContactByEmail(ctx,
UpsertContactByEmailParams{Column1: "a@b", Column2: "Ada"}).
The wrapper can also add logging, metrics, or pgtype conversions without changing callers.
Schema changes
When the schema changes, update two places:
- Migrations (db/migrate/): DDL changes
- Queries (db/queries/): if column names changed
Then run sqlc generate. The compiler shows what Go code needs updating.
This centralization pays off for refactors. Renaming a table:
-- db/migrate/20240101_rename_table.sql
ALTER TABLE user_accounts RENAME TO users;
Update table name in db/queries/*.sql, run sqlc generate.
Application code calling the wrapper functions doesn't change at all.
Without sqlc, you'd grep for raw SQL strings scattered across the codebase.
When to use raw SQL
sqlc handles most queries: joins, CTEs, window functions, aggregations, upserts. If Postgres can parse it, sqlc can too.
The exception is queries whose structure isn't known at compile time:
- Search with user-selected filter combinations
- Dynamic ORDER BY from query parameters
- One-off admin scripts not worth committing
For everything else, sqlc wins. Centralizing SQL in db/queries/
means schema refactors touch one directory instead of grepping the codebase.
Batch inserts
In 2021, Brandur wrote about going all in on sqlc
and noted that batch inserts required an unnest workaround
since sqlc couldn't handle variadic parameters.
sqlc now supports :copyfrom, which uses PostgreSQL's COPY protocol:
-- name: CreateUsers :copyfrom
INSERT INTO users (name, email)
VALUES ($1, $2);
The generated function accepts a slice:
func (q *Queries) CreateUsers(ctx context.Context, arg []CreateUsersParams) (int64, error)
COPY is significantly faster than individual INSERTs for bulk data. No workarounds needed.
Real Postgres parser
sqlc uses pg_query_go, which embeds the same parser that Postgres itself uses. This means complex queries with CTEs, window functions, and Postgres-specific syntax just work.
A hand-rolled parser would struggle with edge cases. Using the real thing means sqlc can parse anything Postgres can.
Codegen speed
sqlc generate runs fast. Under 100ms for a typical project.
The development loop stays tight:
$ time sqlc generate
real 0.05s
user 0.06s
sys 0.01s
Fast enough to run on every save if you want.
CI verification
Add a GitHub Action to verify generated code stays in sync:
- name: Verify sqlc
  run: |
    sqlc generate
    git diff --exit-code pgdb/
This fails if someone edits queries but forgets to regenerate. Catches schema drift before it hits main.
Comparison with ORMs
ORMs hide SQL behind method chains:
db.Where("email = ?", email).First(&user)
sqlc keeps SQL visible:
-- name: GetUserByEmail :one
SELECT * FROM users WHERE email = $1;
I prefer seeing the actual query. Easier to optimize, easier to run in psql, easier to explain to the database.
Project layout
Two top-level directories:
db/
├── migrate/ # DDL (CREATE TABLE, ALTER, etc.)
│ ├── 20260101000001_create_users.up.sql
│ └── 20260101000001_create_users.down.sql
├── queries/ # DML (SELECT, INSERT, UPDATE, DELETE)
│ └── users.sql
├── schema.sql # pg_dump output, checked in for review
└── schema_go.sql # (live-DB mode only) sqlc input regenerated post-migration
pgdb/
├── conn.go # Connection pool, DB struct
├── queries.go # Hand-written wrappers (stable API)
├── models.go # Generated row structs
└── users.sql.go # Generated query functions
sqlc.yaml
db/ holds SQL artifacts (migrations, queries, dumped schema) that humans write or pg_dump produces. pgdb/ holds Go: hand-written connection setup and wrappers, plus sqlc-generated query code and row models.
Migrations run once per deploy; queries run constantly. The
lifecycle split matches the directory split. The generated Go in
pgdb/ is checked in alongside the hand-written wrappers: same
package, same compile unit, same reviewer flow.