go / gzip
I use a simple middleware to gzip HTTP responses, with a `sync.Pool` to reuse gzip writers.
The pattern
```go
package gzip

import (
	"compress/gzip"
	"net/http"
	"strings"
	"sync"
)

// pool reuses gzip writers across requests; each writer owns large
// compression tables, so allocating one per request is wasteful.
var pool = sync.Pool{
	New: func() any {
		w, _ := gzip.NewWriterLevel(nil, gzip.BestSpeed)
		return w
	},
}

func Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Pass through untouched if the client can't accept gzip.
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		w.Header().Add("Vary", "Accept-Encoding")
		gz := pool.Get().(*gzip.Writer)
		gz.Reset(w) // point the pooled writer at this response
		defer func() {
			gz.Close() // flush buffered data and write the gzip footer
			pool.Put(gz)
		}()
		next.ServeHTTP(&gzipResponseWriter{gz: gz, ResponseWriter: w}, r)
	})
}

type gzipResponseWriter struct {
	gz *gzip.Writer
	http.ResponseWriter
}

func (w *gzipResponseWriter) Write(b []byte) (int, error) {
	return w.gz.Write(b)
}

// WriteHeader drops any Content-Length the handler set: it describes
// the uncompressed body, which no longer matches what goes on the wire.
func (w *gzipResponseWriter) WriteHeader(code int) {
	w.Header().Del("Content-Length")
	w.ResponseWriter.WriteHeader(code)
}
```
Key points:
- Pooled writers: `gzip.NewWriter` allocates ~256KB for compression tables. Reusing writers via `sync.Pool` reduces allocations significantly (see the benchmark sketch after this list). See pool for when you need handle-based removal instead.
- BestSpeed: Level 1 compression is usually the right trade-off for HTTP. CPU cost is low, and most compressible content (JSON, HTML) compresses well even at low levels.
- Vary header: Tells caches that the response varies by `Accept-Encoding`.
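To see what the pool buys you, a pair of benchmarks along these lines works; the benchmark names and payload here are illustrative, not part of the middleware package:

```go
package gzip

import (
	"compress/gzip"
	"io"
	"sync"
	"testing"
)

var payload = []byte(`{"status":"ok","items":[1,2,3,4,5]}`)

// BenchmarkFresh allocates a new writer per iteration, paying the
// compression-table allocation every time.
func BenchmarkFresh(b *testing.B) {
	for i := 0; i < b.N; i++ {
		w, _ := gzip.NewWriterLevel(io.Discard, gzip.BestSpeed)
		w.Write(payload)
		w.Close()
	}
}

// BenchmarkPooled reuses writers via sync.Pool and Reset, so
// steady-state iterations allocate almost nothing.
func BenchmarkPooled(b *testing.B) {
	p := sync.Pool{New: func() any {
		w, _ := gzip.NewWriterLevel(nil, gzip.BestSpeed)
		return w
	}}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		w := p.Get().(*gzip.Writer)
		w.Reset(io.Discard)
		w.Write(payload)
		w.Close()
		p.Put(w)
	}
}
```

Run with `go test -bench=. -benchmem` and compare the allocs/op columns.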
Usage
```go
mux := http.NewServeMux()
mux.HandleFunc("/api/data", handleData)
http.ListenAndServe(":8080", gzip.Middleware(mux))
```
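A quick way to verify the wiring is an `httptest`-based test; this sketch is mine, and it assumes the `Middleware` from above is in scope:

```go
package gzip

import (
	"compress/gzip"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

func TestMiddleware(t *testing.T) {
	h := Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, strings.Repeat("data ", 100))
	}))

	req := httptest.NewRequest(http.MethodGet, "/", nil)
	req.Header.Set("Accept-Encoding", "gzip")
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, req)

	if got := rec.Header().Get("Content-Encoding"); got != "gzip" {
		t.Fatalf("Content-Encoding = %q, want gzip", got)
	}
	// The body should round-trip through a gzip reader.
	zr, err := gzip.NewReader(rec.Body)
	if err != nil {
		t.Fatal(err)
	}
	body, _ := io.ReadAll(zr)
	if !strings.HasPrefix(string(body), "data ") {
		t.Fatalf("unexpected body: %q", body)
	}
}
```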
When to use
- JSON APIs with large responses
- HTML pages
- Any text-based content (CSS, JavaScript, XML)
When not to use
- Already-compressed content (images, video, zip files); a skip-by-extension sketch follows this list
- Very small responses (gzip overhead exceeds savings under ~150 bytes)
- WebSocket connections
- Server-sent events (streaming)
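For the already-compressed case, one option is to bypass the middleware by file extension. A sketch, with a hypothetical `skipCompression` helper and an illustrative, non-exhaustive extension list:

```go
package gzip

import (
	"net/http"
	"path"
)

// alreadyCompressed lists extensions whose payloads are typically
// pre-compressed; gzipping them costs CPU for little or no size win.
var alreadyCompressed = map[string]bool{
	".png": true, ".jpg": true, ".jpeg": true, ".gif": true,
	".webp": true, ".mp4": true, ".zip": true, ".gz": true,
	".woff2": true,
}

// skipCompression reports whether a request should bypass gzip.
func skipCompression(r *http.Request) bool {
	return alreadyCompressed[path.Ext(r.URL.Path)]
}
```

Inside `Middleware`, this would sit next to the `Accept-Encoding` test: `if skipCompression(r) { next.ServeHTTP(w, r); return }`.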
Streaming caveat
The gzip writer buffers data internally, so bytes a handler writes may not reach the client until enough input accumulates or `Close` runs at the end of the request; incremental updates stall in the meantime. For streaming responses you'd need to call `gz.Flush()` after each write and expose `http.Flusher` on the wrapper (a sketch follows), but every flush forces out a partial block and reduces the compression ratio.
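A streaming-friendly variant is a wrapper that forwards `Flush` through the gzip layer. This is a sketch, not part of the middleware above, and it assumes the underlying `ResponseWriter` implements `http.Flusher` (the default net/http one does):

```go
package gzip

import (
	"compress/gzip"
	"net/http"
)

// flushWriter is a streaming-friendly take on gzipResponseWriter:
// it implements http.Flusher so handlers can push each event out.
type flushWriter struct {
	gz *gzip.Writer
	http.ResponseWriter
}

func (w *flushWriter) Write(b []byte) (int, error) {
	return w.gz.Write(b)
}

// Flush drains the gzip writer's buffer into the response, then
// flushes the response itself, trading compression ratio for latency.
func (w *flushWriter) Flush() {
	w.gz.Flush() // emits a complete deflate block for buffered data
	if f, ok := w.ResponseWriter.(http.Flusher); ok {
		f.Flush()
	}
}
```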