Slow SQL queries are one of the most common causes of application performance problems. Before spending money on more hardware, try these 10 optimization techniques — on the queries that matter most, they can often deliver 10–1,000x improvements.

1. Use EXPLAIN ANALYZE Before Optimizing Anything (Foundation)

You cannot optimize what you haven't measured. EXPLAIN ANALYZE shows you the actual execution plan — what the database is really doing, not what you think it's doing.

-- Always start here. ANALYZE actually runs the query.
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.name, COUNT(o.id) AS order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.created_at > '2025-01-01'
GROUP BY u.id, u.name
ORDER BY order_count DESC
LIMIT 50;

-- Key things to look for:
--   "Seq Scan" on large tables    = missing index
--   "Nested Loop" with high rows  = bad join order or missing index
--   "Sort" with large buffers     = can you avoid the sort with an index?
--   Actual rows >> estimated rows = stale statistics (run ANALYZE)
2. Select Only Columns You Need (High Impact)

SELECT * is fine for ad-hoc queries but wasteful in production code. Fetching unnecessary columns burns network bandwidth and memory, and can prevent index-only scans.

-- Slow: fetches all columns, including blobs/JSON
SELECT * FROM users WHERE status = 'active';

-- Fast: only what the UI actually needs
SELECT id, name, email, created_at
FROM users
WHERE status = 'active';
3. Avoid Functions on Indexed Columns in WHERE Clauses (High Impact)

Wrapping a column in a function in a WHERE clause usually prevents index use — the database must evaluate the function for every row.

-- Slow: function on the column prevents index use
SELECT * FROM orders
WHERE DATE(created_at) = '2026-03-01';  -- full scan!

-- Fast: range query uses the index
SELECT * FROM orders
WHERE created_at >= '2026-03-01'
  AND created_at <  '2026-03-02';       -- index scan!

-- Slow: LOWER() on email prevents a plain index on email
SELECT * FROM users
WHERE LOWER(email) = 'user@example.com';

-- Fast: create a functional (expression) index instead
CREATE INDEX idx_users_email_lower ON users (LOWER(email));

SELECT * FROM users
WHERE LOWER(email) = 'user@example.com';  -- now uses the functional index!
4. Use CTEs for Readability, Not Performance (Context-Dependent)

In PostgreSQL 12+, CTEs are inlined by default (no longer materialized), so they don't automatically create performance boundaries. Use them for readability. For repeated use of a subquery result, MATERIALIZED CTEs can help.

-- PostgreSQL 12+: the CTE is inlined (the optimizer sees through it)
WITH active_users AS (
  SELECT id, name FROM users WHERE status = 'active'
)
SELECT au.name, COUNT(o.id)
FROM active_users au
LEFT JOIN orders o ON o.user_id = au.id
GROUP BY au.id, au.name;

-- Force materialization when you WANT the CTE result computed once:
WITH expensive_subquery AS MATERIALIZED (
  SELECT user_id, SUM(amount) AS total
  FROM transactions
  GROUP BY user_id
)
SELECT * FROM expensive_subquery WHERE total > 10000;
5. Replace Correlated Subqueries with JOINs (High Impact)

-- Slow: correlated subquery runs once for EVERY user row
SELECT u.name,
       (SELECT COUNT(*) FROM orders o WHERE o.user_id = u.id) AS order_count
FROM users u;
-- n subquery executions for n users

-- Fast: a single JOIN + aggregate
SELECT u.name, COUNT(o.id) AS order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
GROUP BY u.id, u.name;
-- one pass with an aggregate instead of n subqueries
6. Use EXISTS Instead of COUNT for Existence Checks (High Impact)

-- Slow: counts all matching rows even though we only need to know if one exists
SELECT * FROM users u
WHERE (SELECT COUNT(*) FROM orders o WHERE o.user_id = u.id) > 0;

-- Fast: stops at the first match
SELECT * FROM users u
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.user_id = u.id);
7. Paginate with Keyset Pagination, Not OFFSET (Critical for Large Tables)

-- Slow: OFFSET-based pagination (gets worse as OFFSET increases)
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 20 OFFSET 10000;
-- Must read 10,020 rows and discard the first 10,000!

-- Fast: keyset (cursor-based) pagination
-- First page:
SELECT * FROM posts
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Next page (using the last row's values as the cursor):
SELECT * FROM posts
WHERE (created_at, id) < ('2026-01-15 12:34:56', 98765)
ORDER BY created_at DESC, id DESC
LIMIT 20;
-- Always reads exactly 20 rows regardless of "page number"!
8. Batch Large INSERT/UPDATE Operations (High Impact for Writes)

-- Slow: individual inserts (each commits separately in autocommit mode)
INSERT INTO events (user_id, event_type) VALUES (1, 'click');
INSERT INTO events (user_id, event_type) VALUES (2, 'view');
-- ... 10,000 times

-- Fast: batch insert (PostgreSQL)
INSERT INTO events (user_id, event_type) VALUES
  (1, 'click'),
  (2, 'view'),
  (3, 'purchase')
  -- ... up to 1,000 rows per batch
ON CONFLICT (user_id, event_type) DO NOTHING;
-- Note: ON CONFLICT requires a unique constraint or index on (user_id, event_type)
9. Always Use Connection Pooling (Infrastructure)

Establishing a new database connection takes roughly 30–200 ms. For a web application handling 100 requests/second, that adds up to 3–20 seconds of cumulative connection overhead every second. Connection poolers (PgBouncer for PostgreSQL, ProxySQL for MySQL) maintain a pool of open connections and reuse them.

PgBouncer Configuration Tip

Set pool_mode = transaction for maximum connection efficiency. In transaction pooling mode, a connection is returned to the pool after each transaction — not each session. This allows 10,000 application connections to share 50 database connections.
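As a minimal sketch, a pgbouncer.ini implementing this setup might look like the following. The host, port, pool sizes, and file paths are illustrative assumptions, not recommendations — tune them to your workload.

```ini
[databases]
; route "appdb" through the pooler to the real PostgreSQL server (example values)
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction        ; return the connection after each transaction
default_pool_size = 50         ; real server connections per database/user pair
max_client_conn = 10000        ; application-side connections allowed
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
```

One caveat: in transaction pooling mode, session state (SET commands, session-level advisory locks, temporary tables) does not carry over between transactions, since each transaction may run on a different server connection.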

10. Keep Statistics Fresh with Regular ANALYZE (Maintenance)

PostgreSQL's query planner uses statistics to choose execution plans. Stale statistics lead to bad estimates and poor query plans. PostgreSQL's autovacuum runs ANALYZE automatically, but on rapidly-changing tables, you may need to tune autovacuum settings or run ANALYZE manually.

-- Force a statistics update on a specific table
ANALYZE VERBOSE orders;

-- Check when statistics were last updated
SELECT schemaname, tablename, last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;

-- Increase the statistics target for columns used in range queries
ALTER TABLE orders ALTER COLUMN created_at SET STATISTICS 500;
-- Default is 100; higher = better estimates
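If autovacuum isn't analyzing a hot table often enough, you can also lower its trigger threshold per table rather than running ANALYZE by hand. A sketch with illustrative values (the right thresholds depend on your write rate):

```sql
-- Analyze after ~2% of rows change, instead of the 10% default
ALTER TABLE orders SET (
    autovacuum_analyze_scale_factor = 0.02,  -- fraction of the table changed
    autovacuum_analyze_threshold    = 1000   -- plus a fixed row-count floor
);
```

Autovacuum triggers an ANALYZE when changed rows exceed threshold + scale_factor * row_count, so shrinking the scale factor makes statistics refresh more frequently on large, fast-changing tables.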